The Risk of Nuclear War Between India and Pakistan


The Pink Flamingo on the Subcontinent: Nuclear War Between India and Pakistan

Frank Hoffman recently coined the term “Pink Flamingo” to describe fully visible, predictable events that are ignored until they yield catastrophic results. In a recent article about the India/Pakistan situation, David Barno and Nora Bensahel argue that “the current nuclear standoff between India and Pakistan may be the most dangerous pink flamingo in today’s world.” With a shared border, increasing stockpiles of nuclear weapons, and a history of war, the possibility of nuclear conflict between India and Pakistan is very real.

In response to India’s ‘Cold Start’ military doctrine, a war option created to deter Islamabad from sponsoring attacks against New Delhi, Pakistan has refused to renounce the first use of nuclear weapons for defensive purposes. Fueled by fear and a struggle for power, Pakistan continues to accelerate its nuclear weapons program, aiming to have more than 200 nuclear warheads by 2020. The combination of rising political tensions and a rapidly growing nuclear stockpile is increasing the risk of nuclear escalation and destabilizing the region.

Barno and Bensahel believe that, while there may not be much the United States or the rest of the world can do to stem this conflict, it is nevertheless important that we begin to devote increasing time and energy to resolving it. Small steps may help build confidence between the two sides and open a dialogue about conflict resolution options for future crises. Furthermore, Barno and Bensahel believe the United States should sponsor tabletop exercises to explore how a nuclear conflict could escalate. With millions of lives on the line and the integrity of the environment at stake, it is rational to make every effort to avert the worst possible outcomes of this pink flamingo in front of us all.

Read the full article here.

Eric Schlosser on Nuclear Weapons

Hiroshima after the atomic bomb destroyed the city.

“…if we believe that the spread of nuclear weapons is inevitable, then in some way we are admitting to ourselves that the use of nuclear weapons is inevitable.” –Barack Obama, 2009

Today’s Nuclear Dilemma

In a paper just released by the Bulletin of the Atomic Scientists, “Today’s nuclear dilemma,” Eric Schlosser considers the possible consequences of a new nuclear arms race, which now appears to be underway.

After the Cold War ended, most of the world supported calls to reduce nuclear arsenals, and such a reduction did occur. As recently as 2009, the United States supported a further decrease in nuclear weapons. Today, however, foreign policy pressures are driving the nuclear-armed states to modernize their weapons. China, France, the United Kingdom, Israel, Pakistan, India, North Korea, Russia, and the United States are all modernizing or expanding their stockpiles of nuclear missiles and warheads. The United States alone plans to spend $1 trillion on upgrading its nuclear weapons over the next 30 years.

A major argument in favor of nuclear weapons is deterrence. Yet studies indicate that today’s tactical non-nuclear weapons are far more effective at achieving military goals than a nuclear strike would be. At the same time, some religious ideologies today favor mass destruction and martyrdom, which could increase the chances of a nuclear weapon’s use.

With so many weapons around the globe, an intentional nuclear attack may not be the greatest risk. Nuclear history is rife with accidents and close calls, which could have inadvertently killed millions of people or launched a global nuclear war. This threat could become an even greater risk as countries continue to add more advanced nuclear weapons to their arsenals.

As horrifying as a nuclear war might be, it could pale in comparison to the threat of a nuclear winter. According to Schlosser, “A relatively small-scale nuclear exchange between India and Pakistan, involving about a hundred weapons, could cause a global ‘nuclear winter’ and kill more than a billion people.”

Schlosser concludes his paper by noting that, though the fear of nuclear war is negligible today, “the danger is far greater” than it has been in the last 70 years.

Read Schlosser’s full article here.

Schlosser is an investigative journalist most famous for his book Fast Food Nation, but he has more recently been exploring the dangers of our current, worldwide nuclear weapons situation. In 2013, he published Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety, which was a Pulitzer Prize finalist.

What You Should Really Be Scared of on Halloween

It was four days before Halloween and the spirits were tense, both those above and those lurking in the waters below. There was agitation and busy preparation everywhere, and a sense of gloom and doom was weighing heavily on everyone’s minds. Deep in the waters the heat was rising, and the lost ones were finding no rest. Provoked by the world above, they were ready to unleash their curse. Had the time come for the world as they knew it to end?

It was indeed four days before Halloween: October 27, 1962. The spirits were tense, both those above, in the eleven US Navy destroyers and the aircraft carrier USS Randolph, and those lurking down in the waters below in the nuclear-armed Soviet submarine B-59. There was agitation and busy preparation everywhere due to the Cuban Missile Crisis, and a sense of gloom and doom was weighing heavily on everyone’s minds. Deep in the waters the heat rose past 45ºC (113ºF) as the submarine’s batteries were running out and the air-conditioning had stopped. On the verge of carbon dioxide poisoning, many crew members fainted. The crew was feeling lost and unsettled, as there had been no contact with Moscow for days and they didn’t know whether World War III had already begun. Then the Americans started dropping small depth charges at them. “We thought – that’s it – the end”, crewmember V.P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.”

The world above was blissfully unaware that Captain Savitski had decided to launch the nuclear torpedo. Valentin Grigorievich, the torpedo officer, exclaimed: “We will die, but we will sink them all – we will not disgrace our Navy!” In those brief moments it looked like the time may have come for the world as it was known to end, creating more ghosts than Halloween had ever known.

Luckily for us, the decision to launch had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no. The chilling thought of how close we humans were to destroying everything we cherish makes this the scariest Halloween story. Like a really good Halloween story, this one has not a happy ending, but a suspenseful one in which we’ve only barely avoided the curse, and the danger remains with us. And like the very best Halloween stories, this one grew ever scarier over the years, as scientists came to realize that a dark smoky Halloween cloud might enshroud Earth for ten straight Halloweens, causing a decade-long nuclear winter producing not millions but billions of ghosts.

Right now, we humans have over 15,000 nuclear weapons, most of which are over a hundred times more powerful than those that obliterated Hiroshima and Nagasaki. Many of these weapons are kept on hair-trigger alert, ready to launch within minutes, increasing the risk of World War III starting by accident just as on that fateful Halloween 53 years ago. As more Halloweens pass, we accumulate more harrowing close calls, more near-encounters with the ghosts.

This Halloween you might want to get spooked by watching an explosion, read about the blood-curdling nuclear war close calls we’ve had in the past decades, and then hopefully you will do something to keep the curse away, in the hope that one Halloween we’ll be able to say: nuclear war – nevermore.

This article can also be found on the Huffington Post and on MeiasMusings.

From Physics Today: China’s no-first-use nuclear policy

“China’s entire nuclear weapons posture, and its relatively small arsenal of about 250 warheads, is based on its pledge of no first use, according to Pan Zhenqiang, former director of strategic studies at China’s National Defense University.
Although that pledge is “extremely unlikely” to change, missile defense, space-based weapons, or other new technologies that threaten the credibility of China’s deterrent could lead to a policy shift and a buildup of its nuclear stockpile, said Pan, who is also a retired major general in the People’s Liberation Army.”

Read the full story here.

FHI: Putting Odds on Humanity’s Extinction

Putting Odds on Humanity’s Extinction
The Team Tasked With Predicting – and Preventing – Catastrophe
by Carinne Piekema
May 13, 2015


Not long ago, I drove off in my car to visit a friend in a rustic village in the English countryside. I didn’t exactly know where to go, but I figured it didn’t matter because I had my navigator at the ready. Unfortunately for me, as I got closer, the GPS signal became increasingly weak and eventually disappeared. I drove around aimlessly for a while without a paper map, cursing my dependence on modern technology.


It may seem gloomy to be faced with a graph that predicts the potential for extinction, but the FHI researchers believe it can stimulate people to start thinking—and take action.

But as technology advances over the coming years, the consequences of it failing could be far more troubling than getting lost. Those concerns keep the researchers at the Future of Humanity Institute (FHI) in Oxford occupied—and the stakes are high. In fact, visitors glancing at the white boards surrounding the FHI meeting area would be confronted by a graph estimating the likelihood that humanity dies out within the next 100 years. Members of the Institute have marked their personal predictions, ranging from fairly optimistic to seriously pessimistic, with some estimating as high as a 40% chance of extinction. It’s not just the FHI members: at a conference held in Oxford some years back, a group of risk researchers from across the globe put the likelihood of such an event at 19%. “This is obviously disturbing, but it still means that there would be 81% chance of it not happening,” says Professor Nick Bostrom, the Institute’s director.

That hope—and challenge—drove Bostrom to establish the FHI in 2005. The Institute is devoted precisely to considering the unintended risks our technological progress could pose to our existence. The scenarios are complex and require forays into a range of subjects including physics, biology, engineering, and philosophy. “Trying to put all of that together with a detailed attempt to understand the capabilities of what a more mature technology would unleash—and performing ethical analysis on that—seemed like a very useful thing to do,” says Bostrom.

Far from being bystanders in the face of apocalypse, the FHI researchers are working hard to find solutions.

In that view, Bostrom found an ally in British-born technology consultant and author James Martin. In 2004, Martin had donated approximately $90 million—one of the biggest single donations ever made to the University of Oxford—to set up the Oxford Martin School. The school’s founding aim was to address the biggest questions of the 21st century, and Bostrom’s vision certainly qualified. The FHI became part of the Oxford Martin School.

Before the FHI came into existence, not much had been done on an organised scale to consider where our rapid technological progress might lead us. Bostrom and his team had to cover a lot of ground. “Sometimes when you are in a field where there is as yet no scientific discipline, you are in a pre-paradigm phase: trying to work out what the right questions are and how you can break down big, confused problems into smaller sub-problems that you can then do actual research on,” says Bostrom.

Though the challenge might seem like a daunting task, researchers at the Institute have a host of strategies to choose from. “We have mathematicians, philosophers, and scientists working closely together,” says Bostrom. “Whereas a lot of scientists have kind of only one methodology they use, we find ourselves often forced to grasp around in the toolbox to see if there is some particular tool that is useful for the particular question we are interested in,” he adds. The diverse demands on their team enable the researchers to move beyond “armchair philosophising”—which they admit is still part of the process—and also incorporate mathematical modelling, statistics, history, and even engineering into their work.

“We can’t just muddle through and learn from experience and adapt. We have to anticipate and avoid existential risk. We only have one chance.”
– Nick Bostrom

Their multidisciplinary approach turns out to be incredibly powerful in the quest to identify the biggest threats to human civilisation. As Dr. Anders Sandberg, a computational neuroscientist and one of the senior researchers at the FHI, explains: “If you are, for instance, trying to understand what the economic effects of machine intelligence might be, you can analyse this using standard economics, philosophical arguments, and historical arguments. When they all point roughly in the same direction, we have reason to think that that is robust enough.”

The end of humanity?

Using these multidisciplinary methods, FHI researchers are finding that the biggest threats to humanity do not, as many might expect, come from disasters such as super volcanoes, devastating meteor collisions or even climate change. It’s much more likely that the end of humanity will follow as an unintended consequence of our pursuit of ever more advanced technologies. The more powerful technology gets, the more devastating it becomes if we lose control of it, especially if the technology can be weaponized. One specific area Bostrom says deserves more attention is that of artificial intelligence. We don’t know what will happen as we develop machine intelligence that rivals—and eventually surpasses—our own, but the impact will almost certainly be enormous. “You can think about how the rise of our species has impacted other species that existed before—like the Neanderthals—and you realise that intelligence is a very powerful thing,” cautions Bostrom. “Creating something that is more powerful than the human species just seems like the kind of thing to be careful about.”


Nick Bostrom, Future of Humanity Institute Director

Far from being bystanders in the face of apocalypse, the FHI researchers are working hard to find solutions. “With machine intelligence, for instance, we can do some of the foundational work now in order to reduce the amount of work that remains to be done after the particular architecture for the first AI comes into view,” says Bostrom. He adds that we can indirectly improve our chances by creating collective wisdom and global access to information to allow societies to more rapidly identify potentially harmful new technological advances. And we can do more: “There might be ways to enhance biological cognition with genetic engineering that could make it such that if AI is invented by the end of this century, it might be a different, more competent brand of humanity that invents it,” speculates Bostrom.

Perhaps one of the most important goals of risk researchers for the moment is to raise awareness and stop humanity from walking headlong into potentially devastating situations. And they are succeeding. Policy makers and governments around the globe are finally starting to listen and actively seek advice from researchers like those at the FHI. In 2014, for instance, FHI researchers Toby Ord and Nick Beckstead wrote a chapter for the Chief Scientific Adviser’s annual report setting out how the government in the United Kingdom should evaluate and deal with existential risks posed by future technology. But the FHI’s reach is not limited to the United Kingdom. Sandberg served on a World Economic Forum advisory board, giving guidance on the misuse of emerging technologies for a report, published this year, that concludes a decade of global risk research.

Despite the obvious importance of their work, the team are still largely dependent on private donations. Their multidisciplinary and necessarily speculative work does not fall easily into the traditional categories of priority funding areas drawn up by mainstream funding bodies. In presentations, Bostrom has been known to show a graph of academic interest in various topics, from dung beetles and Star Trek to zinc oxalate, all of which appear to receive far more attention than the FHI’s type of research into the continued existence of humanity. Bostrom laments this discrepancy between stakes and attention: “We can’t just muddle through and learn from experience and adapt. We have to anticipate and avoid existential risk. We only have one chance.”


“Creating something that is more powerful than the human species just seems like the kind of thing to be careful about.”

It may seem gloomy to be faced every day with a graph that predicts the potential disasters that could befall us over the coming century, but instead, the researchers at the FHI believe that such a simple visual aid can stimulate people to face up to the potentially negative consequences of technological advances.

Despite being concerned about potential pitfalls, the FHI researchers are quick to agree that technological progress has made our lives measurably better over the centuries, and neither Bostrom nor any of the other researchers suggest we should try to stop it. “We are getting a lot of good things here, and I don’t think I would be very happy living in the Middle Ages,” says Sandberg, who maintains an unflappable air of optimism. He’s confident that we can foresee and avoid catastrophe. “We’ve solved an awful lot of other hard problems in the past,” he says.

Technology is already embedded throughout our daily existence and its role will only increase in the coming years. But by helping us all face up to what this might mean, the FHI hopes to allow us not to be intimidated and instead take informed advantage of whatever advances come our way. How does Bostrom see the potential impact of their research? “If it becomes possible for humanity to be more reflective about where we are going and clear-sighted where there may be pitfalls,” he says, “then that could be the most cost-effective thing that has ever been done.”

GCRI: Aftermath

Aftermath
Finding practical paths to recovery after a worldwide catastrophe.
by Steven Ashley
March 13, 2015



Tony Barrett
Global Catastrophic Risk Institute

OK, we survived the cataclysm. Now what?

In recent years, warnings by top scientists and industrialists have energized research into the sort of civilization-threatening calamities that are typically the stuff of sci-fi and thriller novels: asteroid impacts, supervolcanoes, nuclear war, pandemics, bioterrorism, even the rise of a super-smart, but malevolent artificial intelligence.

But what comes afterward? What happens to the survivors? In particular, what will they eat? How will they stay warm and find electricity? How will they rebuild and recover?

These “aftermath” issues comprise some of the largest points of uncertainty regarding humanity’s gravest threats. As such, they constitute some of the principal research focuses of the Global Catastrophic Risk Institute (GCRI), a nonprofit think tank that Seth Baum and Tony Barrett founded in late 2011. Baum, a New York City-based engineer and geographer, is GCRI’s executive director. Barrett, who serves as its director of research, is a senior risk analyst at ABS Consulting in Washington, DC, which performs probabilistic risk assessment and other services.

Black Swan Events

At first glance, it may sound like GCRI is making an awful lot of fuss about dramatic worst-case scenarios that are unlikely to pan out any time soon. “In any given year, there’s only a small chance that one of these disasters will occur,” Baum concedes. But the longer we wait, he notes, the greater the chance that we will experience one of these “Black Swan events” (so called because before a black swan was spotted by an explorer in the seventeenth century, it was taken for granted that these birds did not exist). “We’re trying to instil a sense of urgency in governments and society in general that these risks need to be faced now to keep the world safe,” Baum says.
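Baum’s point is essentially arithmetic: a hazard that is unlikely in any single year becomes likely over enough years. As a rough illustration only (the 0.1% annual figure below is an assumed placeholder, not a GCRI or FHI estimate), the Python sketch computes the cumulative probability of at least one such event over different time horizons, assuming a constant annual probability and independence between years.

```python
# Rough illustration of how a small annual probability of catastrophe compounds over time.
# The 0.1% annual probability is a hypothetical placeholder, not a GCRI or FHI estimate.

def cumulative_risk(annual_probability: float, years: int) -> float:
    """Probability of at least one occurrence over `years`, assuming independent years."""
    return 1 - (1 - annual_probability) ** years

if __name__ == "__main__":
    p = 0.001  # assumed 0.1% chance per year
    for horizon in (10, 50, 100, 500):
        print(f"{horizon:>4} years: {cumulative_risk(p, horizon):.1%}")
```

Under that assumption, the chance of at least one catastrophe rises from about 1% over a decade to roughly 10% over a century, which is the sense in which waiting makes a “Black Swan” ever more likely.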

GCRI’s general mission is to find ways to mobilize the world’s thinkers to identify the really big risks facing the planet, how they might cooperate for optimal effect, and the best approaches to addressing the threats. The institute has no physical base, but it serves as a virtual hub, assembling “the best empirical data and the best expert judgment,” and rolling them into risk models that can help guide our actions, Barrett says. Researchers, brought together through GCRI, often collaborate remotely. Judging the real risks posed by these low-odds, high-consequence events is no simple task, he says: “In most cases, we are dealing with extremely sparse data sets about occurrences that seldom, if ever, happened before.”


Feeding Everyone No Matter What
Following a cataclysm that blocks out the sun, what will survivors eat?
Credit: J M Gehrke

Beyond ascertaining which global catastrophes are most likely to occur, GCRI seeks to learn how multiple events might interact. For instance, could a nuclear disaster lead to a change in climate that cuts food supplies while encouraging a pandemic caused by the loss of medical resources? “To best convey these all-too-real risks to various sectors of society, it’s not enough to merely characterize them,” Baum says. Tackling such multi-faceted scenarios requires an interdisciplinary approach that would enable GCRI experts to recognize potential shared mitigation strategies that could enhance the chances of recovery, he adds.

One of the more notable GCRI projects focuses on the aftermath of calamity. This analysis was conducted by research associate Dave Denkenberger, who is an energy efficiency engineer at Ecova, an energy and utility management firm in Durango, Colorado. Together with engineer Joshua M. Pearce, of Michigan Technological University in Houghton, he looked at a key issue: If one of these catastrophes does occur, how do we feed the survivors?

Worldwide, people currently eat about 1.5 billion tons of food a year. For a book published in 2014, Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, the pair researched alternative food sources that could be ramped up within five or fewer years following a disaster that involves a significant change in climate. In particular, the discussion looks at what could be done to feed the world should the climate suffer from an abrupt, single-decade drop in temperature of about 10°C that wipes out crops regionally, reducing food supplies by 10 per cent. This phenomenon has already occurred many times in the past.

Sun Block

Even more serious are scenarios that block the sun, which could cause a 10°C temperature drop globally within only a year or so. Such a situation could arise should smoke or dust enter the stratosphere from an atomic exchange that burns big cities (triggering a nuclear winter), an asteroid or comet impact, or a supervolcano eruption such as the one that may one day occur at Yellowstone National Park.

These risks need to be faced
now to keep the world safe.
– Seth Baum

Other similar, though probably less likely, scenarios, Denkenberger says, might derive from the spread of some crop-killing organism—a highly invasive superweed, a superbacterium that displaces beneficial bacteria, a virulent pathogenic bacterium, or a super pest (an insect). Any of these might happen naturally, but they could be even more serious should they result from a coordinated terrorist attack.

“Our approach is to look across disciplines to consider every food source that’s not dependent on the sun,” Denkenberger explains. The book considers various ways of converting vegetation and fossil fuels to edible food. The simplest potential solution may be to grow mushrooms on the dead trees, “but you could do much the same by using enzymes or bacteria to partially digest the dead plant fiber and then feed it to animals,” he adds. Ruminants such as cows, sheep, and goats (or, more likely, faster-reproducing animals like rats, chickens, or beetles) could do the honors.


Seth Baum
Global Catastrophic Risk Institute

A more exotic solution would be to use bacteria to digest natural gas into sugars, and then eat the bacteria. In fact, a Danish company called Unibio is making animal feed from commercially stranded methane now.

Meanwhile, the U.S. Department of Homeland Security is funding another GCRI project that assesses the risks posed by the arrival of new technologies in synthetic biology or advanced robotics that might be co-opted by terrorists or criminals for use as weapons. “We’re trying to produce forecasts that estimate when these technologies might become available to potential bad actors,” Barrett says.

Focusing on such worst-case scenarios could easily dampen the spirits of GCRI’s researchers. But far from fretting, Baum says that he came to the world of existential risk (or ‘x-risk’) from his interest in the ethics of utilitarianism, which emphasizes actions aimed at maximizing total benefit to people and other sentient beings while minimizing suffering. As an engineering grad student, Baum even had a blog on utilitarianism. “Other people on the blog pointed out how the ethical views I was promoting implied a focus on the big risks,” he recalls. “This logic checked out and I have been involved with x-risks ever since.”

Barrett takes a somewhat more jaundiced view of his chosen career: “Oh yeah, we’re lots of fun at dinner parties…”

Happy Petrov Day!

32 years ago today, Soviet army officer Stanislav Petrov refused to follow protocol and averted a nuclear war.

From 9/26 is Petrov Day:

“On September 26th, 1983, Lieutenant Colonel Stanislav Yevgrafovich Petrov was the officer on duty when the warning system reported a US missile launch. Petrov kept calm, suspecting a computer error.

Then the system reported another US missile launch.

And another, and another, and another.

What had actually happened, investigators later determined, was sunlight on high-altitude clouds aligning with the satellite view on a US missile base.

[…] The policy of the Soviet Union called for launch on warning. The Soviet Union’s land radar could not detect missiles over the horizon, and waiting for positive identification would limit the response time to minutes. Petrov’s report would be relayed to his military superiors, who would decide whether to start a nuclear war.

Petrov decided that, all else being equal, he would prefer not to destroy the world. He sent messages declaring the launch detection a false alarm, based solely on his personal belief that the US did not seem likely to start an attack using only five missiles.”

GCRI News Summaries

Here are the July and August global catastrophic risk news summaries, written by Robert de Neufville of the Global Catastrophic Risk Institute. The July summary covers the Iran deal, Russia’s new missile early warning system, dangers of AI, new Ebola cases, and more. The August summary covers the latest confrontation between North and South Korea, the world’s first low-enriched uranium storage bank, the “Islamic Declaration on Global Climate Change”, global food system vulnerabilities, and more.

Future of Life Institute Summer 2015 Newsletter

TOP DEVELOPMENTS

* $7M in AI research grants announced: We were delighted to announce the selection of 37 AI safety research teams, to which we plan to award a total of $7 million in funding. The grant program is funded by Elon Musk and the Open Philanthropy Project.

Max Tegmark was interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about our new AI safety research program.

* Open letter about autonomous weapons: FLI recently published an open letter advocating a global ban on offensive autonomous weapons development. Thousands of prominent scientists and concerned individuals are signatories, including Stephen Hawking, Elon Musk, the team at DeepMind, Yann LeCun (Director of AI Research, Facebook), Eric Horvitz (Managing Director, Microsoft Research), Noam Chomsky and Steve Wozniak.

Stuart Russell was interviewed about the letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video).

* Open letter about economic impacts of AI: Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders have launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

 

EVENTS

* ITIF AI policy panel: Stuart Russell and MIRI Executive Director Nate Soares participated in a panel discussion about the risks and policy implications of AI (video here). The panel was hosted by the Information Technology & Innovation Foundation (ITIF), a Washington-based think tank focusing on the intersection of public policy & emerging technology.

* IJCAI 15: Stuart Russell presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

* EA Global conferences: FLI co-founders Viktoriya Krakovna and Anthony Aguirre spoke at the Effective Altruism Global (EA Global) conference at Google headquarters in Mountain View, California. FLI co-founder Jaan Tallinn spoke at the EA Global Oxford conference on August 28-30.

* Stephen Hawking AMA: Professor Hawking is hosting an “Ask Me Anything” (AMA) conversation on Reddit. Users recently submitted questions here; his answers will follow in the near future.

 

OTHER UPDATES

* FLI anniversary video: FLI co-founder Meia Tegmark created an anniversary video highlighting our accomplishments from our first year.

* Future of AI FAQ: We’ve created a FAQ about the future of AI, which elaborates on the position expressed in our first open letter about AI development from January.

GCRI News Summary June 2015

Here is the June 2015 global catastrophic risk news summary, written by Robert de Neufville of the Global Catastrophic Risk Institute. The news summaries provide monthly overviews of global catastrophic risk developments around the world. This summary includes Pope Francis’s encyclical about the global environment, tensions between NATO and Russia, a joint NASA-NNSA program for asteroid and comet protection, and more.

Are we heading into a second Cold War?

US-Russia tensions are at their highest since the end of the Cold War, and some analysts are warning about the growing possibility of a nuclear war. Their estimates of risk are comparable to some estimates of background risks of accidental nuclear war.

Assorted Sunday Links #3

1. In the latest issue of Joint Force Quarterly, Randy Eshelman and Douglas Derrick call for the U.S. Department of Defense to conduct research on how “to temper goal-driven, autonomous agents with ethics.” They discuss AGI and superintelligence explicitly, citing Nick Bostrom, Eliezer Yudkowsky, and others. Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command, and Derrick is an Assistant Professor at the University of Nebraska at Omaha.

2. Seth Baum’s article ‘Winter-safe Deterrence: The Risk of Nuclear Winter and Its Challenge to Deterrence’ appears in the April issue of Contemporary Security Policy. “[T]his paper develops the concept of winter-safe deterrence, defined as military force capable of meeting the deterrence goals of today’s nuclear weapon states without risking catastrophic nuclear winter.”

3. James Barrat, author of Our Final Invention, posts a new piece on AI risk in the Huffington Post.

4. Robert de Neufville of the Global Catastrophic Risk Institute summarizes March’s developments in the world of catastrophic risks.

5. Take part in the Huffington Post’s vote on whether we should fear AI, where you can side with Musk and Hawking, Neil deGrasse Tyson, or one of FLI’s very own founders, Max Tegmark!

FLI launch event @ MIT

The Future of Technology: Benefits and Risks

FLI was officially launched on Saturday, May 24, 2014, at 7pm in MIT auditorium 10-250 – see the video, transcript, and photos below.

The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about what we can do now to improve the chances of reaping the benefits and avoiding the risks, moderated by Alan Alda and featuring George Church (synthetic biology), Ting Wu (personal genetics), Andrew McAfee (second machine age, economic bounty and disparity), Frank Wilczek (near-term AI and autonomous weapons) and Jaan Tallinn (long-term AI and singularity scenarios).

  • Alan Alda is an Oscar-nominated actor, writer, director, and science communicator, whose contributions range from M*A*S*H to Scientific American Frontiers.
  • George Church is a professor of genetics at Harvard Medical School, initiated the Personal Genome Project, and invented DNA array synthesizers.
  • Andrew McAfee is Associate Director of the MIT Center for Digital Business and author of the New York Times bestseller The Second Machine Age.
  • Jaan Tallinn is a founding engineer of Skype and philanthropically supports numerous research organizations aimed at reducing existential risk.
  • Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.
  • Ting Wu is a professor of Genetics at Harvard Medical School and Director of the Personal Genetics Education project.

 

Photos from the talk