Will We Use the New Nuclear Weapons?


A B61-12 gravity bomb just before it penetrates the ground in a test last year. Photo courtesy of the National Nuclear Security Administration.

In 2015, the Pentagon successfully tested the B61-12 nuclear gravity bomb as part of a $1 trillion effort to make the nuclear arsenal more accurate and lethal. The redesigned weapon is equipped with “dial-a-yield” technology, which allows the military to adjust its destructive force before release, across a yield range of 0.3 to 50 kilotons of TNT equivalent. Many government officials believe not only that rebuilding this bomb violates the Non-Proliferation Treaty, but also that it makes the U.S. more likely to use a nuclear weapon against targets.

The B61 was one of the primary thermonuclear weapons the U.S. built during the Cold War. At the time, the U.S. deployed both tactical (short-range) and strategic (long-range) American nuclear weapons in Europe to counter the Soviet threat. Tactical weapons are smaller, shorter-range systems, including high-caliber artillery, ground-to-ground missiles, combat support aircraft, and sea-based torpedoes, missiles, and anti-submarine weapons.

Modifications (or “mods”) of the B61 were designed for both strategic and tactical roles. For example, the B61-4 is a tactical mod with a low-yield range of 0.3 to 0.5 kilotons, while the strategic B61-7 carries yields ranging from 10 to 360 kilotons. The B61-11, the most recent of the strategic B61 mods, carries a single yield of 400 kilotons. It was designed in 1997 as a “bunker buster” — a nuclear weapon with limited earth-penetration capability, built to bore meters into the ground before exploding.

The B61-12 is all of these weapons in one. The yield range of this new nuclear weapon spans that of the B61-4 up to the low end of the B61-7. And while the B61-12 won’t be as powerful as the B61-11, it will feature comparable bunker-busting capabilities with greatly increased accuracy. Boeing developed four maneuverable fins for the new gravity bomb that work with a new electronics system to zero in on targets – even those deep underground, such as tunnels and weapons bunkers.

The image of a nuclear explosion that most often springs to mind is either the bombs dropped on Japan or the massive, 50-megaton Tsar Bomba that the Soviets tested in 1961. The bombs dropped on Hiroshima and Nagasaki were “only” 15 and 20 kilotons, respectively, and they killed over 250,000 people. The B61-12 is a completely different beast.

At 0.3 kilotons, the smallest yield of the B61-12 is 50 times smaller than the bomb dropped on Hiroshima, while its maximum yield of 50 kilotons is more than three times as large. This range of accurate, adjustable destructive power is unlike anything in the current arsenal. As Hans Kristensen, Director of the Nuclear Information Project at the Federation of American Scientists, notes in the National Interest, “We do not have a nuclear-guided bomb in our arsenal today… It [the B61-12] is a new weapon.”

As scholar Robert C. Aldridge is quoted in the same National Interest article, “Making a weapon twice as accurate has the same effect on lethality as making the warhead eight times as powerful. Phrased another way, making the missile twice as precise would only require one-eighth the explosive power to maintain the same lethality.”
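
Aldridge’s rule of thumb follows from the standard “counter-military potential” formula, in which lethality K scales as yield to the two-thirds power divided by the square of the miss distance (CEP, the radius within which half the weapons land). A quick sketch of the arithmetic:

```latex
K \propto \frac{Y^{2/3}}{\mathrm{CEP}^2}, \qquad
\frac{(8Y)^{2/3}}{\mathrm{CEP}^2} \;=\; \frac{4\,Y^{2/3}}{\mathrm{CEP}^2} \;=\; \frac{Y^{2/3}}{(\mathrm{CEP}/2)^2}
```

Since 8^(2/3) = 4, halving the miss distance buys exactly as much lethality as multiplying the yield by eight.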

This is not your grandparent’s nuclear bomb.

Of course, these new modifications won’t happen for free. The B61-12 is the first of five new nuclear warheads the government plans to build over the next three decades, at a total estimated cost (including delivery systems) of $1 trillion. Beyond the enormous price, the government justifies these smaller weapons as both safer and more usable.

According to Zachary Keck of the National Interest, “This combination of accuracy and low-yield make the B61-12 the most usable nuclear bomb in America’s arsenal.” Nuclear attack simulations show that if the U.S. were to counterstrike against China’s ICBM silos using a high-yield weapon, 3-4 million people could be killed. With a low-yield nuclear weapon, however, the death toll could drop to as few as 700.* With casualties that low, using a nuclear weapon becomes thinkable for the first time since the 1940s.

The government has scheduled the production of 400 to 500 B61-12s over the next 20 years. However, production has already been postponed once, from 2017 to 2020, and the price per bomb has doubled twice: from $2 million to $4 million, and again to $8 million. Further delays are anticipated, and costs are expected to rise again, to $10 million per bomb. According to Hans Kristensen and Robert Norris of the Bulletin of the Atomic Scientists, “The weapon’s overall price tag is expected to exceed $10 billion, with each B61-12 estimated to cost more than the value of its weight in gold.”

In 2009, President Obama pledged to pursue a “nuclear-free world” in Prague and was awarded the Nobel Peace Prize. Though the nuclear stockpile has since been reduced, rebuilding this warhead as the arsenal’s first guided nuclear gravity bomb makes the B61-12 effectively a new addition to the nuclear arsenal.

According to an article in the New York Times, James N. Miller, who helped establish this plan before leaving his post as under secretary of defense for policy in 2014, sees the more accurate weapon as a step in the right direction for deterrence. “Though not everyone agrees, I think it’s the right way to proceed,” Mr. Miller said, “minimizing civilian casualties near foreign military targets.” General James E. Cartwright, also quoted in the Times article, agreed that these mini-nuclear weapons are useful upgrades, but “what going smaller does,” he acknowledged, “is to make the weapon more thinkable.”

Ellen O. Tauscher, a former under secretary of state for arms control who was also quoted in the Times article, disagreed: “I think there’s a universal sense of frustration. Somebody has to get serious. We’re spending billions of dollars on a status quo that doesn’t make us any safer.”

 

*Editor’s note: It is unclear whether cutting casualties from millions to thousands would greatly reduce the adversary’s desire to counterattack, given historical reactions to thousands killed at Pearl Harbor or September 11.

 

 

The Trillion Dollar Question Obama Left Unanswered in Hiroshima

A soldier carries the briefcase containing nuclear weapons codes for U.S. President Barack Obama. REUTERS/Joshua Roberts

The following article, written by Max Tegmark and Frank Wilczek, was originally posted in The Conversation.

As it seeks to modernize its nuclear arsenal, the United States faces a big choice, one which Barack Obama failed to mention during his moving Hiroshima speech on May 27.

Should we spend a trillion dollars to replace each of our thousands of nuclear warheads with a more sophisticated substitute attached to a more lethal delivery system? Or should we keep only enough nuclear weapons needed for a devastatingly effective deterrence against any nuclear aggressor, investing the money saved into other means of making our nation more secure? The first option would allow us to initiate and wage nuclear war. The second would allow us to deter it. These are very different tasks.

As physicists who have studied nuclear reactions and cataclysmic explosions, we are acutely aware that nuclear weapons are so devastating that merely a hundred could annihilate the major population centers of any potential state enemy. That prospect is enough to deter any rational leadership, while no number of weapons could deter a mad one. Waging nuclear warfare, by contrast, could involve using vastly more warheads to strike diverse military and industrial targets.

So, is maintaining the ability to initiate nuclear war worth a trillion dollar investment?

The limits of nuclear blackmail

The U.S. and Russia currently have about 7,000 nukes each, largely for historical reasons. That’s over 13 times as many as held by the other seven nuclear powers combined. When the Soviet Union was perceived to be a threat to Europe with its numerically superior conventional forces, the U.S. stood ready to use nuclear weapons in response. We were prepared not only to deter the use of nuclear weapons by others, but also possibly to initiate nuclear warfare, and to use nuclear weapons in battle.

Now the tables have turned and NATO is the dominant nonnuclear force in Europe. But other arguments for maintaining the ability to initiate nuclear war remain, positing the utility of “compellence” (also known as “nuclear blackmail”): using the threat of nuclear attack to extract concessions. This strategy has been used on several occasions; for example, President Eisenhower threatened the use of nuclear weapons to compel negotiations ending the Korean War.

In today’s world, with nuclear technology more widely accessible, compellence is no longer straightforward. If a nonnuclear nation feels it is subject to nuclear bullying, it can counter by developing its own nuclear deterrent or enlisting nuclear allies. For example, U.S. nuclear threats inspired North Korea to mount its own nuclear program, which is, to say the least, not the result we were hoping for.

North Korean leader Kim Jong-Un looks at a rocket warhead tip after a simulated test of atmospheric reentry of a ballistic missile. Such missiles are often used to deliver nuclear weapons. North Korea’s Korean Central News Agency via REUTERS

Another development is the emergence of modern threats to the U.S. and its allies against which nuclear compellence is rather useless. For example, nuclear weapons didn’t help prevent 9/11. Nor did they help the U.S. in Iraq, Afghanistan, Syria or Libya – or in the battle against terrorist groups such as Al Qaeda or the Islamic State.

These considerations raise the question of whether we might actually improve our national security by forswearing compellence and committing to “No First Use” — that is, committing to using nuclear weapons only in response to their use by others. This deterrence-only approach is already the policy of two other major nuclear powers, China and India. It is a mission we could fulfill with a much smaller and cheaper arsenal, freeing up money for other investments in our national security. By easing fear of our intentions, this could also reduce further nuclear proliferation – so far, eight other nations have developed nukes since we bombed Hiroshima, and all except Russia have concluded that deterrence requires no more than a few hundred nuclear weapons. Indeed, hundreds of warheads may be a more convincing deterrent than thousands, because using the latter might be an act of self-destruction, triggering a decade-long global nuclear winter that would kill most Americans even if no nuclear explosions occurred on U.S. soil.

‘No First Use’ or ‘Pay to Play’?

Whatever one’s opinion on No First Use, it is a question with huge implications for military spending. Were the U.S. to pledge No First Use, we would have no reason to deploy more nuclear weapons than required for deterrence. We could save ourselves four million dollars per hour for the next 30 years, according to government estimates.
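
That hourly figure is simply the trillion-dollar modernization estimate spread evenly over three decades:

```latex
\$4\,\text{million/hr} \times 24\,\text{hr/day} \times 365\,\text{days/yr} \times 30\,\text{yr} \;\approx\; \$1.05\,\text{trillion}
```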

Nuclear weapons involve many complex issues. But one crucial question is beautifully simple: is our aim strictly to deter nuclear war, or should we invest the additional resources needed to maintain our ability to initiate it? No First Use, or Pay to Play?

We urge debate moderators, town hall participants and anyone else who gets the opportunity to ask our presidential candidates this crucial question. American voters deserve to know where their candidates stand.

What Google’s TPUs Mean for AI Timing and Safety

The following article was written by Jim Babcock and originally posted on Concept Space Cartography.

Last Wednesday, Google announced that AlphaGo was not powered by GPUs as everyone thought, but by Google’s own custom ASICs (application-specific integrated circuits), which they are calling “tensor processing units” (TPUs).

From Google’s announcement: “We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. … TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation.”

So, what does this mean for AGI timelines, and how does the existence of TPUs affect the outcome when AGI does come into existence?

The development of TPUs accelerated the timeline of AGI development. This is fairly straightforward: researchers can do more experiments with more computing power, and algorithms that previously stretched past the limits of available computing power became feasible.

If your estimate of when there will be human-comparable or superintelligent AGI was based on the very high rate of progress in the past year, then this should make you expect AGI to arrive later, because it explains some of that progress with a one-time gain that can’t be repeated. If your timeline estimate was based on extrapolating Moore’s Law or the rate of progress excluding the past year, then this should make you expect AGI to arrive sooner.

Some people model AGI development as a race between capability and control, and want us to know more about how to control AGIs before they’re created. Under a model of differential technological development, the creation of TPUs could be bad if it accelerates progress in AI capability more than it accelerates progress in AI safety. I have mixed feelings about differential technological development as applied to AGI; while the safety/control research has a long way to go, humanity faces a lot of serious problems which AGI could solve. In this particular case, however, I think the differential technological advancement model is wrong in an interesting way.

Take the perspective of a few years ago, before Google invested in developing ASICs. Switching from GPUs to more specialized processors looks pretty inevitable; it’s a question of when it will happen, not whether it will. Whenever the transition happens, it creates a discontinuous jump in capability; Google’s announcement calls it “roughly equivalent to fast-forwarding technology about seven years into the future”. This is slight hyperbole, but if you take it at face value, it raises an interesting question: which seven years do you want to fast-forward over? Suppose the transition were delayed for a very long time, until AGI of near-human or greater-than-human intelligence was created or was about to be created. Under those circumstances, introducing specialized processors into the mix would be much riskier than it is now. A discontinuous increase in computational power could mean that AGI capability skips discontinuously over the region that contains the best opportunities to study an AGI and work on its safety.

In diagram form:

[Diagram: AI capability over time with and without an early switch to ASICs; the discontinuous jump lands now rather than during the critical period just before AGI.]

I don’t know whether this is what Google was thinking when they decided to invest in TPUs. (It probably wasn’t; gaining a competitive advantage is reason enough.) But it does seem extremely important.

There are a few smaller strategic considerations that also point in the direction of TPUs being a good idea. GPU drivers are extremely complicated, and rumor has it that the code bases of both major GPU manufacturers are quite messy; starting from scratch in a context that doesn’t have to deal with games and legacy code can greatly improve reliability. When AGIs first come into existence, if they run on specialized hardware, then their developers won’t be able to increase their power as rapidly by renting more computers, because availability of the specialized hardware will be more limited. Similarly, an AGI acting autonomously won’t be able to increase its power that way either. And datacenters full of AI-specific chips make monitoring easier by concentrating AI development into predictable locations.

Overall, I’d say Google’s TPUs are a very positive development from a safety standpoint.

Of course, there’s still the question of how the heck they actually work, beyond the fact that they’re specialized processors that train neural nets quickly. In all likelihood, many of the gains come from tricks they haven’t talked about publicly, but we can make some reasonable inferences from what they have said.

Training a neural net involves doing a lot of arithmetic with a very regular structure, like multiplying large matrices and tensors together. Algorithms for training neural nets parallelize extremely well: if you double the number of processors working on a neural net, you can finish the same task in half the time, or make your neural net bigger. Prior to 2008 or so, machine learning was mostly done on general-purpose CPUs — i.e., Intel and AMD’s x86 and x86_64 chips. Around 2008, GPUs started becoming less specific to graphics and more general-purpose, and today nearly all machine learning is done with “general-purpose GPU” computing (GPGPU). GPUs can perform operations like tensor multiplication more than an order of magnitude faster than CPUs. Why’s that? Here’s a picture of an AMD Bulldozer CPU, a four-core x86_64 chip from late 2011, which illustrates the problem CPUs have.

[Image: die photo of a four-core AMD Bulldozer CPU, with the floating-point unit highlighted in red.]
Highlighted in red, I’ve marked the floating-point unit, which is the only part of the CPU that’s doing actual arithmetic when you use it to train a neural net. It is very small. This is typical of modern CPU architectures: the vast majority of the silicon and power is spent dealing with control flow, instruction decoding and scheduling, and the memory hierarchy. If we could somehow get rid of that overhead, we could fill the whole chip with floating-point units.

This is exactly what a GPU is. GPUs only work on computations with a highly regular structure: they can’t handle branches or other control flow, they have comparatively simple instruction sets (hidden behind a driver, so the instruction set doesn’t have to stay backwards compatible), and they have predictable memory-access patterns, reducing the need for cache. They spend most of their energy and chip area on arithmetic units that take in very wide vectors of numbers and operate on all of them at once.
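
To make the “regular structure” point concrete, here is a minimal NumPy sketch (the layer sizes are arbitrary, not anyone’s real model) of a dense neural-net layer: one matrix multiply plus an elementwise nonlinearity, with no data-dependent branching anywhere for the hardware to handle.

```python
# Minimal sketch: a dense neural-net layer is a matrix multiply plus an
# elementwise nonlinearity -- wide, regular, branch-free arithmetic.
# Layer sizes are arbitrary illustration.
import numpy as np

batch, n_in, n_out = 64, 1024, 1024
x = np.random.randn(batch, n_in).astype(np.float32)   # input activations
W = np.random.randn(n_in, n_out).astype(np.float32)   # weight matrix
b = np.zeros(n_out, dtype=np.float32)                 # bias vector

# One forward pass: ~67 million multiply-adds, the identical operation
# applied across the whole array -- exactly what wide vector units eat.
y = np.maximum(x @ W + b, 0.0)                        # matmul + ReLU
print(y.shape)                                        # (64, 1024)
```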

But GPUs still retain a lot of computational flexibility that training a neural net doesn’t need. In particular, they work on numbers with varying numbers of digits, which requires duplicating a lot of the arithmetic circuitry. While Google has published very little about their TPUs, one thing they did mention is reduced computational precision.

As a point of comparison, take Nvidia’s most recent GPU architecture, Pascal.

Each SM [streaming multiprocessor] in GP100 features 32 double precision (FP64) CUDA Cores, which is one-half the number of FP32 single precision CUDA Cores.

Using FP16 computation improves performance up to 2x compared to FP32 arithmetic, and similarly FP16 data transfers take less time than FP32 or FP64 transfers.

Format   Bits of exponent   Bits of precision
FP16            5                  10
FP32            8                  23
FP64           11                  52

So a significant fraction of an Nvidia GPU’s silicon is spent on FP64 cores, which are useless for deep learning. And when it does FP16 operations, it uses an FP32 core in a special mode, which is almost certainly less efficient than using two purpose-built FP16 cores. A TPU can also omit hardware for unused operations like trigonometric functions and probably, for that matter, division. Does this add up to a full order of magnitude? I’m not really sure. But I’d love to see Google publish more details of their TPUs, so that the whole AI research community can make the same switch.
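
As a rough CPU-side illustration of the precision trade-off (a NumPy sketch, not how GPU or TPU hardware actually implements reduced precision): casting to FP16 halves memory use, at the cost of rounding error that neural-net training largely tolerates.

```python
# Rough illustration of reduced precision: FP16 uses half the memory of
# FP32 and loses accuracy to its 10-bit mantissa. A CPU-side NumPy
# sketch, not how GPU/TPU hardware implements it.
import numpy as np

a32 = np.random.randn(512, 512).astype(np.float32)
b32 = np.random.randn(512, 512).astype(np.float32)
a16, b16 = a32.astype(np.float16), b32.astype(np.float16)

print(a32.nbytes // a16.nbytes)          # 2 -- half the memory traffic
err = np.abs(a16 @ b16 - a32 @ b32)      # rounding error from FP16
print(float(err.max()))                  # small next to entries of ~sqrt(512)
```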

Biodiversity Loss: An Existential Risk Comparable to Climate Change


Piping Plovers are one of many endangered bird species in North America. Photo courtesy Audrey DeRose-Wilson

The following article was originally posted in the Bulletin of the Atomic Scientists.

According to the Bulletin of Atomic Scientists, the two greatest existential threats to human civilization stem from climate change and nuclear weapons. Both pose clear and present dangers to the perpetuation of our species, and the increasingly dire climate situation and nuclear arsenal modernizations in the United States and Russia were the most significant reasons why the Bulletin decided to keep the Doomsday Clock set at three minutes before midnight earlier this year.

But there is another existential threat that the Bulletin overlooked in its Doomsday Clock announcement: biodiversity loss. This phenomenon is often identified as one of the many consequences of climate change, and this is of course correct. But biodiversity loss is also a contributing factor behind climate change. For example, deforestation in the Amazon rainforest and elsewhere reduces the amount of carbon dioxide removed from the atmosphere by plants, a natural process that mitigates the effects of climate change. So the causal relation between climate change and biodiversity loss is bidirectional.

Furthermore, there are myriad phenomena that are driving biodiversity loss in addition to climate change. Other causes include ecosystem fragmentation, invasive species, pollution, oxygen depletion caused by fertilizers running off into ponds and streams, overfishing, human overpopulation, and overconsumption. All of these phenomena have a direct impact on the health of the biosphere, and all would conceivably persist even if the problem of climate change were somehow immediately solved.

Such considerations warrant decoupling biodiversity loss from climate change, because the former has been consistently subsumed by the latter as a mere effect. Biodiversity loss is a distinct environmental crisis with its own unique syndrome of causes, consequences, and solutions—such as restoring habitats, creating protected areas (“biodiversity parks”), and practicing sustainable agriculture.


Deforestation of the Amazon rainforest decreases natural mitigation of CO2 and destroys the habitats of many endangered species.

The sixth extinction.

The repercussions of biodiversity loss are potentially as severe as those anticipated from climate change, or even a nuclear conflict. For example, according to a 2015 study published in Science Advances, the best available evidence reveals “an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way.” This conclusion holds, even on the most optimistic assumptions about the background rate of species losses and the current rate of vertebrate extinctions. The group classified as “vertebrates” includes mammals, birds, reptiles, fish, and all other creatures with a backbone.

The article argues that, using its conservative figures, the average loss of vertebrate species was 100 times higher in the past century relative to the background rate of extinction. (Other scientists have suggested that the current extinction rate could be as much as 10,000 times higher than normal.) As the authors write, “The evidence is incontrovertible that recent extinction rates are unprecedented in human history and highly unusual in Earth’s history.” Perhaps the term “Big Six” should enter the popular lexicon—to add the current extinction to the previous “Big Five,” the last of which wiped out the dinosaurs 66 million years ago.

But the concept of biodiversity encompasses more than just the total number of species on the planet. It also refers to the size of different populations of species. With respect to this phenomenon, multiple studies have confirmed that wild populations around the world are dwindling and disappearing at an alarming rate. For example, the 2010 Global Biodiversity Outlook report found that the population of wild vertebrates living in the tropics dropped by 59 percent between 1970 and 2006.

The report also found that the population of farmland birds in Europe has dropped by 50 percent since 1980; bird populations in the grasslands of North America declined by almost 40 percent between 1968 and 2003; and the population of birds in North American arid lands has fallen by almost 30 percent since the 1960s. Similarly, 42 percent of all amphibian species (a type of vertebrate that is sometimes called an “ecological indicator”) are undergoing population declines, and 23 percent of all plant species “are estimated to be threatened with extinction.” Other studies have found that some 20 percent of all reptile species, 48 percent of the world’s primates, and 50 percent of freshwater turtles are threatened. Underwater, about 10 percent of all coral reefs are now dead, and another 60 percent are in danger of dying.

Consistent with these data, the 2014 Living Planet Report shows that the global population of wild vertebrates dropped by 52 percent in only four decades—from 1970 to 2010. While biologists often avoid projecting historical trends into the future because of the complexity of ecological systems, it’s tempting to extrapolate this figure to, say, the year 2050, which is four decades from 2010. As it happens, a 2006 study published in Science does precisely this: It projects past trends of marine biodiversity loss into the 21st century, concluding that, unless significant changes are made to patterns of human activity, there will be virtually no more wild-caught seafood by 2048.


48% of the world’s primates are threatened with extinction.

Catastrophic consequences for civilization.

The consequences of this rapid pruning of the evolutionary tree of life extend beyond the obvious. There could be surprising effects of biodiversity loss that scientists are unable to fully anticipate in advance. For example, prior research has shown that localized ecosystems can undergo abrupt and irreversible shifts when they reach a tipping point. According to a 2012 paper published in Nature, there are reasons for thinking that we may be approaching a tipping point of this sort in the global ecosystem, beyond which the consequences could be catastrophic for civilization.

As the authors write, a planetary-scale transition could precipitate “substantial losses of ecosystem services required to sustain the human population.” An ecosystem service is any ecological process that benefits humanity, such as food production and crop pollination. If the global ecosystem were to cross a tipping point and substantial ecosystem services were lost, the results could be “widespread social unrest, economic instability, and loss of human life.” According to Missouri Botanical Garden ecologist Adam Smith, one of the paper’s co-authors, this could occur in a matter of decades—far more quickly than most of the expected consequences of climate change, yet equally destructive.

Biodiversity loss is a “threat multiplier” that, by pushing societies to the brink of collapse, will exacerbate existing conflicts and introduce entirely new struggles between state and non-state actors. Indeed, it could even fuel the rise of terrorism. (After all, climate change has been linked to the emergence of ISIS in Syria, and multiple high-ranking US officials, such as former US Defense Secretary Chuck Hagel and CIA director John Brennan, have affirmed that climate change and terrorism are connected.)

The reality is that we are entering the sixth mass extinction in the 3.8-billion-year history of life on Earth, and the impact of this event could be felt by civilization “in as little as three human lifetimes,” as the aforementioned 2012 Nature paper notes. Furthermore, the widespread decline of biological populations could plausibly initiate a dramatic transformation of the global ecosystem on an even faster timescale: perhaps a single human lifetime.

The unavoidable conclusion is that biodiversity loss constitutes an existential threat in its own right. As such, it ought to be considered alongside climate change and nuclear weapons as one of the most significant contemporary risks to human prosperity and survival.


Overfishing has left bluefin tuna an endangered species.

MIRI May 2016 Newsletter

Research updates

General updates

News and links

This newsletter was originally posted here.

CRISPR, Gene Drive Technology, and Hope for the Future

The following article was written by John Min and George Church.

Imagine, for a moment, a world where we are able to perform genetic engineering on such a large scale as to effectively engineer nature. In this world, parasites that cause only misery and suffering would not exist, only minimal pesticides and herbicides would be necessary in agriculture, and the environment would be better adapted to maximize positive interactions with all human activities while maintaining sustainability. While this may all sound like science fiction, the technology that might allow us to reach this utopia is very real, and if we develop it responsibly, this dream may well become reality.

‘Gene drive’ technology, or more specifically, CRISPR gene drives, has been heralded by the press as a potential solution for mosquito-borne diseases such as malaria, dengue, and most recently, Zika. In general, gene drive is a technology that allows scientists to bias the rate of inheritance of specific genes in wild populations of organisms. A gene is said to ‘drive’ when it is able to increase the frequency of its own inheritance above the expected probability of 50%. In doing so, gene drive systems exhibit an unprecedented ability to directly manipulate genes on a population-wide scale in nature.
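
To see why biased inheritance spreads a gene so quickly, here is a minimal sketch of the allele-frequency recursion for a homing drive, under idealized assumptions (random mating, no fitness cost; the release size and homing efficiency below are hypothetical):

```python
# Minimal sketch: deterministic spread of a gene drive allele under
# random mating. A heterozygote transmits the drive with probability
# (1 + c) / 2, where c is the homing (conversion) efficiency;
# c = 0 recovers ordinary Mendelian inheritance (50%).

def next_freq(q, c):
    """Drive-allele frequency in the next generation's gene pool."""
    # Drive homozygotes (q^2) always transmit the drive; heterozygotes
    # (2q(1-q)) transmit it with probability (1 + c) / 2.
    return q**2 + 2 * q * (1 - q) * (1 + c) / 2

q, c = 0.01, 0.9   # hypothetical: 1% initial release, 90% homing efficiency
for gen in range(12):
    print(f"generation {gen:2d}: drive allele at {q:.1%}")
    q = next_freq(q, c)
```

With c = 0 the frequency never moves; with c = 0.9, a 1% release climbs past 90% of the gene pool within about a dozen generations.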

The idea of using gene drive systems to propagate engineered genes in natural populations is not new. Indeed, a proposal to construct gene drives using naturally occurring homing nucleases — genes that can specifically cut DNA and insert extra copies of themselves — was published by Austin Burt in 2003 (Burt, 2003). In fact, the concept was discussed even before the earliest studies on naturally driving genetic elements — such as transposons, small sections of DNA that can insert extra copies of themselves — over half a century ago (Serebrovskii, 1940) (Vanderplank, 1944).

However, it is only with advances in modern genome-editing technology, such as CRISPR, that scientists are finally able to target gene drives to any desired location in the genome. Ever since the first CRISPR gene drive design was described in a 2014 publication by Kevin Esvelt and George Church (Esvelt, et al., 2014), man-made gene drive systems have been successfully tested in three separate species: yeast, fruit flies, and mosquitoes (DiCarlo, et al., 2015) (Gantz & Bier, 2015) (Gantz, et al., 2015).

The term ‘CRISPR’ stands for clustered regularly-interspaced short palindromic repeats, and describes an adaptive immune system against viral infections originally discovered in bacteria. Nucleases, or proteins that cut DNA, in the CRISPR family are generally able to cut DNA anywhere specified by a short stretch of RNA sequence, with high precision and accuracy.

The nuclease Cas9, in particular, has become a favorite among geneticists around the world since the publication of a series of high-impact journal articles in late 2012 and early 2013 (Jinek, et al., 2012) (Cong, et al., 2013) (Hwang, et al., 2013). Using Cas9, scientists are able to create ‘double-stranded breaks,’ or cuts in DNA, at nearly any location specified by a 20-nucleotide piece of RNA sequence.
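
As a toy illustration of how a 20-nucleotide guide specifies a cut site (the guide and “genome” strings below are invented; only the NGG protospacer-adjacent motif and the cut position about 3 base pairs from it reflect real Cas9 behavior):

```python
# Toy sketch: a Cas9 target site is a 20-nt match to the guide RNA
# sitting immediately upstream of an "NGG" PAM; the blunt cut lands
# ~3 bp upstream of the PAM. All sequences here are invented.
import re

guide = "ATGCTAGCTAGGCTAGCTAA"                     # hypothetical 20-nt guide
genome = "TT" + guide + "TGG" + "CCGTA"            # hypothetical target region

for m in re.finditer(guide + "[ACGT]GG", genome):  # protospacer + NGG PAM
    cut = m.start() + len(guide) - 3               # 3 bp upstream of the PAM
    print(f"target at position {m.start()}; predicted cut before index {cut}")
```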

After being cut, we can take advantage of natural DNA repair mechanisms to persuade cells to incorporate new genetic information into the break. This allows us to introduce new genes into an organism or even bar-code it at a genetic level. By using CRISPR technology, scientists are also able to insert synthesized gene drive systems into a host organism’s genome with the same high level of precision and reliability.

Potential applications for CRISPR gene drives are broad and numerous, as the technology is expected to work in any organism that reproduces sexually.

While popular media attention is chiefly focused on the elimination of mosquito-borne diseases, applications also exist in the fight against the rise of Lyme disease in the U.S. Beyond public health, gene drives can be used to eliminate invasive species from non-native habitats, such as mosquitoes in Hawaii, where many native bird species, especially the honeycreepers, are being driven to extinction by mosquito-borne avian malaria. The removal of mosquitoes from Hawaii would both save the bird populations and make Hawaii even more attractive as a tropical paradise for tourists.

With such rapid expansion of gene drive technology over the past year, it is only natural for there to be some concern and fear over attempting to genetically engineer nature at such a large scale. The only way to truly address these fears is to rigorously test the spreading properties of various gene drive designs within the safety of the laboratory — something that has also been in active development over the last year.

It is also important to remember that mankind has been actively engineering the world around us since the dawn of civilization, albeit with more primitive tools. Using a mixture of breeding and mechanical tools, we have transformed teosinte into modern corn, created countless breeds of dogs and cats, and turned vast stretches of everything from lush forests to deserts into modern farmland.

Yet, these amazing feats are not without consequence. Most products of our breeding techniques are unable to survive independently in nature, and countless species have become extinct as the result of our agricultural expansion and eco-engineering.

It is imperative that we approach gene drives differently, with increased consideration for the consequences of our actions on both the natural world and ourselves. Proponents of gene drive technology would like to initiate a new research paradigm centered on collective decision-making. As most members of the public will inevitably be affected by a gene drive release, it is only ethical to include the public throughout the research and decision-making process of gene drive development. Furthermore, by being transparent and inviting public criticism, researchers can crowd-source the “de-bugging” process, as well as minimize the risk of a gene drive release going awry.

We must come to terms with the reality that thousands of acres of habitat continue to be destroyed annually through a combination of chemical sprays, urban and agricultural expansion, and the introduction of invasive species, to name just a few causes. To improve upon this, I would like to echo the hope of my mentor, Kevin Esvelt, for “more science, and fewer bulldozers for environmental engineering,” in the hope of creating a more sustainable coexistence between man and nature. The recent advances in CRISPR gene drive technology represent an important step toward this hopeful future.

 

About the author: John Min is a Ph.D. candidate in the BBS program at Harvard Medical School, co-advised by Professor George Church and Professor Kevin Esvelt of the MIT Media Lab. He is currently working on creating a laboratory model for gene drive research.

 

References

Burt, A. (2003). Site-specific selfish genes as tools for the control and genetic engineering of natural populations. Proceedings of the Royal Society B, 270, 921-928.

Cong, L., Ann Ran, F., Cox, D., Lin, S., Barretto, R., Habib, N., . . . Zhang, F. (2013). Multiplex Genome Engineering Using CRISPR/Cas Systems. Science, 339, 819-823.

DiCarlo, J. E., Chavez, A., Dietz, S. L., Esvelt, K. M., & Church, G. M. (2015). RNA-guided gene drives can efficiently and reversibly bias inheritance in wild yeast. bioRxiv preprint, DOI:10.1101/013896.

Esvelt, K. M., Smidler, A. L., Catteruccia, F., & Church, G. M. (2014). Concerning RNA-guided gene drives for the alteration of wild populations. eLife, 1-21.

Gantz, V. M., & Bier, E. (2015). The mutagenic chain reaction: A method for converting heterozygous to homozygous mutations. Science, 348, 442-444.

Gantz, V., Jasinskiene, N., Tatarenkova, O., Fazekas, A., Macias, V. M., Bier, E., & James, A. A. (2015). Highly efficient Cas9-mediated gene drive for population modification of the malaria vector mosquito Anopheles stephensi. PNAS, 112(49).

Hwang, W. Y., Fu, Y., Reyon, D., Maeder, M. L., Tsai, S. Q., Sander, J. D., . . . Joung, J. (2013). Efficient genome editing in zebrafish using a CRISPR-Cas system. Nature Biotechnology, 31, 227-229.

Jinek, M., Chylinski, K., Fonfara, I., Hauer, M., Doudna, J. A., & Charpentier, E. (2012). A Programmable Dual-RNA-Guided DNA Endonuclease in Adaptive Bacterial Immunity. Science, 337, 816-821.

Serebrovskii, A. (1940). On the possibility of a new method for the control of insect pests. Zool.Zh.

Vanderplank, F. (1944). Experiments in crossbreeding tsetse flies, Glossina species. Nature, 144, 607-608.

 

 

Nuclear Weapons Are Scary — But We Can Do Something About Them

We’re ending our Huffington Post nuclear security series on a high note, with this article by Susi Snyder, explaining how people can take real action to decrease the threat of nuclear weapons.

Nuclear weapons are scary. The risk of use by accident, intention or terror. The climate consequences. The fact that they are designed and built to vaporize thousands of people with the push of a button. Scary. Fortunately, there is something we can do.

We know that nuclear weapons are scary, but we must be much louder in defining them as unacceptable, as illegitimate. By following the money, we can cut it off, and while this isn’t the only thing necessary to make nuclear weapons extinct, it will help.

That’s why we made Don’t Bank on the Bomb: because we want to do something about nuclear weapons. Investments are not neutral. Financing and investing are active choices, based on a clear assessment of a company and its plans. Any financial service delivered to a company by a financial institution or other investor gives tacit approval of its activities. To make nuclear weapons, you need money. Governments pay for a lot of things, but the companies most heavily involved in producing key components for nuclear warheads need additional investment — from banks, pension funds, and insurance companies — to sustain the working capital they need to maintain and modernize nuclear bombs.

We can steer these companies in a new direction. We can influence their decision-making by making sure our own investments don’t go anywhere near nuclear weapon producing companies. Choosing to avoid investment in controversial items or the companies that make them — from tobacco to nuclear arms — can result in changed policies and reduce the chances of humanitarian harm. Just as it wasn’t smokers who got smoking banned indoors across the planet, it’s not likely that the nuclear-armed countries will show the normative leadership necessary to cut off the flow of money to their nuclear bomb producers.

Public exclusions by investors have a stigmatizing effect on companies associated with illegitimate activities. There are many examples, from child labor to tobacco, where financial pressure had a profound impact on an industry. While it is unlikely that divestment by a single financial institution or government would be enough for a company to cancel its nuclear weapons-associated contracts, divestment by even a few institutions or countries for the same reason can affect a company’s strategic direction.

It’s worked before.

Divestment, and legal imperatives to divest, are powerful tools to compel change. The divestment efforts around South Africa in the 1980s are often cited as having had a profound impact on ending the apartheid regime. Global efforts to divest from tobacco stocks have not ended the production or sale of tobacco products, but they have compelled the producing companies to significantly modify their behavior — and they’ve helped to delegitimize smoking.

According to a 2013 report by Oxford University “in almost every divestment campaign … from adult services to Darfur, tobacco to Apartheid, divestment campaigns were effective in lobbying for restricting legislation affecting stigmatized firms.” The current global fossil fuel divestment campaign is mobilizing at all levels of society to stigmatize relationships with the fossil fuel industry resulting in divestment by institutions representing over $3.4 trillion in assets, and inspiring investment towards sustainable energy solutions.

U.S. company Lockheed Martin, which describes itself as the world’s largest arms manufacturer, announced that it had ceased its involvement with the production of rockets, missiles, and other delivery systems for cluster munitions, and stated that it will not accept such orders in the future. The arms manufacturer expressed the hope that its decision to cease activities in the area of cluster munitions would enable it to be included in investors’ portfolios again, thereby suggesting that pressure from financial institutions had something to do with its decision.

In Geneva right now, governments are meeting to discuss new legal measures to deal with the deadliest weapons. The majority of governments want action, and want it now. Discussions are taking place about negotiating new legal instruments — new international law about nuclear weapons. The majority of the world’s governments are calling for a comprehensive new treaty to outlaw nuclear weapons.

And they’re talking about divestment too. For example, the Ambassador from Jamaica said:

“A legally-binding instrument on prohibition of nuclear weapons would also serve as a catalyst for the elimination of such weapons. Indeed, it would encourage nuclear weapon states and nuclear umbrella states to stop relying on these types of weapons of mass destruction for their perceived security. Another notable impact of a global prohibition is that it would encourage financial institutions to divest their holdings in nuclear weapons companies.”

Governments are talking about divestment, and it’s something you can do too.

If you have a bank account, find out if your bank invests in nuclear weapon producing companies. You can either look at our website to see if your bank is listed, or ask your bank directly. We found that a few people asking the same bank about questionable investments were enough to get that bank to adopt a policy preventing it from having any relationship with nuclear weapon producing companies.

Anyone, no matter where they are, can have some influence over nuclear weapons decision-making. From heads of government to you, with your very own pocket money — everyone can do something about this issue. It doesn’t take a lot of time, or money, to make a difference, but it does take you. Together we can stop the scary threat of massive nuclear violence. If you want to help end the threat of nuclear weapons, then put your money where your mouth is, and Don’t Bank on the Bomb.

A Call for Russia and the U.S. to Cooperate in Protecting Against Nuclear Terrorism

The following post was written by Former Secretary of Defense William J. Perry and California Governor Jerry Brown as part of our Huffington Post series on nuclear security.

We believe that the likelihood of a nuclear catastrophe is greater today than it was during the Cold War. In the Cold War our nation lived with the danger of a nuclear war starting by accident or by miscalculation. Indeed, the U.S. had three false alarms during that period, any one of which might have resulted in a nuclear war, and several crises, including the Cuban Missile Crisis, which could have resulted in a nuclear war from a miscalculation on either side.

When the Cold War ended, these dangers receded, but with the increasing hostility between the U.S. and Russia today, they are returning, endangering both of our countries. In addition to those old dangers, two new dangers have arisen—nuclear terrorism, and the possibility of a regional nuclear war. Neither of those dangers existed during the Cold War, but both of them are very real today. In particular, the prospect of a nuclear terror attack looms over our major cities today.

Both Al Qaeda and ISIL have tried to acquire nuclear weapons, and no one should doubt that if they succeeded they would use them. Because the security around nuclear weapons is so high, it is unlikely (but not impossible) that they could buy or steal a nuclear bomb. But if they could obtain some tens of kilograms of highly enriched uranium (HEU), they could make their own improvised nuclear bomb. A significant quantity of HEU is held by civilian organizations, with substantially lower security than in military facilities. Recognizing this danger, President Obama initiated the Nuclear Security Summit meetings, whose objective was to eliminate fissile material not needed, and to provide enhanced security for the remainder.

That program, involving the leaders of over 50 nations that possessed fissile material, has been remarkably successful. In 1992, 52 countries had weapons-usable nuclear material; in 2010, the year of the first Summit, that number stood at 35. Just six years later, we are down to 24, as 11 more countries have eliminated their stocks of highly enriched uranium and plutonium. Additionally, security has been somewhat improved for the remaining material. But progress has stalled, much more remains to be done, and the danger of a terror group obtaining fissile material is still unacceptably high.

A quantity of HEU the size of a basketball would be sufficient to make an improvised nuclear bomb that had the explosive power of the Hiroshima bomb and was small enough to fit into a delivery van. Such a bomb, delivered by van (or fishing boat) and detonated in one of our cities, could essentially destroy that city, causing hundreds of thousands of casualties, as well as major social, political, and economic disruptions.

The danger of this threat is increasing every day; indeed, we believe that our population is living on borrowed time. If this catastrophe were allowed to happen, our society would never be the same. Our political system would respond with frenzied actions to ensure that it would not happen again, and we can assume that, in the panic and fear that would ensue, some of those actions would be profoundly unwise. How much better if we took preventive measures now—measures that increase our safety while still preserving our democracy and our way of life.

Two actions cry out to be taken. One is the international effort to improve the security of fissile material. The Nuclear Security Summits have made a very good start in that direction, but they are now over, and the pressure to reduce supplies of fissile material and improve security for the remainder predictably will falter. It is imperative to keep up this pressure, either through continuing summits, or through an institutional process that would be created by the nations that attended the summits and that would be managed by the Disarmament Agency of the UN, which would be given additional powers for that purpose. The U.S. should take the lead to ensure that a robust follow-on program is established.

Beyond that, and perhaps even more importantly, the U.S. and Russia, the nations that possess 90 percent of the world’s fissile material, should work closely together, including cooperation in intelligence about terror groups, to ensure that a terror group never obtains enough material to destroy one of their cities. After all, these two nations not only possess most of the fissile material, they are also the prime targets for a terror attack. Moscow and St. Petersburg are in as great a danger as Washington, D.C. and New York City.

Sen. Sam Nunn has proposed that Russia and the U.S. form a bilateral working group specifically charged with outlining concrete actions they could take that would greatly lessen the danger of Al Qaeda or ISIL obtaining enough fissile material to make improvised nuclear bombs. Whatever disagreements exist between our two countries—and they are real and serious—certainly we could agree to work together to protect our cities from destruction.

If our two countries were successful in cooperating in this important area, they might be encouraged to cooperate in other areas of mutual interest, and, in time, even begin to work to resolve other differences. The security of the whole world would be improved if they could do so.

Even with these efforts, we cannot be certain that a terror group could not obtain fissile material. But we can greatly lower that probability by taking responsible actions to protect our societies. If a nuclear bomb were to go off in one of our cities, we would move promptly to take actions that could prevent another attack. So why not do it now? Timely action can prevent the catastrophe from occurring, and can ensure that the preventive actions we take are thoughtful and do not make unnecessary infringements on our civil liberties.

What President Obama Should Say When He Goes to Hiroshima

The following post was written by David Wright and Lisbeth Gronlund as part of our Huffington Post series on nuclear security. Gronlund and Wright are both Senior Scientists and Co-Directors of the Global Security Program for the Union of Concerned Scientists.

Yesterday the White House announced that President Obama will visit Hiroshima — the first sitting president to do so — when he is in Japan later this month.

He will give a speech at the Hiroshima Peace Memorial Park, which commemorates the atomic bombing by the United States on August 6, 1945.

According to the president’s advisor Ben Rhodes, Obama’s remarks “will reaffirm America’s longstanding commitment — and the President’s personal commitment — to pursue the peace and security of a world without nuclear weapons. As the President has said, the United States has a special responsibility to continue to lead in pursuit of that objective as we are the only nation to have used a nuclear weapon.”

Obama gave his first foreign policy speech in Prague in April 2009, where he talked passionately about ending the threat posed by nuclear weapons. He committed the United States to reducing the role of nuclear weapons in its national security policy and putting an end to Cold War thinking.

A speech in Hiroshima would be a perfect bookend to his Prague speech — but only if he uses the occasion to announce concrete steps he will take before he leaves office. The president must do more than give another passionate speech about nuclear disarmament. The world needs — indeed, is desperate for — concrete action.

Here’s what Mr. Obama should say in Hiroshima:

 

***

 

Thank you for your warm welcome.

I have come to Hiroshima to do several things. First, to recognize those who suffered the humanitarian atrocities of World War II throughout the Pacific region.

Second, to give special recognition to the survivors of the atomic bombings of Hiroshima and Nagasaki — the hibakusha — who have worked tirelessly to make sure those bombings remain the only use of nuclear weapons.

And third, to announce three concrete steps I will take as U.S. commander-in-chief to reduce the risk that nuclear weapons will be used again. These are steps along the path I laid out in Prague in 2009.

First, the United States will cut the number of nuclear warheads deployed on long-range forces below the cap of 1,550 in the New START treaty, down to a level of 1,000. This is a level, based on the Pentagon’s analysis, that I have determined is adequate to maintain U.S. security regardless of what other countries may do.

Second, I am cutting back my administration’s trillion-dollar plan to build a new generation of nuclear warheads, missiles, bombers, and submarines. I am beginning by canceling plans for the new long-range nuclear cruise missile, which I believe is unneeded and destabilizing.

Third, I am taking a step to eliminate one of the ultimate absurdities of our world: The most likely way nuclear weapons would be used again may be by mistake.

How is this possible? Let me explain.

Today the United States and Russia each keep many hundreds of nuclear-armed missiles on prompt-launch status — so-called “hair-trigger alert” — so they can be launched in a matter of minutes in response to warning of an incoming nuclear attack. The warning would be based on data from satellites and ground-based radars, and would come from a computer.

This practice increases the chance of an accidental or unauthorized launch, or a deliberate launch in response to a false warning. U.S. and Russian presidents would have only about 10 minutes to decide whether the warning of an incoming attack was real or not, before giving the order to launch nuclear-armed missiles in retaliation — weapons that cannot be recalled after launch.

And history has shown again and again that the warning systems are fallible. Human and technical errors have led to mistakes that brought the world far too close to nuclear war. That is simply not acceptable. Accidents happen — they shouldn’t lead to nuclear war.

As a candidate and early in my presidency I recognized the danger and absurdity of this situation. I argued that “we should take our nuclear weapons off hair-trigger alert” because “keeping nuclear weapons ready to launch on a moment’s notice is a dangerous relic of the Cold War. Such policies increase the risk of catastrophic accidents or miscalculation.”

Former secretaries of defense as well as generals who oversaw the U.S. nuclear arsenal agree with me, as do science and faith leaders. In his recent book My Journey at the Nuclear Brink, former Secretary of Defense William Perry writes: “These stories of false alarms have focused a searing awareness of the immense peril we face when in mere minutes our leaders must make life-and-death decisions affecting the whole planet.”

General James Cartwright, former commander of U.S. nuclear forces, argues that cyber threats that did not exist during the Cold War may introduce new system vulnerabilities. A report he chaired last year states that “In some respects the situation was better during the Cold War than it is today. Vulnerability to cyber-attack … is a new wild card in the deck.”

And the absurdity may get even worse: China’s military is urging its government to put Chinese missiles on high alert for the first time. China would have to build a missile warning system, which would be as fallible as the U.S. and Russian ones. The United States should help Chinese leaders understand the danger and folly of such a step.

So today I am following through on my campaign pledge. I am announcing that the United States will take all of its land-based missiles off hair-trigger alert and will eliminate launch-on-warning options from its war plans.

These steps will make America — and the world — safer.

Let me end today as I did in Prague seven years ago: “Let us honor our past by reaching for a better future. Let us bridge our divisions, build upon our hopes, accept our responsibility to leave this world more prosperous and more peaceful than we found it. Together we can do it.”

Passing the Nuclear Baton

The following post was written by Joe Cirincione, President of the Ploughshares Fund, as part of our Huffington Post series on nuclear security.

President Obama entered office with a bold vision, determined to end the Cold War thinking that distorted our nuclear posture. He failed. He has a few more moves he could still make — particularly with his speech in Hiroshima later this month — but the next president will inherit a nuclear mess.

Obama had the right strategy. In his brilliant Prague speech, he identified our three greatest nuclear threats: nuclear terrorism, the spread of nuclear weapons to new states and the dangers from the world’s existing nuclear arsenals. He detailed plans to reduce and eventually eliminate all three, understanding correctly that they all must be tackled at once or progress would be impossible on any.

Progress Thwarting Nuclear Terror

Through his Nuclear Security Summits, Obama created an innovative new tool to raise the threat of nuclear terrorism to the highest level of global leadership and inspire scores of voluntary actions to reduce and secure nuclear materials. But it is, as The New York Times editorialized, “a job half done.” Instead of securing all the material in four years as originally promised, after eight years we still have 1,800 tons of bomb-usable material stored in 24 countries, some of it guarded less securely than we guard our library books.

If a terrorist group could get its hands on just 100 pounds of enriched uranium, it could make a bomb that could destroy a major city. In October of last year, an AP investigation revealed that nuclear smugglers were trying to sell weapons-grade uranium to ISIS. Smugglers were overheard on wiretaps saying that they wanted to find an ISIS buyer because, “they will bomb the Americans.”

More recently, we learned that extremists connected to the attacks in Paris and Belgium had also been videotaping a Belgian nuclear scientist, likely in the hopes of forcing “him to turn over radioactive material, possibly for use in a dirty bomb.”

Obama got us moving in the right direction, but when you are fleeing a forest fire, it is not just a question of direction but also of speed. Can we get to safety before catastrophe engulfs us?

Victory on Iran

His greatest success, by far, has been the agreement with seven nations that blocks Iran’s path to a bomb. This is huge. Only two nations in the world had nuclear programs that threatened to make them new nuclear-armed states: Iran and North Korea. North Korea has already crossed the nuclear Rubicon, and we must struggle to contain that threat and, if possible, push it back. Thanks to the Iran agreement, however, Iran can now be taken off the list.

For this achievement alone, Obama should get an “A” on his non-proliferation efforts. He is the first president in 24 years not to have a new nuclear nation emerge on his watch.

Bill Clinton saw India and Pakistan explode into the nuclear club in 1998. George W. Bush watched as North Korea set off its first nuclear test in 2006. Barack Obama scratched Iran from contention. Through negotiations, he reduced its program to a fraction of its original size and shrink-wrapped it within the toughest inspection regime ever negotiated. It didn’t cost us a dime. And nobody died. It is, by any measure, a major national security triumph.

Failure to Cut

Unfortunately Obama could not match these gains when it came to the dangers posed by the existing arsenals. The New START Treaty he negotiated with Russia kept alive the intricate inspection procedures previous presidents had created, so that each of the two nuclear superpowers could verify the step-by-step reduction process set in motion by Ronald Reagan and continued by every president since.

That’s where the good news ends. The treaty made only modest reductions to each nation’s nuclear arsenals. The United States and Russia account for almost 95 percent of all the nuclear weapons in the world, with about 7,000 each. The treaty was supposed to be a holding action, until the two could negotiate much deeper reductions. That step never came.

The “Three R’s” blocked the path: Republicans, Russians and Resistance.

First, the Republican Party leadership in Congress fought any attempt at reductions. Though many Republicans supported the treaty, including Colin Powell, George Shultz and Senator Richard Lugar, the entrenched leadership did not want to give a Democratic president a major victory, particularly in the election year of 2010. They politicized national security, putting the interest of the party over the interest of the nation. It took everything Obama had to finally get the treaty approved on the last day of the legislative session in December.

By then, the president’s staff had seen more arms control than they wanted, and the administration turned its attention to other pressing issues. Plans to “immediately and aggressively” pursue Senate approval of the nuclear test ban treaty were shelved and never reconsidered. The Republicans had won.

Worse, when Russia’s Vladimir Putin returned to power, Obama lost the negotiating partner he had had in President Medvedev. Putin linked any future negotiation to a host of other issues, including stopping the deployment of U.S. anti-missile systems in eastern Europe, cuts in conventional forces, and limits on the long-range conventional strike systems the Russians claimed threatened their strategic nuclear forces. Negotiations never resumed.

Finally, he faced resistance from the nuclear industrial complex, including many of those he himself appointed to implement his policies. Those with a vested financial, organizational or political interest in the thousands of contracts, factories, bases and positions within what is now euphemistically called our “nuclear enterprise” will do anything they can to preserve those dollars, contracts and positions. Many of his appointees merely paid lip service to the president’s agenda, paying more attention to the demands of the services, the contractors, or their own careers. Our nuclear policy is now determined less by military necessity or strategic doctrine than by self-interest.

It is difficult to find someone who supports keeping our obsolete Cold War arsenal who is not directly benefiting from, or beholden to, these weapons. In a very strange way, the machines we built are now controlling us.

The Fourth Threat

To make matters worse, under Obama’s watch these three “traditional” nuclear threats have been joined by a fourth: nuclear bankruptcy.

Obama pledged in Prague that as he reduced the role and number of nuclear weapons in U.S. policy, he would maintain a “safe, secure and reliable” arsenal. He increased spending on nuclear weapons, in part to make much needed repairs to a nuclear weapons complex neglected under the Bush administration and, in part, to win New START votes from key senators with nuclear bases and labs in their states.

As Obama’s policy faltered, the nuclear contracts soared. The Pentagon has embarked on the greatest nuclear weapons spending spree in U.S. history. Over the next 30 years, the Pentagon is planning to spend at least $1 trillion on new nuclear weapons. Every leg of the U.S. nuclear triad – our fleet of nuclear bombers, ballistic missile submarines, and ICBMs – will be completely replaced by a new generation of weapons that will last well into the latter part of this century. It is a new nuclear nightmare.

What Should the Next President Do?

While most of us have forgotten that nuclear weapons still exist today, former Secretary of Defense Bill Perry warns that we “are on the brink of a new nuclear arms race” with all the perils, near-misses and terrors you thought ended with the Cold War. The war is over; the weapons live on.

The next president cannot make the mistake of believing that incremental change in our nuclear policies will be enough to avoid disaster, or that appointing the same people who failed to make significant change under this administration will somehow help solve the challenges of the next four years. There is serious work to be done.

We need a new plan to accelerate the elimination of nuclear material. We need a new strategy for North Korea. But most of all, we need a new strategy for America. It starts with us. As long as we keep a stockpile of nuclear weapons far in excess of any conceivable need, how can we convince other nations to give up theirs?

The Joint Chiefs told President Obama that he could safely cut our existing nuclear arsenal and that we would have more than enough weapons to fulfill every military mission. It did not matter what the Russians did: whether they cut or did not cut, honored the New START Treaty or cheated, we could still cut down to about 1,000 to 1,100 strategic weapons and still handle every contingency.

The next president should do that. Not just because it is sound strategic policy, but because it is essential financial policy too. We are going broke. We do not have enough money to pay for all the weapons the Pentagon ordered when it projected ever-rising defense budgets. “There’s a reckoning coming here,” warns Rep. Adam Smith, the ranking Democrat on the House Armed Services Committee. “Do we really need the nuclear power to destroy the world six, seven times?”

The Defense Department admits it does not have the money to pay for these plans. Referring to the massive “bow wave” of spending set to peak in the 2020s and 2030s, Pentagon Comptroller Mike McCord said, “I don’t know of a good way for us to solve this issue.”

In one of the more cynical admissions by a trusted Obama advisor, Brian McKeon, the principal undersecretary of defense for policy, said last October, “We’re looking at that big [nuclear] bow wave and wondering how the heck we’re going to pay for it.” And we’re “probably thanking our stars we won’t be here to have to answer the question,” he added with a chuckle.

He may think it’s funny now, but the next president won’t when the stuff hits the fan in 2017. One quick example: The new nuclear submarines the Navy wants will devour half of the Navy’s shipbuilding budget in the next decade. According to the Congressional Research Service, to build 12 of these new subs, “the Navy would need to eliminate… a notional total of 32 other ships, including, notionally, 8 Virginia-class attack submarines, 8 destroyers, and 16 other combatant ships.”

These are ships we use every day around the world on real missions to deal with real threats. They launch strikes against ISIS, patrol the South China Sea, interdict pirates around the Horn of Africa, guarantee the safety of international trade lanes, and provide disaster relief around the globe.

The conventional navy’s mission is vital to international security and stability. It is foolish, and dangerous, to cut our conventional forces to pay for weapons built to fight a global thermonuclear war.

Bottom-Up

The next President could do a bottom-up review of our nuclear weapons needs. Don’t ask the Pentagon managers of these programs what they can cut. You know the answer you will get. Take a blank slate and design the force we really need.

Do we truly need to spend $30 billion on a new, stealthy nuclear cruise missile to put on the new nuclear-armed stealth bomber?

Do we truly need to keep 450 intercontinental ballistic missiles, whose chief value is to have the states that house them serve as targets to soak up so many of the enemy’s nuclear warheads that it would “complicate an adversary’s attack plans?” Do Montana and Wyoming and North Dakota really want to erect billboards welcoming visitors to “America’s Nuclear Sponge?”

If a President Trump, Clinton, or Sanders puts their trust in the existing bureaucracy, it will likely churn out the same Cold War nuclear gibberish. It will be up to outside experts, scientists, retired military and former diplomats to convince the new president to learn from Obama’s successes and his failures.

Obama had the right vision, the right strategy. He just didn’t have an operational plan to get it all done. It is not that hard, if you have the political will.

Over to you next POTUS.

EA Global X Boston Conference

The first EA Global X conference, EAGxBoston, is being held at MIT on April 30th, 12:30-6:30pm. Boston EAs have created an incredible lineup bringing together a who’s who of researchers, EAs, EA orgs, and up-and-coming orgs, including:
Dean Karlan (Yale, Innovations for Poverty Action)
Joshua Greene (Harvard, Moral Cognition Lab)
Rachel Glennerster (MIT, Poverty Action Lab)
Piali Mukhopadhyay (GiveDirectly)
Bruce Friedrich (The Good Food Institute, New Crop Capital)
Julia Wise (The Centre for Effective Altruism)
Ian Ross (Hampton Creek, Facebook)
Allison Smith (Animal Charity Evaluators)
Elizabeth Pearce (Boston University, Iodine Global Network)
Cher-Wen DeWitt (One Acre Fund)
Rhonda Zapatka (Trickle Up)
Elijah Goldberg (ImpactMatters)
Jason Ketola (MaxMind)
Lucia Sanchez (Innovations for Poverty Action)
Sharon Nunez Gough (Animal Equality)
Jon Camp (The Humane League)
Victoria Krakovna (Harvard, Future of Life Institute)
Eric Gastfriend (Harvard Business School EA, FLI, and formerly 80,000 Hours)
Dillon Bowen (Tufts EA, formerly 80,000 Hours and Giving What We Can)
Jason Trigg (earning-to-give at a startup and formerly as a hedge fund quant)
and more

The day will be filled with talks, panels, and networking opportunities. The program will address the major effective altruist cause areas of global health, poverty and development, animal agriculture, and global catastrophic risk, as well as movement concerns like conducting research, building community, and choosing a career direction. We will also be introducing some up-and-coming organizations.

FLI’s Victoria Krakovna, Richard Mallah, and Lucas Perry will participate in a panel on global catastrophic risks.

More information and registration can be found on the conference website:
http://eagxboston.com

All proceeds beyond our minimum costs will be donated to EA charities. If you need a tax receipt, please contact Randy Carlton <[masked]>. Please note that the early bird special ends on April 19th.

We have a limited amount of space, so if you’d like to join, please register today and share this invitation with interested friends via our Facebook group:
https://www.facebook.com/EAGxBoston/

Let’s get together and learn what we can do even better together!

EAGxBoston Team from MIT Sloan EA, MIT EA, Tufts EA, Harvard EA, HBS EA, Animal Charity Evaluators and The Commonwealth Market
http://eagxboston.com

Climate Change for the Impatient: A Nuclear Mini Ice Age

Everyone has heard about climate change caused by fossil fuels, which threatens to raise Earth’s average surface temperature by about 3-5°C by the year 2100 unless we take major steps toward mitigation. But there’s an eerie silence about the other major climate change threat, which might lower Earth’s average surface temperature by 7°C: a decade-long mini ice age caused by a U.S.-Russia nuclear war.

This is colder than the 5°C cooling we endured 20,000 years ago during the last ice age. The good news is that, according to state-of-the-art climate models by Alan Robock at Rutgers University, a nuclear mini ice age would be rather brief, with about half of the cooling gone after a decade. The bad news is that this is more than long enough for most people on Earth to starve to death if farming collapses. Robock’s all-out-war scenario shows cooling by about 20°C (36°F) in much of the core farming regions of the U.S., Europe, Russia and China (by as much as 35°C in parts of Russia) for the first two summers — you don’t need to be a master farmer to figure out what freezing summers would do to the food supply. It’s hard to predict exactly how devastating the ensuing famine would be if thousands of Earth’s largest cities were reduced to rubble and global infrastructure collapsed, but whatever small fraction of humanity didn’t succumb to starvation, hypothermia or epidemics would need to cope with roving, armed gangs desperate for food.

What a nuclear mini ice age might look like: average cooling (in °C) during the first two summers after a full-scale nuclear war between the U.S. and Russia (from Robock et al. 2007).

Unless we take stronger action than there’s current political will for, we’re likely to face both dramatic fossil-fuel climate change and dramatic nuclear climate change within a century, give or take. Since no politician in their right mind would launch global nuclear Armageddon on purpose, the nuclear war triggering the mini ice age will most likely start by accident or miscalculation. This has almost happened many times in the past, as this timeline shows. The annual probability of accidental nuclear war is poorly known, but it certainly isn’t zero: John F. Kennedy estimated the probability of the Cuban Missile Crisis escalating to war at between 33 and 50 percent. We know that near-misses keep occurring regularly, and there are probably many more close calls that haven’t been declassified. Simple math shows that even if the annual risk of global nuclear war is as low as 1 percent, we’ll probably have one within a century and almost certainly within a few hundred years. We just don’t know exactly when — it could be the day your great granddaughter gets married, or it could be next Tuesday when the Russian early-warning system suffers an unfortunate technical malfunction.
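Spelling out that simple math, under the deliberately crude illustrative assumption that the risk is constant and independent from year to year:

$$P(\text{at least one war in } n \text{ years}) = 1 - (1 - p)^n$$

$$p = 0.01:\qquad 1 - 0.99^{100} \approx 63\%, \qquad 1 - 0.99^{300} \approx 95\%$$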

The science behind nuclear climate change is rather simple. Smoke from small fires doesn’t rise as high as the highest rain clouds, so rain washes the smoke away before too long. In contrast, massive firestorms from burning nuked cities can rise into the upper stratosphere, many times higher than commercial jet planes fly. There are no clouds that high (have you ever seen a cloud above you when peering out of your plane window at cruising altitude?), and for this reason, the firestorm smoke never gets rained out. Moreover, this smoke absorbs sunlight and heats up, allowing it to get lofted to even higher altitudes where it might stay for approximately a decade, soon spreading around the globe to cover both the U.S. and Russia even if only one of the two got nuked. Since much of the solar heat absorbed by the smoke gets radiated back into space instead of warming the ground, nuclear winter ensues if there’s enough smoke.

Just as with fossil-fuel climate change, nuclear climate change involves interesting uncertainties that deserve further research. For example, how much smoke gets lofted to various altitudes in different scenarios? But whereas fossil-fuel climate research gets significant funding and press coverage, nuclear climate change gets neither. Part of the reason is probably that we can already start seeing the effects of fossil-fuel climate change, whereas nuclear climate change arrives like ketchup out of a shaken glass bottle: nothing, nothing, nothing, and then way more than you wanted.

We should start treating both kinds of climate change with comparable respect, since there’s currently no convincing scientific case for nuclear climate change being a negligible threat compared to fossil-fuel climate change: the size of the temperature change can be comparable, the time until it gets dramatic can be comparable, and the nuclear version might wreak even greater havoc than the fossil-fuel version by being less gradual and leaving society less time to adapt.

Nuclear climate change is better than its fossil-fuel cousin if you’re impatient and like instant gratification. To end on a positive note, nuclear climate change also has the advantage of being an easier problem to solve. Whereas halving carbon emissions is quite difficult to accomplish, halving expected nuclear climate change is as simple as halving nuclear arsenals. Many military analysts agree that 300-1000 nuclear weapons suffice for extremely effective deterrence, and all but two nuclear powers have chosen to stay below that range. Yet the U.S. and Russia are currently hoarding about 7,000 each, and appear to be starting a new nuclear arms race. The U.S. is planning to spend $4 million per hour for the next 30 years making its nukes more lethal, which even former Secretary of Defense William Perry argues will make us less safe. Trimming our nuclear excess could not only free up a trillion dollars for other spending, but would be a huge victory in our battle against climate change.
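As a rough back-of-the-envelope consistency check on those numbers (assuming, purely for illustration, that the spending runs around the clock):

$$\$4\ \text{million/hour} \times 24 \times 365 \times 30\ \text{years} \approx \$1.05\ \text{trillion},$$

which lines up with the roughly $1 trillion, 30-year figure cited above.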

This post is part of a series produced by The Huffington Post and Future of Life Institute (FLI) on nuclear security. It was originally posted here.

Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society

The following summary was written by Samantha Bates:

The second “Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society” workshop took place on February 19, 2016, at Harvard Law School.  Marin Soljačić, Max Tegmark, Bruce Schneier, and Jonathan Zittrain convened this informal workshop to discuss recent advancements in artificial intelligence research.  Participants represented a wide range of expertise and perspectives and discussed four main topics during the day-long event: the impact of artificial intelligence on labor and economics; algorithmic decision-making, particularly in law; autonomous weapons; and the risks of emergent human-level artificial intelligence. Each session opened with a brief overview of the existing literature related to the topic from a designated participant, followed by remarks from two or three provocateurs.  The session leader then moderated a discussion with the larger group. At the conclusion of each session, participants agreed upon a list of research questions that require further investigation by the community. A summary of each discussion as well as the group’s recommendations for additional areas of study are included below.

Session #1: Labor and Economics

Summary

The labor and economics session focused on two main topics: the impact of artificial intelligence on labor economics and the role of artificial intelligence in financial markets.  Participants observed that capital will eventually displace labor and that the gradual bifurcation of the market into the financial sector and the real economy will accelerate and exacerbate the impact of AI on capital concentration.  While the group seemed to be persuaded that current unemployment rates suggest that artificial intelligence is not yet replacing human workers to a great extent, participants discussed how artificial intelligence may cause greater social inequality as well as decrease the quality of work in the gig economy specifically.  The group also discussed how recent events in the financial market, such as the flash crashes in 2010 and 2014, exemplify how the interaction between technology, specifically high frequency trading, and human behavior can lead to greater instability in the market.[1]

On the impact of AI on labor economics, the group agreed that social inequality, particularly when it prevents social mobility, is concerning. For example, the growth of technology has expanded student access to information regarding college applications and scholarships, which increases competition among students.  Consequently, colleges and scholarship boards may overlook candidates who have the necessary skill set to succeed due to the growing size of the applicant pool.  While some social inequality is necessary to motivate students to work harder, increased competition may also take away opportunities from students who do not have the financial means to pay for college on their own.  Participants expressed concern that rather than creating opportunities for all users and helping society identify talent, technology may at times reinforce existing socioeconomic structures.

Similar to the college application example, the growth of technology has started to re-organize labor markets by creating more “talent markets” or jobs that require a very specific skill set.  Workers who possess these skills are highly sought after and well paid, but the current system overlooks many of these candidates, which can lead to fewer opportunities for individuals from the working classes to climb the social ladder.  Participants did not agree about the best way to solve the problem of social immobility.  Several participants emphasized the need for more online education and certification programs or a restructuring of the education system that would require workers to return to school periodically to learn about advancements in technology.  Others pointed out that we may be moving towards a society in which private companies start hoarding information, so online education programs and/or restructuring the education system will not make a difference when information is no longer widely available.

During the discussion about high-frequency trading and flash crashes, the group struggled to determine what we should be most concerned about.  We do not fully understand why flash crashes occur, but past examples suggest that they are caused by a “perfect financial storm,” in which technical flaws and malicious actors trying to manipulate the system converge at the same time.[2]  Several participants expressed surprise that flash crashes do not occur more frequently and that the market is able to recover so quickly.  In response, it was pointed out that while the market appears to recover quickly, the impact of these short drops in the market can prove devastating for individual investors.  For example, we heard about a librarian who lost half of her retirement savings in three minutes.

Ultimately, participants agreed that policy regulation will play a major role in managing the impact of artificial intelligence on labor and the financial market.  For example, it was suggested that time limits on high-frequency trading could prevent rapid-fire transactions and reduce the risk of causing a flash crash (see the sketch below).  Other areas identified for further research included whether there are “flash trips,” rather than crashes, that do not cause as much damage and are therefore overlooked, and whether we have a model to predict how frequently the system will trip.  Participants also agreed that the financial system is an important area for further research because it serves as a predictor of problems that may arise in other sectors.  There was also discussion of the pros and cons of different approaches to reducing income inequality through redistribution, ranging from guaranteed income and a negative income tax to the provision of subsidized education, healthcare, and other social services.  Lastly, participants agreed that we should define what the optimal education system would look like and how it could help solve the social immobility problem.
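To make the time-limit idea concrete, here is a minimal toy sketch in Python. It is not a calibrated market model, and every number in it (thresholds, order sizes, recovery rate) is invented for illustration: a pool of stop-loss algorithms sells into a falling price, each sale can trip the next threshold, and a minimum delay between executed orders gives resting buyers time to absorb the selling.

```python
import random

def lowest_price(min_ticks_between_orders: int, seed: int = 0) -> float:
    """Toy flash-crash cascade; returns the lowest price reached."""
    rng = random.Random(seed)
    # 200 stop-loss algorithms, each dumping stock once if the price
    # falls below its personal threshold.
    thresholds = [rng.uniform(60, 99) for _ in range(200)]
    price = 100.0 - 2.0                    # an initial shock starts the cascade
    low = price
    last_order = -min_ticks_between_orders
    for t in range(5000):
        triggered = [th for th in thresholds if th > price]
        if triggered and t - last_order >= min_ticks_between_orders:
            thresholds.remove(max(triggered))  # one stop-loss order fires,
            price -= 0.5                       # pushing the price down and
            last_order = t                     # possibly tripping the next
        price = min(price + 0.1, 100.0)        # buyers slowly absorb the sales
        low = min(low, price)
    return low

for delay in (1, 4, 16):
    print(f"min delay {delay:2d} ticks -> lowest price {lowest_price(delay):6.2f}")
```

In this toy setup, a one-tick delay lets the cascade burn through every stop-loss order and the price collapses, while longer delays let the background buying outpace the forced selling, so the drop stalls within a few points of the initial shock.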

Questions and Conclusions

  • Concluded that the financial system is a good measure of what the major AI challenges will be in other sectors, because it changes quickly and frequently and has a large impact on people’s lives.
  • The group discussed whether we should be concerned that flash crashes are largely unexplained. One research area to explore is whether there are flash trips rather than crashes, which do not cause as much damage and are therefore more difficult to detect.
    • Do we have any models to predict how frequently the system will trip aside from models of complex systems?
  • Since 2011, there has been no progress in mitigating the risks of high frequency trading. Maybe we should call attention to this problem, especially since we have no explanation for why flash crashes occur.
    • Why have we not solved the flash crash problem by setting time limitations that prevent rapid-fire transactions? This may be a policy issue rather than a tech problem.
    • Given that the algorithms are not perfect, should the financial market incorporate more human oversight?
  • What education model is the optimal preparation for a society in which AI is very advanced and improving rapidly and exponentially: is it liberal arts? Is our educational model (~20 years of education/specialization followed by ~40 years of specialized work) outdated? For example, would it be more reasonable for people to work for a few years, then go back to school for a year, then work for a few years, and so on?
  • Does AI affect “financial democracy”? Will everyone have access to investment and participate in the appreciation of financial assets (e.g., the stock or bond market), or will only those individuals and institutions with the most powerful financial AI capture all the profits, leaving retirement funds with no growth prospects?
  • Many developing countries relied on export-based growth models to drag themselves out of poverty, exploiting their cheap manufacturing. If robots and AI in developed countries replace the cheap manufacturing now done in developing countries, how will poor countries ever pull themselves out of poverty?
  • When and in what order should we expect various jobs to become automated, and how will this impact income inequality?
  • What policies could help increasingly automated societies flourish? How can AI-generated wealth best support underemployed populations?  What are the pros and cons of interventions such as educational reform, apprenticeships programs, labor-demanding infrastructure projects, income redistribution and the social safety net?
  • What are potential solutions to the social immobility problem?
    • Can we design an education system that solves social immobility problems? What is the optimal education model?  Is education the best way to solve this problem?
    • Maybe we should investigate where we should invest our greatest resources. Where should we direct our brightest minds and greatest efforts?
    • How do we encourage the private sector to share knowledge rather than holding onto trade secrets?

Session #2: Algorithms and law

Summary

The discussion mainly focused on the use of algorithms in the criminal justice system.  More specifically, participants considered the use of algorithms to determine parole for individual prisoners, predict the location where the greatest number of crimes will be committed and which individuals are most likely to commit a crime or become a victim of a crime, as well as decide bail for prisoners.

Algorithmic bias was one major concern expressed by participants.  Bias is caused by the use of certain data types such as race, criminal history, gender, and socioeconomic status, to inform an algorithmic output.  However, excluding these data types can make the bias problem worse because it means that outputs are determined by less data.  Additionally, as certain data types are linked and can be used to infer other characteristics about an individual, eliminating specific data types from the dataset may not be sufficient to overcome algorithmic bias. Eliminating data types, criminal history in particular, may also counteract some of the basic aims of the criminal justice system, such as discouraging offenders from committing additional crimes in the future.  Judges consider whether prisoners are repeat offenders when determining sentences.  How should the algorithm simulate this aspect of judicial decision-making if criminal history is eliminated from the dataset?  Several members of the group pointed out that even if we successfully created an unbiased algorithm, we would need to require the entire justice system to use the algorithm in order to make the system fair.  Without universal adoption, certain communities may be treated less fairly than others.  For example, what if the wealthier communities refused to use the algorithm and only disadvantaged groups were evaluated by the algorithm?  Other participants countered that human judges can be unfair and biased, so an algorithm that at least ensures that judicial opinions are consistent may benefit everyone.  An alternative solution suggested was to introduce the algorithm as an assist tool for judges, similar to a pilot assist system, to act as a check or red flag on judicial outcomes.
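As a purely synthetic illustration of the linked-data-types point above (every variable, rate, and threshold below is invented; this mirrors no real system), here is a sketch in which a risk score that never sees the protected attribute still flags one group far more often, because prior arrests act as a proxy for neighborhood, which in turn correlates with group membership:

```python
import random

rng = random.Random(0)

# Synthetic population: `group` is the protected attribute. Neighborhood
# correlates strongly with group, and residents of neighborhood "A"
# accumulate more recorded prior arrests (reflecting where enforcement
# was historically concentrated), not different underlying behavior.
people = []
for _ in range(10_000):
    group = rng.random() < 0.5
    in_a = group if rng.random() < 0.9 else not group   # 90% correlation
    neighborhood = "A" if in_a else "B"
    prior_arrests = rng.randint(0, 4) + (2 if neighborhood == "A" else 0)
    people.append((group, neighborhood, prior_arrests))

# A "blind" risk score: it never sees group or neighborhood, only the
# prior-arrest count, yet that count already encodes the neighborhood.
def risk_score(prior_arrests: int) -> int:
    return prior_arrests

THRESHOLD = 4
for g in (True, False):
    members = [p for p in people if p[0] == g]
    flagged = sum(1 for p in members if risk_score(p[2]) >= THRESHOLD)
    print(f"group={g}: flagged {flagged}/{len(members)} ({flagged/len(members):.0%})")
```

In expectation, the blind score flags roughly 56 percent of one group and 24 percent of the other, even though it never touches the protected attribute, which is the sense in which deleting a data type from the dataset does not delete the bias.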

Another set of concerns expressed by participants involved human perceptions of justice, accountability, and legitimacy.  The conversation touched on the human need for retribution and for feeling that justice is served.  Similarly, the group talked about ways in which the use of algorithms to make or assist with judicial decisions may shift accountability away from the judge. If the judge does not have any other source of information with which to challenge the output of the algorithm, he or she may rely on the algorithm to make the decision.  Furthermore, if the algorithmic output carries an official seal of approval, the judge will not as readily be held accountable for his or her decisions. Another concern was the possibility that, by trusting the technology to determine the correct outcome of a given case, the judge will become less personally invested in the court system, which will ultimately undermine the legitimacy of the courts.

The other major topic discussed was the use of algorithms to inform crime prevention programs. The Chicago Police Department uses algorithms to identify individuals likely to commit a crime or become a victim of a crime and proactively offers these individuals support from social services.[3]  A similar program in Richmond, CA provides monetary incentives to encourage good behavior among troubled youth in the community.[4]  Unlike the Chicago Police Department, this program relies on human research rather than an algorithm to identify individuals at the greatest risk.  The group discussed how predictive algorithms in this case could automate the identification process.  However, the group also considered whether the very premise of these programs is fair as, in both examples, the state offers help only to a specific group of people.  Again, bias might influence which individuals are targeted by these programs.

The group concluded that algorithms can be most useful if built to leverage human abilities and act as a “tool” rather than a “friend.” More specifically, participants agreed that there needs to be greater transparency about how a given algorithm works and what types of data are used by the algorithm to make determinations. Aside from the legitimacy issue and the possibility that an assist program would substitute for human decisions, participants seemed to feel more comfortable with judges using algorithms as an assist tool if there is transparency about how the process worked and if the algorithm is required to explain its reasoning.  Furthermore, several participants made the point that transparency about the decision-making process may discourage future crime.

Questions and Conclusions

  • The group concluded that algorithms can be most useful if built to leverage human abilities and act as a “tool” rather than a “friend.”
    • Participants seemed to feel more comfortable with judges using algorithms as an assist tool if there is transparency about how the process worked and if the algorithm is required to explain its reasoning.
    • How do we make AI systems more transparent?
  • What data should we use to design algorithms? How can we eliminate bias in algorithms without compromising the effectiveness of the technology?
  • How can we design algorithms that incorporate human elements of decision-making such as accountability and a sense of justice?
  • Should we invest in designing algorithms that evaluate evidence? How do we apply the evidence data available to a given case?

Session #3: Autonomous Weapons

Summary

The group defined AWS as “a weapon system that is capable of independently selecting and engaging targets.”  However, determining which weapons should be considered autonomous led to some disagreement within the group, especially since the military currently uses weapons that some would have considered autonomous in the recent past but no longer do (while drones do not select and engage targets, aspects of drones, as well as certain types of missiles such as heat-seeking missiles, are autonomous).

There was discussion about how best to regulate AWS or whether we should ban the development and use of this technology in the United States or internationally.  The overarching question that emerged was whether we are concerned about AWS because they are “too smart” and we worry about their capabilities in the future or if we are concerned because the technology is “too stupid” to comply with the laws of war.  For example, some experts in the field have questioned whether existing legal requirements that outline the process for determining the proportionality and necessity of an attack can be automated.  A poll of the room demonstrated that while almost all participants wanted to encourage governments to negotiate an international treaty banning the use of lethal autonomous weapons, there was one participant who thought that there should not be any restrictions on AI development, and one participant who thought that the U.S. should not pursue AWS research regardless of whether other governments agreed to negotiate international regulations.  An additional participant was conflicted about whether the U.S. should ban AWS research altogether or should negotiate a set of international regulations, so his or her vote was counted in both categories. Several participants reasoned that a ban would not be effective because AWS is likely to benefit non-state actors. Others argued that the absence of an international ban would trigger an AWS arms race that would ultimately strengthen non-state actors and weaken state actors, because mass-produced AWS would become cheap and readily available on the black market.  Those in favor of a ban also highlighted the moral, legal, and accountability concerns raised by delegating targeting determinations to machines.

One person argued that from a defensive perspective, the U.S. has no choice but to develop autonomous weapons because other actors (state and non-state) will continue to develop this technology. Therefore, it was suggested that the U.S. restrict research to only defensive AWS.  However, even if the U.S. develops only defensive AWS, there may still be undesirable second-order effects that will be impossible to anticipate.  On the other hand, as the leader in this market, the U.S. might, by refusing to continue AWS development, be able to encourage other nations to abandon this area of AI research as well.

Despite differing opinions on how best to regulate AWS, the group agreed that enforcement of these regulations may be difficult. For example, it may be impossible to enforce restrictions when only the deployer knows whether the weapon is autonomous.  Furthermore, would one hold the individual deployer or the nation state responsible for the use of autonomous weapons? If we choose to hold nation states accountable, it will be difficult to regulate the development and use of AWS because we may not be able to trust that other countries will comply with regulations.  It was pointed out that the regulation would also need to take into account that technology developed for civilian uses could potentially be re-purposed for military uses.  Last, several participants argued that regulatory questions are not a large concern because past examples of advanced weaponry that may have previously been defined as autonomous demonstrate that the U.S. has historically addressed regulatory concerns on a case-by-case basis.

The costs and benefits of using AWS in the military also generated a discussion that incorporated multiple perspectives.  Several participants argued that AWS, which may be more efficient at killing, could decrease the number of deaths in war. How do we justify banning AWS research when so many lives could be preserved?  On the other side, participants reasoned that the loss of human life discourages nations from entering into future wars. In addition, participants considered the democratic aspect of war.  AWS may create an uneven power dynamic in democracies by transferring the power to wage war from the general public to the politicians and social elite.  There was also a discussion about whether the use of AWS would encourage more killing because it eliminates the messy and personal aspects from the act of killing.  Some participants felt that historical events, such as the genocide in Rwanda, suggest that the personal aspects of killing do not discourage violence and therefore, the use of AWS would not necessarily decrease or increase the number of deaths.

Lingering research questions included whether AWS research and use should be banned in the U.S. or internationally and if not, whether research should be limited to defensive AWS.  Additionally, if there was an international treaty or body in charge of determining global regulations for the development and use of AWS, what would those regulations look like and how would they be enforced?  Furthermore, should regulations contain stipulations about incorporating certain AWS features, such as kill switches and watermarks, in order to address some of the concerns about accountability and enforcement?  And regardless of whether AWS are banned, how should we account for the possibility that AI systems intended for civilian uses may be repurposed for military uses?

Questions and Conclusions

  • Most participants (but not all) agreed that the world powers should draft some form of international regulations to guide AWS research and use.
    • What should the terms of the agreement be? What would a ban or regulations look like? How would the decision-makers reach consensus?
    • How would regulations be enforced, especially since non-state actors are likely to acquire and use AWS?
    • How do we hold nation states accountable for AWS research and development? Should states be held strictly liable, for example?
  • Alternatively, do we already have current regulations that account for some of the concerns regarding AWS research? Should the U.S. continue to evaluate weapon development on a case-by-case basis?
  • Should certain types of AWS development and use be banned? Only offensive? Only lethal weapons? Would a ban be effective?
    • Should we require AWS to have certain safety/defensive features, such as kill switches?
  • Even if the U.S. limits research to defensive AWS, how do we account for unanticipated outcomes that may be undesirable? For example, how do we define whether a weapon is defensive?  What if a “defensive” weapon is sent into another state’s territory and fires back when it is attacked? Is it considered an offensive or defensive weapon?
  • How do we prevent military re-purposing of civilian AI?
  • Need to incorporate more diverse viewpoints in this conversation. We need more input from lawyers, philosophers, and economists to solve these problems.

Session #4: The Risks of Emergent Human-Level AI

Summary

The final session featured a presentation about ongoing research funded by the Future of Life Institute, which focuses on ensuring that any future AI with super-human intelligence will be beneficial. Group members expressed a variety of views about when, if ever, we will successfully develop human-level AI.  It was argued that many issues must be resolved before current AI technology, which is limited to narrow AI, could converge into an advanced, general AI system.  Conversely, several participants argued that while the timeline remains uncertain, human-level AI is achievable.  They emphasized the need for further research in order to prepare for the social, economic, and legal consequences of this technology.

The second major point debated was the potential dangers of human-level AI.  Participants disagreed about the likelihood that advanced AI systems will have the ability to act beyond the control of their human creators.  Some group members explained that an advanced, general AI system may be harder to control because it is designed to learn new tasks, rather than follow a specific set of instructions like narrow AI systems, and therefore may act out of self-preservation or the need for resources to complete a task.  Given the difference between general AI and narrow AI, it will be important to consider the social and emotional aspects of general AI systems when designing the technology.  For example, there was some discussion about the best way to build a system that allows the technology to learn and respect human values.  At the same time, it was pointed out that current AI systems that do not possess human-level intelligence could also pose a threat to humans if released “into the wild.”  Much as with animals, AI systems whose code can be manipulated by malicious actors, or that can have unforeseeable and unintended consequences, may prove equally dangerous to humans.  It was suggested that some of the recommendations for overseeing and maintaining control of human-level AI be applied to existing AI systems.

In conclusion, the group identified several areas requiring additional research including the need to define robotic and AI terminology, including vocabulary related to the ethics of safety design.  For example, should we define a robot as a creature with human-like characteristics or as an object or assist tool for humans?  Experts from different disciplines such as law, philosophy and computer science will require a shared vocabulary in order to work together to devise solutions to a variety of challenges that may be created by advanced AI.  Additionally, as human-level AI can quickly become super-human AI, several participants emphasized the need for more research in this field related to safety, control, and alignment of AI values with human values.  It was also suggested that we investigate whether our understanding of sociability is changing as it may impact how we think about the social and emotional elements of AI.  Lastly, lawyers should investigate liability and tort law in order to determine whether there are guiding ethical principles that should be incorporated into future AI systems.

Questions and Conclusions

  • Group displayed the full spectrum of perspectives about the timeline and the potential outcomes of human-level AI. A large number of questions remain that still need to be considered.
  • Group identified the need to define robotic and AI terminology, including vocabulary related to the ethics of safety design. For example, should we define a robot as a creature or as an object?  Experts from different disciplines such as law, philosophy and computer science will require a shared vocabulary in order to work together to devise solutions to a variety of challenges that may be created by advanced AI.
  • The group considered the social and emotional aspects of AI systems. What is the nature of a collaborative relationship between a human and a machine? We should investigate whether our understanding of sociability is changing as it may impact how we think about the social and emotional elements of AI.
  • How should we design legal frameworks to oversee research and use of human-level intelligence AI?
  • Lawyers should investigate liability and tort law in order to determine whether there are guiding ethical principles that should be incorporated into future AI systems.
    • How can we build the technology in a way that can adapt as human values, laws, and social norms change over time?

Participants:

  • Ryan Adams – Assistant Professor of Computer Science, School of Engineering and Applied Sciences, Harvard University.
  • Kenneth Anderson – Professor of Law, American University, and visiting Professor of Law, Harvard Law School (spring 2016).
  • Peter Asaro – Assistant Professor, School of Media Studies, The New School.
  • David Autor – Professor and Associate Department Head, Department of Economics, MIT.
  • Cynthia Breazeal – Associate Professor of Media Arts and Sciences, MIT.
  • Rebecca Crootof – Ph.D. in Law candidate at Yale Law School and a Resident Fellow with the Yale Information Society Project, Yale Law School.
  • Kate Darling – Research Specialist at the MIT Media Lab and a Fellow at the Berkman Center for Internet & Society, Harvard University.
  • Bonnie Docherty – Lecturer on Law and Senior Clinical Instructor, International Human Rights Clinic, Harvard Law School.
  • Peter Galison – Joseph Pellegrino University Professor, Department of the History of Science, Harvard University.
  • Viktoriya Krakovna – Doctoral Researcher at Harvard University, and a co-founder of the Future of Life Institute.
  • Andrew W. Lo – Charles E. and Susan T. Harris Professor, Director, MIT Laboratory for Financial Engineering, MIT.
  • Richard Mallah – Director of AI Projects at the Future of Life Institute.
  • David C. Parkes – Harvard College Professor, George F. Colony Professor of Computer Science and Area Dean for Computer Science, School of Engineering and Applied Sciences, Harvard University.
  • Steven Pinker – Johnstone Family Professor, Department of Psychology, Harvard University.
  • Lisa Randall – Frank B. Baird, Jr., Professor of Science, Department of Physics, Harvard University.
  • Susanna Rinard – Assistant Professor of Philosophy, Department of Philosophy, Harvard University.
  • Cynthia Rudin – Associate Professor of Statistics, MIT Computer Science and Artificial Intelligence Lab and Sloan School of Management, MIT.
  • Bruce Schneier – Security Technologist, Fellow at the Berkman Center for Internet & Society, Harvard University, and Chief Technology Officer of Resilient Systems, Inc.
  • Stuart Shieber – James O. Welch, Jr. and Virginia B. Welch Professor of Computer Science, School of Engineering and Applied Sciences, Harvard University.
  • Marin Soljačić – Professor of Physics, Department of Physics, MIT.
  • Max Tegmark – Professor of Physics, Department of Physics, MIT.
  • Jonathan Zittrain – George Bemis Professor of International Law, Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science, Harvard School of Engineering and Applied Sciences, Vice Dean, Library and Information Resources, Harvard Law School, and Faculty Director of the Berkman Center for Internet & Society.

[1] “One big, bad trade,” The Economist Online, October 1, 2010, http://www.economist.com/blogs/newsbook/2010/10/what_caused_flash_crash. See also Pam Martens and Russ Martens, “Treasury Flash Crash of October 15, 2014 Still Has Wall Street in a Sweat,” April 9, 2015, http://wallstreetonparade.com/2015/04/treasury-flash-crash-of-october-15-2014-still-has-wall-street-in-a-sweat/.

[2] In 2015, authorities arrested a trader named Navinder Singh Sarao, who contributed to the flash crash that occurred on May 6, 2010.  See Nathaniel Popper and Jenny Anderson, “Trader Arrested in Manipulation That Contributed to 2010 ‘Flash Crash’” April 21, 2015, http://www.nytimes.com/2015/04/22/business/dealbook/trader-in-britain-arrested-on-charges-of-manipulation-that-led-to-2010-flash-crash.html.

[3] Matt Stroud, “The minority report: Chicago’s new police computer predicts crimes, but is it racist?” The Verge, February 19, 2014,  http://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist.

[4] Tim Murphy, “Did This City Bring Down Its Murder Rate by Paying People Not to Kill?” Mother Jones, July/August 2014, http://www.motherjones.com/politics/2014/06/richmond-california-murder-rate-gun-death.

Science, Religion, and Obama’s Mixed Legacy on Nuclear Weapons

In April 2009 in Prague, President Obama highlighted the continuing risks posed by nuclear weapons. He promised to “take concrete steps” to reduce those risks and “put an end to Cold War thinking.”

Since then, the administration has in fact taken some positive steps, including concluding the Iran nuclear deal and negotiating the New START treaty, which reduces U.S. and Russian deployed nuclear forces.

But it has also taken some negative steps, such as planning for a $1 trillion program to completely rebuild the U.S. nuclear arsenal over the next three decades and to build new types of nuclear warheads. This is classic Cold War thinking, and is the kind of step that fuels an arms race.

Cloud from the Nagasaki bomb. (Source: National Archives)

Former Secretary of Defense Bill Perry recently warned, “The danger of a nuclear catastrophe today, I believe, is greater than it was during the Cold War.” Things are moving in the wrong direction, and the administration needs to take some positive steps—and soon.

In response to this situation, UCS joined with several faith groups to call for something we all agree on: President Obama should take new steps to reduce the danger posed by nuclear weapons and to head off a new arms race. In particular, we jointly call on the president to take a set of concrete steps.

The president is reportedly considering a visit to Hiroshima when he is in Japan for the G7 meeting at the end of May, to highlight the humanitarian consequences of using nuclear weapons.

But giving another speech is not enough. The president should announce concrete steps, picking up the work he started in Prague.

The science-faith statement was signed by UCS and the participating faith groups.

This article was originally posted on the Union of Concerned Scientists blog, and the Spanish version of the statement can be found here.