Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants?

In the early 1900s, the Italian chemist Giacomo Ciamician recognized that fossil fuel use was unsustainable. And like many of today’s environmentalists, he turned to nature for clues on developing renewable energy solutions, studying the chemistry of plants and their use of solar energy. He admired their unparalleled mastery of photochemical synthesis—the way they use light to synthesize energy-rich compounds from the most fundamental of substances—and how “they reverse the ordinary process of combustion.”

In photosynthesis, Ciamician realized, lay an entirely renewable process of energy production. When sunlight reaches the surface of a green leaf, it sets off a reaction inside the leaf. Chloroplasts, energized by the light, drive the production of chemical products—essentially sugars—which store the energy so that the plant can draw on it later for its biological needs. The plant harvests the immense and constant supply of solar energy, absorbs carbon dioxide and water, and releases oxygen. There is no other waste.

If scientists could learn to imitate photosynthesis by providing concentrated carbon dioxide and suitable catalyzers, they could create fuels from solar energy. Ciamician was taken by the seeming simplicity of this solution. Inspired by small successes in chemical manipulation of plants, he wondered, “does it not seem that, with well-adapted systems of cultivation and timely intervention, we may succeed in causing plants to produce, in quantities much larger than the normal ones, the substances which are useful to our modern life?”

In 1912, Ciamician sounded the alarm about the unsustainable use of fossil fuels and exhorted the scientific community to explore artificially recreating photosynthesis. But little was done. A century later, in the midst of a climate crisis and armed with improved technology and growing scientific knowledge, researchers finally delivered the kind of breakthrough he had envisioned.

After more than ten years of research and experimentation, Peidong Yang, a chemist at UC Berkeley, successfully created the first photosynthetic biohybrid system (PBS) in April 2015. This first-generation PBS uses semiconductors and live bacteria to do the photosynthetic work that real leaves do—absorb solar energy and create a chemical product using water and carbon dioxide, while releasing oxygen—but it creates liquid fuels. The process is called artificial photosynthesis, and if the technology continues to improve, it may become the future of energy.

How Does This System Work?

Yang’s PBS can be thought of as a synthetic leaf. It is a one-square-inch tray that contains silicon semiconductors and living bacteria, forming what Yang calls a semiconductor-bacteria interface.

In order to initiate the process of artificial photosynthesis, Yang dips the tray of materials into water, pumps carbon dioxide into the water, and shines a solar light on it. As the semiconductors harvest solar energy, they generate charges to carry out reactions within the solution. The bacteria take electrons from the semiconductors and use them to transform, or reduce, carbon dioxide molecules and create liquid fuels. In the meantime, water is oxidized on the surface of another semiconductor to release oxygen. After several hours or several days of this process, the chemists can collect the product.
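
For readers who want a rough sense of the chemistry, the net process can be written as two half-reactions: water is oxidized at one semiconductor surface, supplying electrons and protons, and the bacteria use those electrons to reduce carbon dioxide into fuel molecules. The stoichiometries below are a simplified sketch; the actual bacterial pathways are enzymatic and proceed through many intermediate steps.

```latex
% Simplified half-reactions for the first-generation PBS (illustrative only)
\begin{align*}
  2\,\mathrm{H_2O} &\longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
    && \text{(water oxidation at the semiconductor)} \\
  2\,\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- &\longrightarrow \mathrm{CH_3COOH} + 2\,\mathrm{H_2O}
    && \text{(CO$_2$ reduction to acetate by the bacteria)} \\
  4\,\mathrm{CO_2} + 24\,\mathrm{H^+} + 24\,e^- &\longrightarrow \mathrm{C_4H_9OH} + 7\,\mathrm{H_2O}
    && \text{(CO$_2$ reduction to $n$-butanol)}
\end{align*}
```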

With this first-generation system, Yang successfully produced butanol, acetate, polymers, and pharmaceutical precursors, fulfilling Ciamician’s once-far-fetched vision of imitating plants to create the fuels that we need. This PBS achieved a solar-to-chemical conversion efficiency of 0.38%, which is comparable to the conversion efficiency in a natural, green leaf.

[Figure: A diagram of the first-generation artificial photosynthesis system, with its four main steps.]

Describing his research, Yang says, “Our system has the potential to fundamentally change the chemical and oil industry in that we can produce chemicals and fuels in a totally renewable way, rather than extracting them from deep below the ground.”

If Yang’s system can be successfully scaled up, businesses could build artificial forests that produce the fuel for our cars, planes, and power plants by following the same laws and processes that natural forests follow. Since artificial photosynthesis would absorb and reduce carbon dioxide in order to create fuels, we could continue to use liquid fuel without destroying the environment or warming the planet.

However, in order to ensure that artificial photosynthesis can reliably produce our fuels in the future, it has to be better than nature, as Ciamician foresaw. Our need for renewable energy is urgent, and Yang’s model must be able to provide energy on a global scale if it is to eventually replace fossil fuels.

Recent Developments in Yang’s Artificial Photosynthesis

Since the major breakthrough in April 2015, Yang has continued to improve his system in hopes of eventually producing fuels that are commercially viable, efficient, and durable.

In August 2015, Yang and his team tested his system with a different type of bacteria. The method is the same, except instead of electrons, the bacteria use molecular hydrogen from water molecules to reduce carbon dioxide and create methane, the primary component of natural gas. This process is projected to have an impressive conversion efficiency of 10%, which is much higher than the conversion efficiency in natural leaves.
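
In simplified terms, this hydrogen-mediated route amounts to two net reactions, assuming the overall scheme described above: light-driven water splitting at the semiconductor produces hydrogen, and the methane-producing bacteria then combine that hydrogen with carbon dioxide. Again, this is only an illustrative summary of the net chemistry, not a description of the bacteria's internal pathways.

```latex
% Simplified net reactions for the second-generation, methane-producing PBS
\begin{align*}
  2\,\mathrm{H_2O} &\xrightarrow{\;h\nu,\ \text{semiconductor}\;} 2\,\mathrm{H_2} + \mathrm{O_2}
    && \text{(light-driven water splitting)} \\
  \mathrm{CO_2} + 4\,\mathrm{H_2} &\xrightarrow{\;\text{methanogenic bacteria}\;} \mathrm{CH_4} + 2\,\mathrm{H_2O}
    && \text{(biological methane production)}
\end{align*}
```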

A conversion efficiency of 10% could potentially be commercially viable, but since methane is a gas, it is more difficult to handle than liquid fuels such as butanol, which can be transported through pipes. A new generation of PBS will still need to be designed and assembled to achieve a solar-to-liquid-fuel efficiency above 10%.

[Figure: A diagram of the second-generation PBS, which produces methane.]

In December 2015, Yang advanced his system further with the remarkable discovery that certain bacteria can grow the semiconductors themselves. This development eliminates the two-step process of first growing the nanowires and then culturing the bacteria in them. The improved semiconductor-bacteria interface could be more efficient at producing acetate, as well as other chemicals and fuels, according to Yang. And in terms of scaling up, it has the greatest potential of the designs so far.

[Figure: A diagram of the third-generation PBS, which produces acetate.]

In the past few weeks, Yang made yet another important advance, elucidating the electron-transfer mechanism at the semiconductor-bacteria interface. This fundamental understanding of charge transfer at the interface will provide critical insights for designing the next generation of PBS with better efficiency and durability. He will be releasing the details of this work shortly.

Even as these breakthroughs reshape the PBS, Yang clarifies, “the physics of the semiconductor-bacteria interface for the solar driven carbon dioxide reduction is now established.” As long as he has an effective semiconductor that absorbs solar energy and feeds electrons to the bacteria, the photosynthetic function will proceed, and the remarkable process of artificial photosynthesis will continue to produce liquid fuels.

Why This Solar Power Is Unique

Peter Forbes, a science writer and the author of Nanoscience: Giants of the Infinitesimal, admires Yang’s work in creating this system. He writes, “It’s a brilliant synthesis: semiconductors are the most efficient light harvesters, and biological systems are the best scavengers of CO2.”

Yang’s artificial photosynthesis relies only on solar energy, but it creates a more usable form of energy than solar panels, which are currently the most popular and commercially viable form of solar power. While the semiconductors in solar panels absorb solar energy and convert it into electricity, the semiconductors in artificial photosynthesis absorb solar energy and store it in “the carbon-carbon bond or the carbon-hydrogen bond of liquid fuels like methane or butanol.”

This difference is crucial. The electricity generated from solar panels simply cannot meet our diverse energy needs, but these renewable liquid fuels and natural gases can. Unlike solar panels, Yang’s PBS absorbs and breaks down carbon dioxide, releases oxygen, and creates a renewable fuel that can be collected and used. With artificial photosynthesis creating our fuels, driving cars and operating machinery becomes much less harmful. As Katherine Bourzac nicely puts it, “This is one of the best attempts yet to realize the simple equation: sun + water + carbon dioxide = sustainable fuel.”

The Future of Artificial Photosynthesis

Yang’s PBS has been advancing rapidly, but he still has work to do before the technology can be considered commercially viable. Despite encouraging conversion efficiencies, especially with methane, the PBS is not durable enough or cost-effective enough to be marketable.

In order to improve this system, Yang and his team are working to figure out how to replace bacteria with synthetic catalysts. So far, bacteria have proven to be the most efficient catalysts, and they also have high selectivity—that is, they can produce specific useful compounds, such as butanol, acetate, polymers, and methane, with few unwanted byproducts. But since bacteria live and die, they are less durable than a synthetic catalyst and less reliable if this technology is scaled up.

Yang has been testing PBSs with live bacteria and synthetic catalysts in parallel systems in order to discover which type works best. “From the point of view of efficiency and selectivity of the final product, the bacteria approach is winning,” Yang says, “but if down the road we can find a synthetic catalyst that can produce methane and butanol with similar selectivity, then that is the ultimate solution.” Such a system would give us the ideal fuels and the most durable semiconductor-catalyst interface that can be reliably scaled up.

Another concern is that, unlike natural photosynthesis, artificial photosynthesis requires concentrated carbon dioxide to function. This is easy to do in the lab, but if artificial photosynthesis is scaled up, Yang will have to find a feasible way of supplying concentrated carbon dioxide to the PBS. Peter Forbes argues that Yang’s artificial photosynthesis could be “coupled with carbon-capture technology to pull CO2 from smokestack emissions and convert it into fuel.” If this could be done, artificial photosynthesis would contribute to a carbon-neutral future by consuming our carbon emissions and releasing oxygen. This is not the focus of Yang’s research, but it is an integral piece of the puzzle that other scientists must provide if artificial photosynthesis is to supply the fuels we need on a large scale.

When Giacomo Ciamician considered the future of artificial photosynthesis, he imagined a future of abundant energy where humans could master the “photochemical processes that hitherto have been the guarded secret of the plants…to make them bear even more abundant fruit than nature, for nature is not in a hurry and mankind is.” And while the rush was not apparent to scientists in 1912, it is clear now, in 2016.

Peidong Yang has already created a system of artificial photosynthesis that out-produces nature. If he continues to increase the efficiency and durability of his PBS, artificial photosynthesis could revolutionize our energy use and serve as a sustainable model for generations to come. As long as the sun shines, artificial photosynthesis can produce fuels and consume waste. And in this future of artificial photosynthesis, the world would be able to grow and use fuels freely, knowing that the same natural process that created them would recycle the carbon at the other end.

Yang shares this hope for the future. He explains, “Our vision of a cyborgian evolution—biology augmented with inorganic materials—may bring the PBS concept to full fruition, selectively combining the best of both worlds, and providing society with a renewable solution to solve the energy problem and mitigate climate change.”

If you would like to learn more about Peidong Yang’s research, please visit his website at http://nanowires.berkeley.edu/.

Elon Musk’s Plan to Colonize Mars

In an announcement to the International Astronautical Congress on Tuesday, Elon Musk unveiled his Interplanetary Transport System (ITS). His goal: enable humans to colonize Mars and build a city there within the next 50 to 100 years.

Speaking to an energetic crowd in Guadalajara, Mexico, Musk explained that the alternative to staying on Earth, which is at risk of a “doomsday event,” is to “become a spacefaring civilization and a multi-planet species.” As he told Aeon magazine in 2014, “I think there is a strong humanitarian argument for making life multi-planetary in order to safeguard the existence of humanity in the event that something catastrophic were to happen.” Colonizing Mars, he believes, is one of our best options.

In his speech, Musk discussed the details of his transport system. The ITS, developed by SpaceX, would use the most powerful rocket ever built, and at 400 feet tall, it would also be the largest spaceflight system ever created. The spaceship would fit 100-200 people and would feature movie theaters, lecture halls, restaurants, and other amenities to make the approximately three-month journey enjoyable. “You’ll have a great time,” said Musk.

Musk explained four key issues that must be addressed to make colonization of Mars possible: the rockets need to be fully reusable, they need to be able to refuel in orbit, there must be a way to harness energy on Mars, and we must figure out more efficient ways of traveling. If SpaceX succeeds in meeting these requirements, the rockets could travel to Mars and return to Earth to pick up more colonists for the journey. Musk explained that the same rockets could be used up to a dozen times, bringing more and more people to colonize the Red Planet.

Despite his enthusiasm for the ITS, Musk was careful to acknowledge that there are still many difficulties and obstacles in reaching this goal. Currently, getting to Mars would require an investment of about $10 billion per person, which puts the trip out of reach for almost everyone. However, Musk thinks that the reusable rocket technology could significantly decrease this cost. “If we can get the cost of moving to Mars to the cost of a median house price in the U.S., which is around $200,000, then I think the probability of establishing a self-sustaining civilization is very high,” Musk noted.

But this viability requires significant investment from both the government and the private sector. Musk explained, “I know there’s a lot of people in the private sector who are interested in helping fund a base on Mars and then perhaps there will be interest on the government sector side to also do that. Ultimately, this is going to be a huge public-private partnership.” This speech, and the attention it has garnered, could help make such investment and cooperation possible.

Many questions remain about how to sustain human life on Mars and whether or not SpaceX can make this technology viable, as even Musk admits. He explained, “This is a huge amount of risk, will cost a lot, and there’s a good chance we don’t succeed. But we’re going to try and do our best. […] What I really want to do here is to make Mars seem possible — make it seem as though it’s something that we could do in our lifetimes, and that you can go.”

Musk’s full speech can be found here.

Former Defense Secretary William Perry Launches MOOC on Nuclear Risks

“Today, the danger of some sort of a nuclear catastrophe is greater than it was during the Cold War and most people are blissfully unaware of this danger.” – William J. Perry, 2015

The following description of Dr. Perry’s new MOOC is courtesy of the William J. Perry Project.

Nuclear weapons, far from being historical curiosities, are existential dangers today. But what can you do about this? The first step is to educate yourself on the subject. Now it’s easy to do that in the first free, online course devoted to educating the public about the history and dangers of nuclear weapons. This 10-week course, created by former Secretary of Defense William J. Perry and 10 other highly distinguished educators and public servants, is hosted by Stanford University and starts October 4, 2016; sign up now here.

This course has a broad range, from physics to history to politics and diplomacy. You will have the opportunity to obtain a Statement of Accomplishment by passing the appropriate quizzes, but there are no prerequisites other than curiosity and a passion for learning.  Our faculty is an unprecedented group of internationally recognized academic experts, scientists, journalists, political activists, former ambassadors, and former cabinet members from the United States and Russia. Throughout the course you will have opportunities to engage with these faculty members, as well as guest experts and your fellow students from around the world, in weekly online discussions and forums.

In Weeks 1 and 2 you will learn about the creation of the first atomic bomb and the nuclear physics behind these weapons, taught by Dr. Joseph Martz, a physicist at Los Alamos National Laboratory, and Dr. Siegfried Hecker, former Los Alamos director and a Stanford professor. Drs. Perry, Martz and Hecker describe the early years of the Atomic Age, starting from the first nuclear explosion in New Mexico and the atomic bombing of Japan, followed by the proliferation of these weapons to the Soviet Union and the beginning of the terrifying nuclear arms race underpinning the Cold War. You will also learn about ICBMs, deterrence and the nuclear triad, nuclear testing, nuclear safety (and the lack of it), the extent and dangers of nuclear proliferation, the connections between nuclear power and nuclear weapons, and the continuing fears about “loose nukes” and unsecured fissile material.

In Weeks 3 and 4 of Living at the Nuclear Brink, Dr. Perry outlines the enormous challenges the United States and its allies faced during the early frightening years of what came to be known as the Cold War. Then Dr. David Holloway, an international expert on the development of the Soviet nuclear program, will lead you on a tour of the Cold War, from its beginnings with Soviet nuclear tests and the Berlin Crisis, the Korean War, the Berlin Wall, and the Cuban Missile Crisis in 1962, probably the closest the world has come to nuclear war. Dr. Holloway will then cover the dangerous years of the late 1970s and early 1980s when détente between the Soviet Union and the West broke down; both sides amassed huge arsenals of nuclear weapons with increasingly sophisticated delivery methods including multiple warheads, and trust was strained with the introduction of short-range ballistic missiles in Europe. Finally, Dr. Holloway and Dr. Perry will describe the fascinating story of how this spiraling international tension was quelled, in part by the new thinking of Gorbachev, and how the Cold War ended with surprising speed and with minimal bloodshed.

In Week 5, you will hear from acclaimed national security journalist Philip Taubman about the remarkable efforts of scientists and engineers in the United States to develop technical methods for filling the gap of knowledge about the nuclear capabilities of the Soviet Union, including spy planes like the U-2 and satellite systems like Corona. In Week 6, you will hear from a recognized expert on nuclear policy, Dr. Scott Sagan of Stanford. Dr. Sagan will explore the theories behind nuclear deterrence and stability; you will learn how this theoretical stability is threatened by proclivities for preventive wars, commitment traps and accidents. You will hear hair-raising stories of accidents, miscalculations and bad intelligence during the Cuban Missile Crisis that brought the world much closer to a nuclear catastrophe than most people realized.

Weeks 7 and 8 are devoted to exploring the nuclear dangers of today. Dr. Martha Crenshaw, an internationally recognized expert on terrorism, will discuss this topic and examine the terrifying possibility of nuclear terrorism. You will see a novel graphic-art video from the William J. Perry Project depicting Dr. Perry’s nightmare scenario of a nuclear bomb exploded in Washington, D.C. Week 8 is devoted to current problems of nuclear proliferation. Dr. Hecker gives a first-hand account of the nuclear program in the dangerously unpredictable regime of North Korea, and goes over the fluid situation in Iran. The most dangerous region may be South Asia, where bitter enemies Pakistan and India face off with nuclear weapons. The challenges and possibilities in this confrontation are explored in depth by Dr. Sagan, Dr. Crenshaw, Dr. Hecker, and Dr. Perry; Dr. Andrei Kokoshin, former Russian Deputy Minister of Defense in the 1990s, offers a Russian perspective.

In the final two weeks of Living at the Nuclear Brink, we will explore ways to address the urgent problems of nuclear weapons. Dr. Perry describes the struggles by U.S. administrations to contain these dangers, and highlights some of the success stories, notably the Nunn-Lugar program that led to the dismantling of thousands of nuclear weapons in the former Soviet Union and the United States. James Goodby, who had a decades-long career in the U.S. Foreign Service, covers the long and often frustrating history of attempts to limit and control nuclear weapons through treaties and international agreements. Former Secretary of State George Shultz describes the momentous Reykjavik Summit between Presidents Reagan and Gorbachev, in which he participated, and gives his take on the prospects for global security. Finally, you will hear an impassioned plea for active engagement on the nuclear issue by Joseph Cirincione, author and President of the Ploughshares Fund.

Please join us in this exciting and novel online course; we welcome your participation!

For more, watch Gov. Jerry Brown discuss the importance of learning about nuclear weapons, and watch former Secretary of Defense William Perry introduce this MOOC.

The Biggest Companies in AI Partner to Keep AI Safe

Industry leaders in the world of artificial intelligence just announced the Partnership on AI.  This exciting new partnership was “established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

The partnership is currently co-chaired by Mustafa Suleyman with DeepMind and Eric Horvitz with Microsoft. Other leaders of the partnership include: FLI’s Science Advisory Board Member Francesca Rossi, who is also a research scientist at IBM; Ralf Herbrich with Amazon; Greg Corrado with Google; and Yann LeCun with Facebook.

Though the initial group members were announced yesterday, the collaboration anticipates increased participation, announcing in their press release that “academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization.”

The press release further described the objectives of the new partnership saying:

“AI technologies hold tremendous potential to improve many aspects of life, ranging from healthcare, education, and manufacturing to home automation and transportation. Through rigorous research, the development of best practices, and an open and transparent dialogue, the founding members of the Partnership on AI hope to maximize this potential and ensure it benefits as many people as possible.”

Of the partnership, Rossi said:

“Over the past five years, we’ve seen tremendous advances in the deployment of AI and cognitive computing technologies, ranging from useful consumer apps to transforming some of the world’s most complex industries, including healthcare, financial services, commerce, and the Internet of Things. This partnership will provide consumer and industrial users of cognitive systems a vital voice in the advancement of the defining technology of this century – one that will foster collaboration between people and machines to solve some of the world’s most enduring problems – in a way that is both trustworthy and beneficial.”

Suleyman also said:

“Google and DeepMind strongly support an open, collaborative process for developing AI. This group is a huge step forward, breaking down barriers for AI teams to share best practices, research ways to maximize societal benefits, and tackle ethical concerns, and make it easier for those in other fields to engage with everyone’s work. We’re really proud of how this has come together, and we’re looking forward to working with everyone inside and outside the Partnership on Artificial Intelligence to make sure AI has the broad and transformative impact we all want to see.”

The Partnership on AI also reached out to other members of the AI community for feedback. FLI Science Advisory Board member Nick Bostrom said, “AI is set to have transformative impacts on society over the coming years and decades. It is therefore encouraging that the industry is taking the initiative to create a forum in which technology leaders can share best practices and discuss what it means to be a responsible innovator in this burgeoning field.”

Vicki L. Hanson, President of the Association for Computing Machinery, added:

“The Partnership on AI initiative could not come at a better time. Artificial Intelligence technologies are increasingly becoming part of our daily lives, and AI will significantly impact society in the years ahead. Fostering a shared dialogue and building common cause is crucial. We look forward to working with the Partnership on AI to educate the public and ensure that these technologies serve humanity in beneficial and responsible ways.”


The Age of Em: Review and Podcast

Interview with Robin Hanson
A few weeks ago, I had the good fortune to interview Robin Hanson about his new book, The Age of Em. We discussed his book, the future and evolution of humanity, and the research he’s doing for his next book. You can listen to all of that here. And read on for my review of Hanson’s book…

Age of Em Review

As I’ve interviewed more and more people who focus on and worry about the future, an interesting theme keeps popping up: choice. Over and over, scholars and researchers desperately try to remind us that we have a say in our future. The choices we make will impact whether or not our future goes the way we want it to.

But choosing a path for our future isn’t like picking a breakfast cereal. Our options aren’t all in front of us, with detailed information on the side telling us how we may or may not benefit from each choice. Few of us can predict how our decisions will shape our own lives, let alone the future of humanity. That’s where Robin Hanson comes in.

Hanson, a professor of economics at George Mason University, argues that the choices that will shape our future can be far better informed by studying what is likely to happen than most of us realize. He recently wrote the book The Age of Em: Work, Love and Life when Robots Rule the Earth, which describes one possible path that may lie before us based on current economic and technological trends.

What Is the Age of Em?

Em is short for emulation — in this case, a brain emulation. In this version of the future, people will choose to have their brains scanned, uploaded and possibly copied, creating a new race of robots and other types of machine intelligence. Because they’re human emulations, Hanson expects ems will think and feel just as we would. However, without biological needs or aging processes, they’ll be much cheaper to run and maintain. And because they can do anything we can – just at a much lower cost – it will be more practical to switch from human to em. Humans who resist this switch or who are too old to be useful as ems will end up on the sidelines of society. When (if) ems take over the world, it will be because humans chose to make the transition.

Interestingly, the timeline for how long ems will rule the world will depend on human versus em perspective. Because ems are essentially machines, they can run at different speeds. Hanson anticipates that over the course of two regular “human” years, most ems will have experienced a thousand years – along with all of the societal changes that come with a thousand years of development. Hanson’s book tells the story of their world: their subsistence lifestyles made glamorous by virtual reality; the em clans made up of the best and brightest human minds; and, literally, how the ems will work, love, and live.

It’s a very detailed work, and it’s easy to get caught up in the details of which aspects of em life are likely, which details seem unrealistic, and even whether ems are more likely than artificial intelligence to take over the world next. And there have been excellent discussions and reviews of the details of the book, like this one at Slate Star Codex. But I’m writing this review almost as much in response to commentary I’ve read about the book as I am about the book itself, because there’s another question that’s important to ask as well: Is this the future that we want?

What do we want?

For a book without a plot or characters, it offers a surprisingly engaging and compelling storyline. Perhaps that’s because this is the story of us. It’s the story of humanity — the story of how we progress and evolve. And it’s also the story of how we, as we know ourselves, end.

It’s easy to look at this new world with fear. We’re so focused on production and the bottom line that, in the future, we’ve literally pushed humanity to the sidelines and possibly to extinction. Valuing productivity is fine, but do we really want to take it to this level? Can we stop this from happening, and if so, how?

Do we even want to stop it from happening? Early on Hanson encourages us to remember that people in the past would have been equally horrified by our own current lifestyle. He argues that this future may be different from what we’re used to, but it’s reasonable to expect that humans will prefer transitioning to an em lifestyle in the future. And from that perspective, we can look on this new world with hope.

As I read The Age of Em, I was often reminded of Brave New World by Aldous Huxley. Huxley described his book as a “negative utopia,” but much of what he wrote has become commonplace and trivial today — mass consumerism, drugs to make us happy, a freer attitude about sex, a preference for mindless entertainment over deep thought. Though many of us may not necessarily consider these attributes of modern society to be a utopia, most people today would choose our current lifestyle over that of the 1930s, and we typically consider our lives better now than at any point in history. Even among people today, we see sharp divides between older generations who are horrified by how much privacy is lost thanks to the Internet and younger generations who see the benefits of increased information and connectivity outweighing any potential risks. Most likely, a similar attitude shift will take place as (if) we move toward a world of ems.

Yet while it’s reasonable to accept that in the future we would likely consider ems to be a positive step for humanity, the questions still remain: Is this what we want, or are we just following along on a path, unable to stop or change directions? Can we really choose our future?

Studying the future

In the book, Hanson says, “If we first look carefully at what is likely to happen if we do nothing, such a no-action baseline can help us to analyze what we might do to change those outcomes. This book, however, only offers the nearest beginnings of such policy analysis.” Hanson looks at where we’ve been, he looks at where we are now, and then he draws lines out into the future to figure out the direction we’ll go. And he does this for every aspect of life. In fact, given that this book is about the future, it also provides considerable insight into who we are now. But it represents only one possible vision for the future.

There are many more people who study history than the future, primarily because we already have information and writings and artifacts about historic events. But though we can’t change the past, we can impact the future. As the famous quote (or paraphrase) by George Santayana goes, “Those who fail to learn history are doomed to repeat it.” So perhaps learning history is only half the story. Perhaps it’s time to reevaluate the prevailing notion that the future is something that can’t be studied.

Only by working through different possible scenarios for our future can we better understand how decisions today will impact humanity later on. And with that new information, maybe we can start to make choices that will guide us toward a future we’re all excited about.

Final thoughts

Love it or hate it, agree with it or not, Hanson’s book and his approach to thinking about the future are extremely important for anyone who wants to have a say in the future of humanity. It’s easy to argue over whether or not ems represent the most likely future. It’s just as easy to get lost in the minutia of the em world and debate whether x, y, or z will happen. And these discussions are necessary if we’re to understand what could happen in the future. But to do only that is to miss an important point: something will happen, and we have to decide if we want a role in creating the future or if we want to stand idly by.

I highly recommend The Age of Em, I look forward to Hanson’s next book, and I hope others will answer his call to action and begin studying the future.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

Training Artificial Intelligence to Compromise

Imagine you’re sitting in a self-driving car that’s about to make a left turn into on-coming traffic. One small system in the car will be responsible for making the vehicle turn, one system might speed it up or hit the brakes, other systems will have sensors that detect obstacles, and yet another system may be in communication with other vehicles on the road. Each system has its own goals — starting or stopping, turning or traveling straight, recognizing potential problems, etc. — but they also have to all work together toward one common goal: turning into traffic without causing an accident.

Harvard professor and FLI researcher, David Parkes, is trying to solve just this type of problem. Parkes told FLI, “The particular question I’m asking is: If we have a system of AIs, how can we construct rewards for individual AIs, such that the combined system is well behaved?”

Essentially, an AI within a system of AIs — like that in the car example above — needs to learn how to meet its own objective, as well as how to compromise so that its actions will help satisfy the group objective. On top of that, the system of AIs needs to consider the preferences of society. The safety of the passenger in the car or a pedestrian in the crosswalk is a higher priority than turning left.

Training a well-behaved AI

Because environments like a busy street are so complicated, an engineer can’t just program an AI to act in some way to always achieve its objectives. AIs need to learn proper behavior based on a rewards system. “Each AI has a reward for its action and the action of the other AI,” Parkes explained. With the world constantly changing, the rewards have to evolve, and the AIs need to keep up not only with how their own goals change, but also with the evolving objectives of the system as a whole.

The idea of a rewards-based learning system is something most people can likely relate to. Who doesn’t remember the excitement of a gold star or a smiley face on a test? And any dog owner has experienced how much more likely their pet is to perform a trick when it realizes it will get a treat. A reward for an AI is similar.

A technique often used in designing artificial intelligence is reinforcement learning. With reinforcement learning, when the AI takes some action, it receives either positive or negative feedback. And it then tries to optimize its actions to receive more positive rewards. However, the reward can’t just be programmed into the AI. The AI has to interact with its environment to learn which actions will be considered good, bad or neutral. Again, the idea is similar to a dog learning that tricks can earn it treats or praise, but misbehaving could result in punishment.
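
As a concrete illustration of the reinforcement-learning loop described above, here is a minimal tabular Q-learning sketch on a made-up five-state corridor. It is not Parkes' system (his work concerns multi-agent reward design), just a generic, self-contained example of an agent trying actions, receiving rewards from its environment, and gradually settling on the actions that pay off.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only; not Parkes' actual system).
# A hypothetical five-state corridor: the agent starts at state 0 and is rewarded
# for reaching state 4; every other step costs a small penalty.
N_STATES = 5
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move along the corridor; the big reward comes only at the goal."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else -0.01
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy should point "right" toward the goal in states 0-3.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```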

More than this, Parkes wants to understand how to distribute rewards to subcomponents – the individual AIs – in order to achieve good system-wide behavior. How often should there be positive (or negative) reinforcement, and in reaction to which types of actions?

For example, if you were to play a video game without any points or lives or levels or other indicators of success or failure, you might run around the world killing or fighting aliens and monsters, and you might eventually beat the game, but you wouldn’t know which specific actions led you to win. Instead, games are designed to provide regular feedback and reinforcement so that you know when you make progress and what steps you need to take next. To train an AI, Parkes has to determine which smaller actions will merit feedback so that the AI can move toward a larger, overarching goal.

Rather than programming a reward specifically into the AI, Parkes shapes the way rewards flow from the environment to the AI in order to promote desirable behaviors as the AI interacts with the world around it.
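
One standard way to shape how rewards flow from the environment to a learner, without changing which behavior is ultimately optimal, is potential-based reward shaping. The sketch below is a generic textbook technique, not necessarily the method Parkes uses; the goal state and potential function are made up for illustration.

```python
# Potential-based reward shaping sketch (a generic technique, not necessarily Parkes' method).
# The shaped reward r' = r + GAMMA * phi(s') - phi(s) preserves which policies are optimal
# while giving the learner denser feedback about intermediate progress.
GOAL_STATE = 4   # hypothetical goal in a small corridor world
GAMMA = 0.9      # discount factor, matching the learner's own discount

def potential(state):
    """Heuristic progress estimate: higher (less negative) when closer to the goal."""
    return -abs(GOAL_STATE - state)

def shaped_reward(state, next_state, env_reward, gamma=GAMMA):
    """Environment reward plus the potential-based shaping bonus."""
    return env_reward + gamma * potential(next_state) - potential(state)

# Moving from state 1 to state 2, toward the goal, earns a small positive bonus
# even though the environment itself gives almost no feedback for that step.
print(shaped_reward(1, 2, env_reward=-0.01))  # roughly 1.19
```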

But this is all for just one AI. How do these techniques apply to two or more AIs?

Training a system of AIs

Much of Parkes’ work involves game theory. Game theory helps researchers understand what types of rewards will elicit collaboration among otherwise self-interested players, or in this case, rational AIs. Once an AI figures out how to maximize its own reward, what will entice it to act in accordance with another AI? To answer this question, Parkes turns to an economic theory called mechanism design.

Mechanism design is a Nobel Prize-winning theory that allows researchers to determine how a system with multiple parts can achieve an overarching goal. It is a kind of “inverse game theory.” How can rules of interaction – ways to distribute rewards, for instance – be designed so individual AIs will act in favor of system-wide and societal preferences? Among other things, mechanism design theory has been applied to problems in auctions, e-commerce, regulations, environmental policy, and now, artificial intelligence.
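
To make the idea concrete, here is a tiny example of one of mechanism design's best-known results, the sealed-bid second-price (Vickrey) auction, in which truthfully reporting one's value is each bidder's dominant strategy. It is only an illustration of the auction setting mentioned above, far simpler than the multi-agent reward-design problems Parkes studies; the bidder names and values are hypothetical.

```python
# Sealed-bid second-price (Vickrey) auction: a classic mechanism-design example.
# Because the winner pays the second-highest bid rather than its own, each bidder's
# dominant strategy is simply to bid its true value.
def second_price_auction(bids):
    """bids: dict mapping bidder name -> bid amount. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical bidders and valuations; the rules make truthful bidding optimal for each.
print(second_price_auction({"agent_a": 10.0, "agent_b": 7.5, "agent_c": 4.0}))
# -> ('agent_a', 7.5)
```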

The difference between Parkes’ work with AIs and mechanism design theory is that the latter requires some sort of mechanism or manager overseeing the entire system. In the case of an automated car or a drone, the AIs within have to work together to achieve group goals, without a mechanism making final decisions. As the environment changes, the external rewards will change. And as the AIs within the system realize they want to make some sort of change to maximize their rewards, they’ll have to communicate with each other, shifting the goals for the entire autonomous system.

Parkes summarized his work for FLI, saying, “The work that I’m doing as part of the FLI grant program is all about aligning incentives so that when autonomous AIs decide how to act, they act in a way that’s not only good for the AI system, but also good for society more broadly.”

Parkes is also involved with the One Hundred Year Study on Artificial Intelligence, and he explained his “research with FLI has informed a broader perspective on thinking about the role that AI can play in an urban context in the near future.” As he considers the future, he asks, “What can we see, for example, from the early trajectory of research and development on autonomous vehicles and robots in the home, about where the hard problems will be in regard to the engineering of value-aligned systems?”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

The Federal Government Updates Biotech Regulations

By Wakanene Kamau

This summer’s GMO labeling bill and the rise of genetic engineering techniques to combat Zika — the virus linked to microcephaly and Guillain-Barré syndrome — have cast new light on how the government ensures public safety.

As researchers and companies scramble to apply the latest advances in synthetic biology, like the gene-editing technique CRISPR, the public has grown increasingly wary of embracing technology that they perceive as a threat to their health or the health of the environment. How, and to what degree, can the drive to develop and deploy new biotechnologies be reconciled with the need to keep the public safe and informed?

Last Friday, the federal government took a big step in framing the debate by releasing two documents that will modernize the 1986 Coordinated Framework for the Regulation of Biotechnology (Coordinated Framework). The Coordinated Framework is the outline for the network of regulatory policies that are used to ensure the safety of biotechnology products.

The Update to the Coordinated Framework, one of the documents released last week, is the first comprehensive review of how the federal government presently regulates biotechnology. It provides case studies, graphics, and tables to clarify what tools the government uses to make decisions.

The National Strategy for Modernizing the Regulatory System for Biotechnology Products, the second recently released document, provides the long-term vision for how government agencies will handle emerging technologies. It includes oversight by the Food and Drug Administration (FDA), the U.S. Department of Agriculture (USDA), and the Environmental Protection Agency (EPA).

These documents are the result of work that began last summer, when the Office of Science and Technology Policy (OSTP) announced a yearlong project to revise the way biotechnology innovations are regulated. The central document, The Coordinated Framework for the Regulation of Biotechnology, was last updated over 20 years ago.

The Coordinated Framework was first issued in 1986 as a response to a new gene-splicing technique that was leaving academic laboratories and entering the marketplace. Researchers had learned to take DNA from multiple sources and splice it together, producing recombinant DNA. This recombined DNA, known as rDNA, opened the floodgates for new uses that expanded beyond biomedicine and into industries like agriculture and cosmetics.

As researchers saw increasing applications for use in the environment, namely in genetically engineering animals and plants, concerns arose from a variety of stakeholders calling for attention from the federal government. Special interest groups were wary of the effect of commercial rDNA on public and environmental health; outside investors sought assurances that products would be able to legally enter the market; and fledgling biotech companies struggled to navigate regulatory networks.

This tension led the OSTP to develop an interagency effort to outline how to oversee the biotechnology industry. The culmination of this process created a policy framework for how existing legislation would be applied to various kinds of biotechnology. It coordinated across three responsible organizations: the Food and Drug Administration (FDA), the U.S. Department of Agriculture (USDA), and the Environmental Protection Agency (EPA).

Broadly, the FDA regulates genetically modified food and food additives, the USDA oversees genetically modified plants and animals, and the EPA tracks microbial pesticides and engineered algae. By 1986, the first iteration of the Coordinated Framework was finalized and issued.

The Coordinated Framework was updated in 1992 to more clearly describe the scope of how federal agencies would exercise authority in cases where the established rule of law left room for interpretation. The central premise of the update was to look at the product itself and not the process by which it was made. The OSTP and federal government did not see new biotechnology methods as inherently risky but recognized that their applications could be.

However, since 1992, there have been a number of technologies that have raised new questions on the scope of agency authority. Among these are new methods for new applications, such as bioreactors for the biosynthesis of industrially important chemicals or CRISPR-Cas9 to develop gene drives to combat vector-borne disease.  Researchers are also increasingly using new methods for old applications, such as zinc finger nucleases and transcription activator-like effector nucleases, in addition to CRISPR-Cas9, for genome editing to introduce beneficial traits in crops.

But what kind of risks do these innovations create and how could the Coordinated Framework be used to mitigate them?

In theory, the Coordinated Framework aligns a new innovation with the federal agency that has the most experience working in its respective field. In practice, however, making decisions between agencies with overlapping interests and experience has been difficult.

The recent debate over the review of a genetically modified mosquito, developed by the UK-based start-up Oxitec to combat the Zika virus, shows how controversial the subject can be. Oxitec engineered a male Aedes aegypti mosquito (the species that transmits Zika, along with the dengue, yellow fever, and chikungunya viruses) to carry a gene lethal to the offspring it produces with wild female mosquitoes. The plan would be to release the genetically engineered males into the wild, where they can mate with native females and crash the local population.

Using older genetics techniques, this process would have needed approval from the USDA, which has extensive experience with insecticides. However, because the new method is akin to a “new animal drug,” its oversight fell to the FDA. And the FDA created an uproar when it approved field trials of the Oxitec technology in Florida this August.

Confusion and frustration over who is and who should be responsible in cases like this one have brought an end to the 20-year silence on the measure. In fact, the need for greater clarity, responsibility, and understanding in the regulatory approval process was reaffirmed last summer, when the OSTP sent a memo to the FDA, USDA, and EPA announcing the scheduled update to the Coordinated Framework.

Since the memo was released, the OSTP has organized a series of three “public engagement sessions” (notes available here, here and here) to explain how the Coordinated Framework presently works, as well as to accept input from the public. The release of the Update to the Coordinated Framework and the National Strategy are two measures of accountability. The Administration will accept feedback on the measures for 40 days following a notice of request for public comment to be published in the Federal Register.

While scientific breakthroughs have the potential to spur wide-ranging innovations, it is important to ensure due respect is given to the potential dangers those innovations present.

You can sign up for updates from the White House on Bioregulation here.

Wakanene is a science writer based in Seattle, WA. You can reach him on Twitter @ws_kamau.


Podcast: What Is Our Current Nuclear Risk?

A conversation with Lucas Perry about nuclear risk

Participants:

  • Ariel Conn—Ariel oversees communications and digital media at FLI, and as such, she works closely with members of the nuclear community to help present information about the costs and risks of nuclear weapons.
  • Lucas Perry—Lucas has been actively working with the Mayor and City Council of Cambridge, MA to help them divest from nuclear weapons companies, and he works closely with groups like Don’t Bank on the Bomb to bring more nuclear divestment options to the U.S.

Summary

In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe. (You can find more links to information about these issues at the bottom of the page.)

Transcript

Ariel:  I’m Ariel Conn with the Future of Life Institute, and I’m here with Lucas Perry, also a member of FLI, to talk about the increasing risks of nuclear weapons, and what we can do to decrease those risks.

With the end of the Cold War, and the development of the two new START treaties, we’ve dramatically decreased the number of nuclear weapons around the world. Yet even though there are fewer weapons, they still represent a real and growing threat. In the last few months, FLI has gotten increasingly involved in efforts to decrease the risks of nuclear weapons.

One of the first things people worry about when it comes to decreasing the number of nuclear weapons or altering our nuclear posture is whether or not we can still maintain effective deterrence.

Lucas, can you explain how deterrence works?

Lucas: Sure. Deterrence is the idea that keeping our own nuclear weapons primed and ready to fire protects us from other nuclear states that might want to strike us, because they know we would retaliate with similar, or even greater, nuclear force.

Ariel:  OK, and along the same lines, can you explain what hair trigger alert is?

Lucas: Hair trigger alert is a Cold War-era strategy that has nuclear weapons armed and ready for launch within minutes. It ensures mutual and total annihilation, and thus acts as a means of deterrence. But the problem here is that it also increases the likelihood of accidental nuclear war.

Ariel:  Can you explain how an accidental nuclear war could happen? And, also, has it almost happened before?

Lucas: Having a large fraction of our nuclear weapons on hair trigger alert creates the potential for accidental nuclear war through the fallibility of the people and instruments involved in launching nuclear weapons, in conjunction with the very small amount of time actually needed to fire the missiles.

We humans are prone to making mistakes on a daily basis, and we even make the same mistakes multiple times. Computers, radars, and all of the other instruments and technology that go into launching and detecting nuclear strikes are intrinsically fallible as well, as they are prone to breaking and committing errors.

So there is the potential for us to fire missiles when an instrument gives us false alarm or a person—say, the President—under the pressure of needing to make a decision within only a few minutes, decides to fire missiles due to some misinterpretation of a situation. This susceptibility to error is actually so great that groups such as the Union of Concerned Scientists have been able to identify at least 21 nuclear close calls where nuclear war was almost started by mistake.

Ariel:  How long does the President actually have to decide whether or not to launch a retaliatory attack?

Lucas: The President actually only has about 12 minutes to decide whether or not to fire our missiles in retaliation. After our radars have detected the incoming missiles, and after this information has been conveyed to the President, there has already been some non-negligible amount of time—perhaps 5 to 15 minutes—where nuclear missiles might already be inbound. So he only has another few minutes—say, about 10 or 12 minutes—to decide whether or not to fire ours in retaliation. But this is also highly contingent upon where the missiles are coming from and how early we detected their launch.

Ariel:  OK, and then do you have any examples off the top of your head of times where we’ve had close calls that almost led to an unnecessary nuclear war?

Lucas: Out of the twenty-or-so nuclear close calls that have been identified by the Union of Concerned Scientists, among other organizations, a few that stand out to me are—for example, in 1980, the Soviet Union launched four submarine-based missiles from near the Kuril Islands as part of a training exercise, which led to a triggering of American early-warning sensors.

And even in 1995, Russian early-warning radar detected a missile launch off the coast of Norway with flight characteristics very similar to that of US submarine missiles. This led to all Russian nuclear forces going into full alert, and even the President at the time got his nuclear football ready and was prepared for full nuclear retaliation. But they ended up realizing that this was just a Norwegian scientific rocket.

These examples really help to illustrate how hair trigger alert is so dangerous. Persons and instruments are inevitably going to make mistakes, and this is only made worse when nuclear weapons are primed and ready to be launched within only minutes.

Ariel:  Going back to deterrence: Do we actually need our nuclear weapons to be on hair trigger alert in order to have effective deterrence?

Lucas: Not necessarily. The current idea is that we keep our intercontinental ballistic missiles (ICBMs), which are located in silos, on hair trigger alert so that these nuclear weapons can be launched before the silos are destroyed by an enemy strike. But warheads can be deployed without being on hair trigger alert, on nuclear submarines and bombers, without jeopardizing national security. If nuclear weapons were to be fired at the United States with the intention of destroying our nuclear missile silos, then we could authorize the launch of our submarine- and bomber-based missiles over the time span of hours and even days. These missiles wouldn’t be able to be intercepted, and would thus offer a means of retaliation, and thus deterrence, without the added danger of being on hair trigger alert.

Ariel:  How many nuclear weapons does the Department of Defense suggest we need to maintain effective deterrence?

Lucas: Studies have shown that only about 300 to 1,000 nuclear weapons are necessary for deterrence. For example, about 450 of these warheads could be deployed on submarines and bombers spread throughout the world, with another 450 or so held at home in reserve and in silos.

Ariel:  So how many nuclear weapons are there in the US and around the world?

Lucas: There are currently about 15,700 nuclear weapons on this planet. Russia and the US are the main holders of these, with Russia having about 7,500 and the US having about 7,200. Other important nuclear states to note are China, Israel, the UK, North Korea, France, India, and Pakistan.

Ariel:  OK, so basically we have a lot more nuclear weapons than we actually need.

Lucas: Right. If only about 300 to 1,000 are needed for deterrence, then the number of nuclear weapons on this planet could be drastically smaller than it is currently. The arsenal we have right now is just blatant overkill. It’s a waste of resources, and it increases the risk of accidental nuclear war, putting both the countries that have these weapons and the countries that don’t at greater risk.

Ariel:  I want to consider this idea of the countries that don’t have them being more at risk. I’m assuming you’re talking about nuclear winter. Can you explain what nuclear winter is?

Lucas: Nuclear winter is an indirect effect of nuclear war. When nuclear weapons go off they create large firestorms from all of the infrastructure, debris, and trees that are set on fire surrounding the point of detonation. These massive firestorms release enormous amounts of soot and smoke into the air that goes into the atmosphere and can block out the sun for months and even years at a time. This drastically reduces the amount of sunlight that is able to get to the Earth, and it thus causes a significant decrease in average global temperatures.

Ariel:  How many nuclear weapons would actually have to go off in order for us to see a significant drop in temperature?

Lucas: About 100 Hiroshima-sized nuclear weapons would decrease average global temperatures by about 1.25 degrees Celsius. When these 100 bombs go off, they would release about 5 million tons of smoke lofted high into the stratosphere. And now, this change of 1.25 degrees Celsius of average global temperatures might seem very tiny, but studies actually show that this will lead to a shortening of growing seasons by up to 30 days and a 10% reduction in average global precipitation. Twenty million people would die directly from the effects of this, but then hundreds of millions of people would die in the following months from a lack of food due to the decrease in average global temperatures and a lack of precipitation.

Ariel:  And that’s hundreds of millions of people around the world, right? Not just in the regions where the war took place?

Lucas: Certainly. The soot and smoke from the firestorms would spread across the entire planet, affecting the precipitation and sunlight that everyone receives. The effects of nuclear war are not confined to the countries involved in the strikes; the very worst effects create global changes that would affect us all.

Ariel:  OK, so that was for a war between India and Pakistan, which would be small, and it would use smaller nuclear weapons than what the US and Russia have. So if an accident were to trigger both the US and Russia to launch the nuclear weapons they keep on hair-trigger alert, what would the impacts of that be?

Lucas: Well, the United States has about a thousand weapons on hair-trigger alert. I’m not exactly sure how many Russia has, but we can assume it’s probably a similar amount. So if a nuclear war of about 2,000 weapons were exchanged between the United States and Russia, it would send about 510 million tons of smoke into the stratosphere, which would lead to a 4 degree Celsius drop in average global temperatures. Compared with an India-Pakistan conflict, this would cause catastrophically more casualties, both from the direct effects of the bombs and from the resulting lack of food.

Ariel:  And over what sort of time scale is that expected to happen?

Lucas: The effects of nuclear winter, and perhaps even of what might one day be called nuclear summer, would last not just months but years, even decades.

Ariel:  What’s nuclear summer?

Lucas: So nuclear summer is a more theoretical effect of nuclear war. During nuclear winter you have tons of soot, ash, and smoke in the sky blotting out the sun, but in addition, an enormous amount of CO2 has been released from the burning of all the infrastructure, forests, and ground cover ignited by the nuclear blasts. After decades, once all of this soot, ash, and smoke has settled back down onto the Earth’s surface, that enormous amount of CO2 will still remain.

So nuclear summer is a hypothetical indirect effect of nuclear war: after nuclear winter, once the soot has settled, there would be a huge spike in average global temperatures due to the CO2 left over from the firestorms.

Ariel: And so how likely is all of this to happen? Is there actually a chance that these types of wars could occur? Or is this mostly something that people are worrying about unnecessarily?

Lucas: The risk of a nuclear war is non-zero. It’s very difficult to quantify exactly what the risk is, but we can say that there have been at least 21 nuclear close calls in which nuclear war was almost started by mistake. And those 21 close calls are just the ones we know about. How many more have there been that we simply don’t know about, or that governments have managed to keep secret? Consider, too, that as tensions rise between the United States and Russia, as the risks of terrorism and cyber attack continue to grow, and as the conflict between India and Pakistan is continually exacerbated, the threat of nuclear war is actually increasing, not decreasing.

Ariel:  So there is a risk, and we know that we have more nuclear weapons than we actually need for deterrence. Even if we want to keep enough weapons for deterrence, we don’t need as many as we have. I’m guessing that the government is not going to do anything about this, so what can people do to try to have an impact themselves?

Lucas: One potentially high-impact way of engaging with this nuclear issue is divestment. We have power as voters, consumers, and producers, but perhaps even more importantly, we have power over what we invest in. We can choose to invest in companies that are socially responsible or in ones that are not. So through divestment, we can take money away from nuclear weapons producers. Beyond that, we can use our divestment efforts to stigmatize nuclear weapons production and our current nuclear situation.

Ariel:  But my understanding is that most of our nuclear weapons are funded by the government, so how would a divestment campaign actually be impactful, given that the money for nuclear weapons wouldn’t necessarily disappear?

Lucas: The most important part of divestment in the area of nuclear weapons is actually the stigmatization. When massive numbers of people divest from something, it puts a lot of light and heat on the subject. It influences the public consciousness and brings the issue of nuclear weapons back into view. And once something has been stigmatized past a critical point, it becomes politically and socially untenable. Divestment also stimulates new education and research on the topic, and it gets people personally invested in the issue.

Ariel:  And so have there been effective campaigns that used divestment in the past?

Lucas: There have been many campaigns in the past that have used divestment as an effective means of creating important change in the world. A few examples are divestment from tobacco, South African apartheid, child labor, and fossil fuels. In all of these cases, people divested from institutions involved in socially irresponsible practices. In doing so, they stigmatized those practices, created capital flight from them, and generated negative media attention that brought the issues to light and showed people what was really going on.

Ariel:  I know FLI was initially inspired by a lot of the work that Don’t Bank on the Bomb has done. Can you talk a bit about some of the work they’ve done and what their success has been?

Lucas: The Don’t Bank on the Bomb campaign has identified direct and indirect investments in nuclear weapons producers made by large institutions in both Europe and America. Building on this, they have engaged with many banks in Europe to help them exclude these direct and indirect investments from their portfolios and mutual funds, thus helping them construct socially responsible funds. A few examples of these successes are ASN Bank, ASR, and the Cooperative Bank.

Ariel:  So you’ve been very active with FLI in trying to launch a divestment campaign in the US. I was hoping you could talk a little about the work you’ve done so far and the success you’ve had.

Lucas: Inspired by the work done through the Don’t Bank on the Bomb campaign, and in conjunction with resources they provided, we were able to engage with the city of Cambridge and help them divest $1 billion from nuclear weapons-producing companies. As we continue our divestment campaign, we’re really passionate about making the information needed for divestment transparent and open. Currently we’re working on a web app that will allow you to search your mutual fund and see whether or not it has direct or indirect investments in nuclear weapons producers. In doing so, we hope to help not only cities, municipalities, and institutions divest, but also individuals like you and me.
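To make that kind of fund check concrete, here is a minimal, hypothetical Python sketch of the sort of lookup such a tool could perform: matching a fund’s published holdings against a list of nuclear weapons producers. The producer names, data format, and matching logic below are illustrative assumptions, not FLI’s actual data or implementation.

# Illustrative (not authoritative) set of nuclear-weapons-producer names.
PRODUCERS = {
    "Example Weapons Corp",
    "Sample Defense Industries",
}

def flag_holdings(holdings):
    """Return the fund holdings that match the producer list.

    `holdings` is a list of (company_name, portfolio_weight) pairs,
    e.g. parsed from a fund's published holdings report.
    """
    return [(name, weight) for name, weight in holdings if name in PRODUCERS]

# Example usage with made-up holdings for a hypothetical mutual fund.
fund = [("Example Weapons Corp", 0.012), ("Unrelated Grocer Inc", 0.050)]
print(flag_holdings(fund))  # -> [('Example Weapons Corp', 0.012)]

In practice, the hard parts are obtaining up-to-date holdings data and matching company names and subsidiaries reliably, which is exactly the information the campaign aims to make transparent.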

Ariel:  Lucas, this has been great. Thank you so much for sharing information about the work you’ve been doing so far. If anyone has any questions about how they can divest from nuclear weapons, please email Lucas at lucas@futureoflife.org. You can also check out our new web app at futureoflife.org/invest.

[end of recorded material]

Learn more about nuclear weapons in the 21st Century:

What is hair-trigger alert?

How many nuclear weapons are there and who has them?

What are the consequences of nuclear war?

What would the world look like after a U.S. and Russia nuclear war?

How many nukes would it take to make the Earth uninhabitable?

What are the specific effects of nuclear winter?

What can I do to mitigate the risk of nuclear war?

Do we really need so many nuclear weapons on hair-trigger alert?

What sort of new nuclear policy could we adopt?

How can we restructure strategic U.S. nuclear forces?

Taking Down the Internet

Imagine the world without the Internet. Not what the world was like before the Internet existed, but what would happen in today’s world if the Internet suddenly went down.

How many systems today rely on the Internet to run smoothly? If the Internet were to go down, it would disrupt work, government, financial transactions, communications, shipments, travel, and entertainment; nearly every aspect of modern life could be brought to a halt. If someone were able to intentionally take down the Internet, how much damage could they cause?

Cybersecurity expert Bruce Schneier recently wrote a post, Someone Is Learning How to Take Down the Internet, which highlights the increasing number of attacks focused on “probing the defenses of the companies that run critical pieces of the Internet.”

In his post, Schneier explains that someone — he suspects a large nation state like Russia or China — has been systematically testing and probing various large and important Internet companies for weaknesses. He says companies like Verisign, which operates the registry for major top-level domains such as .com and .net, have seen increasing, large-scale attacks against their systems. The attackers are forcing the companies to mount various defenses in response, giving the attackers a better idea of what defensive capabilities the companies have and where their defenses may be weak.

Schneier writes, “Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services. […] It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar. It reminds me of the US’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.”

At the moment, there doesn’t appear to be much that can be done about these attacks, but at the very least, as Schneier says, “people should know.”

Read Schneier’s full article here.

MIRI September 2016 Newsletter

Research updates

General updates

News and links

See the original newsletter on MIRI’s website.

Nuclear Weapons and the Myth of the “Re-Alerting Race”

The following article was originally posted on the Union of Concerned Scientists’ blog, The Equation.

One of the frustrations of trying to change policy is that frequently repeated myths can short-circuit careful thinking about current policies and keep policy makers from recognizing better alternatives.

That is particularly frustrating—and dangerous—when the topic is nuclear weapons.

Under current policies, accidental or mistaken nuclear war is more likely than it should be. Given the consequences, that’s a big deal.

We’ve posted previously about the dangers of the US policy of keeping nuclear missiles on hair-trigger alert so that they can be launched quickly in response to warning of attack. There is a surprisingly long list of past incidents in which human and technical errors have led to false warning of attack in both the US and the Soviet Union/Russia—increasing the risk of an accidental nuclear war.

Missile launch officers. (Source: Dept. of Defense)

This risk is particularly high in times of tension—and especially during a crisis—since in that case the people in charge are much more likely to interpret false or ambiguous warning as being real.

The main problem here is silo-based missiles (ICBMs), since they are at known locations an adversary could target. The argument goes that launch-on-warning allows the ICBMs to be launched before an incoming attack could destroy them, and that this deters an attack from occurring in the first place.

But deterring an attack does not depend on our land-based missiles. Most of the US nuclear force is at sea, hidden under the ocean in submarines, invulnerable to attack. And since the sub-based missiles can’t be attacked, they are not under the same pressure to launch quickly.

It’s for this reason that the sensible thing to do is to take ICBMs off hair-trigger alert and eliminate options for launching on warning of attack, which would eliminate the possibility of mistaken launches due to false or ambiguous warning. Security experts and high-level military officials agree.

(It’s worth noting that the US does not have a launch-on-warning doctrine, meaning that there is no requirement to launch on warning. But it continues to maintain launch-on-warning as an option, and to do that it needs to keep its ICBMs on hair-trigger alert.)

The myth of the “re-alerting race”

The main reason administration officials give for keeping missiles on alert is the “re-alerting race” and crisis instability. The argument is that if the United States takes its missiles off hair-trigger alert and a crisis starts to brew, it would want to put them back on alert so they would not be vulnerable to an attack. And the act of putting them back on alert—“re-alerting”—could exacerbate the crisis and lead Russia to assume the United States was readying to launch an attack. If Russia had de-alerted its missiles, it would then re-alert them, further exacerbating the crisis. Both countries could have an incentive to act quickly, leading to instability.

This argument gets repeated so often that people assume it’s simply true.

However, the fallacy of this argument is that there is no good reason for the US to re-alert its ICBMs in a crisis. They are not needed for deterrence since, as noted above, deterrence is provided by the submarine force. Moreover, historical incidents have shown that having missiles on alert during a crisis increases the risk of a mistaken launch due to false or ambiguous warning. So having ICBMs on alert in a crisis increases the risk without providing a benefit.

The administration should not just take ICBMs off hair-trigger alert. It should also eliminate the option of launching nuclear weapons on warning.

Eliminating launch-on-warning options would mean you do not re-alert the ICBMs in a crisis. With no re-alerting, there is no re-alerting race.

President Obama should act

Obama in Prague, 2009. (Source: Dept. of State)

Maybe administration officials have not thought about this as carefully as they should—although, hopefully, a key policy change that would reduce the risk of accidental nuclear war is not being rejected because of sloppy thinking.

Maybe the real reason is simply inertia in the system. The president’s 2009 speech in Prague showed he is willing to think outside the box on these issues to reduce the risk of nuclear catastrophe. So maybe it’s his advisors who are not willing to take such a step.

In that case, he should listen to the words of Gen. Eugene Habiger, former Commander in Chief of U.S. Strategic Command—the man in charge of US nuclear weapons. Earlier this year, he said:

We need to bring the alert status down of our ICBMs. And we’ve been dealing with that for many, many decades. … It’s one of those things where the services are not gonna do anything until the Big Kahuna says, “Take your missiles off alert,” and then by golly within hours the missiles and subs will be off alert.

The Big Kahuna is president until January 20, 2017. Hopefully he will get beyond the myth that has frozen sensible action on this issue, and take the sensible step of ending launch-on-warning.

Success for Cluster Munitions Divestment

“Great news!” said Don’t Bank on the Bomb’s Susi Snyder in a recent blog post, “American company Textron has announced it will end its involvement with cluster munitions.”

This decision marks a major success for those who have pushed for divestment from cluster munitions in an effort to stigmatize the weapons and the companies that create them. As Snyder explained later in her article:

“PAX and campaigners active in the Stop Explosive Investments campaign have engaged tirelessly with many investors over the years to urge them to cease their financial support of Textron. This article’s analysis suggests that pressure from the financial sector has had an effect:

‘A Couple of Hidden Positives: On the surface, yesterday’s announcement seems like a non-event, but we come away with two observations that we think investors shouldn’t overlook. First off, we note that SFW served as a product that limited the “ownability” of TXT shares among foreign investment funds, due largely to interpretations of where TXT stood vis-a-vis international weapons treaties. Arguably, the discontinuation of this product line could expand the addressable investor base for TXT shares by a material amount (i.e. most of Europe), in an industrial vertical (A&D) where investable choices are slim but performance has been strong over the years.’”

Stop Explosive Investments wrote a more detailed post about Textron’s announcement:

“US company Textron announced it will end its involvement with cluster munitions. It produced the Sensor Fuzed Weapon (SFW), which is banned under the 2008 Convention on Cluster Munitions (CCM). This good news comes a few days before the Sixth Meeting of States Parties to the Convention on Cluster Munitions in Geneva next week.

“Over the years, CMC-member PAX has identified Textron as a cluster munition producer in the “Worldwide Investments in Cluster Munitions; a shared responsibility” report. The 2016 update revealed that worldwide, 49 financial institutions had financial ties to Textron, with a total of US$12,370.83 million invested.

“‘Campaigners active in the Stop Explosive Investments campaign have engaged tirelessly with many investors over the years to urge them to cease their financial support of Textron’, says Megan Burke, director of the Cluster Munition Coalition. ‘The company’s decision to end their cluster munition production is a great success for all of us working for a world free of cluster munitions.’

“Research by Human Rights Watch and Amnesty International showed that Textron’s Sensor Fuzed Weapons were used in Yemen by the Saudi-led coalition. On 27 May 2016, the United States government blocked the transfer of these Sensor Fuzed Weapons to Saudi Arabia because of concern about the use of cluster munitions in or near civilian areas. Now Textron has decided to end the production of these weapons altogether. The company cites a decline in orders and ‘the current political climate’ as motivation, an indication that the CCM is the global norm and that the stigma associated with cluster bombs is ever-growing.

“Pressure from the financial sector has likely also impacted this decision. As a financial analyst explains in this article: ‘[…] interpretations of where Textron stood vis-a-vis international weapons treaties’ meant many (European) investors had excluded the company from their investment universe. Suzanne Oosterwijk from PAX: ‘Such exclusions send a clear message to companies that they are not acceptable business partners as long as they are involved in the production of cluster munitions.’

“Since the launch of the Stop Explosive Investments campaign dozens of financial institutions have installed policies to disinvest from cluster munition producers, and 10 states have legislation to prohibit such investments.

“On Tuesday 6 September, during the Sixth Meeting of States Parties, the CMC and PAX will hold a side event on disinvestment from cluster munitions and will urge more countries to ban investments in cluster munitions producers.”

The success of cluster munitions divestment provides further evidence that divestment is an effective means of influencing company decisions. This is an encouraging announcement for those hoping to reduce the world’s nuclear arsenals via divestment.