The U.S. Worldwide Threat Assessment Includes Warnings of Cyber Attacks, Nuclear Weapons, Climate Change, and More

Last Thursday – just one day before the WannaCry ransomware attack would shut down 16 hospitals in the UK and ultimately hit hundreds of thousands of organizations and individuals in over 150 countries – the Director of National Intelligence, Daniel Coats, released the Worldwide Threat Assessment of the US Intelligence Community.

Large-scale cyber attacks are among the first risks cited in the document, which warns that “cyber threats also pose an increasing risk to public health, safety, and prosperity as cyber technologies are integrated with critical infrastructure in key sectors.”

Perhaps the document’s other most prescient – or at least well-timed – warning relates to North Korea’s ambitions to develop nuclear-armed intercontinental ballistic missiles (ICBMs). Coats writes:

“Pyongyang is committed to developing a long-range, nuclear-armed missile that is capable of posing a direct threat to the United States; it has publicly displayed its road-mobile ICBMs on multiple occasions. We assess that North Korea has taken steps toward fielding an ICBM but has not flight-tested it.”

This past Sunday, North Korea performed a missile test launch, which many experts believe shows considerable progress toward the development of an ICBM. However, the report suggests the launch may be less an actual threat than a show of force. “We have long assessed that Pyongyang’s nuclear capabilities are intended for deterrence, international prestige, and coercive diplomacy,” Coats says in the report.

More Nuclear Threats

The Assessment also addresses the potential of nuclear threats from China and Pakistan. China “continues to modernize its nuclear missile force by adding more survivable road-mobile systems and enhancing its silo-based systems. This new generation of missiles is intended to ensure the viability of China’s strategic deterrent by providing a second-strike capability.” In addition, China is also working to develop “its first long-range, sea-based nuclear capability.”

Meanwhile, though Pakistan’s nuclear program doesn’t pose a direct threat to the U.S., advances in Pakistan’s nuclear capabilities could risk further destabilization along the India-Pakistan border.

The report warns: “Pakistan’s pursuit of tactical nuclear weapons potentially lowers the threshold for their use.” And of the ongoing conflicts between Pakistan and India, it says, “Increasing numbers of firefights along the Line of Control, including the use of artillery and mortars, might exacerbate the risk of unintended escalation between these nuclear-armed neighbors.”

This could be especially problematic because “early deployment during a crisis of smaller, more mobile nuclear weapons would increase the amount of time that systems would be outside the relative security of a storage site, increasing the risk that a coordinated attack by non-state actors might succeed in capturing a complete nuclear weapon.”

Even a small nuclear war between India and Pakistan could trigger a nuclear winter, plunging the planet into a mini ice age and starving an estimated one billion people.

Artificial Intelligence

Nukes aren’t the only weapons the government is worried about. The report also expresses concern about the impact of artificial intelligence on weaponry: “Artificial Intelligence (AI) is advancing computational capabilities that benefit the economy, yet those advances also enable new military capabilities for our adversaries.”

Coats worries that AI could negatively impact other aspects of society, saying, “The implications of our adversaries’ abilities to use AI are potentially profound and broad. They include an increased vulnerability to cyber attack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment.”

Space Warfare

But threats of war are not expected to remain Earth-bound. The Assessment also addresses issues associated with space warfare, which could put satellites and military communication at risk.

For example, the report warns that “Russia and China perceive a need to offset any US military advantage derived from military, civil, or commercial space systems and are increasingly considering attacks against satellite systems as part of their future warfare doctrine.”

The report also adds that “the global threat of electronic warfare (EW) attacks against space systems will expand in the coming years in both number and types of weapons.” Coats expects that EW attacks will “focus on jamming capabilities against dedicated military satellite communications” and against GPS, among others.

Environmental Risks & Climate Change

Plenty of global threats do remain Earth-bound though, and not all are directly related to military concerns. The government also sees environmental issues and climate change as potential threats to national security.

The report states, “The trend toward a warming climate is forecast to continue in 2017. … This warming is projected to fuel more intense and frequent extreme weather events that will be distributed unequally in time and geography. Countries with large populations in coastal areas are particularly vulnerable to tropical weather events and storm surges, especially in Asia and Africa.”

In addition to rising temperatures, “global air pollution is worsening as more countries experience rapid industrialization, urbanization, forest burning, and agricultural waste incineration, according to the World Health Organization (WHO). An estimated 92 percent of the world’s population live in areas where WHO air quality standards are not met.”

According to the Assessment, biodiversity loss will also continue to pose an increasing threat to humanity. The report suggests global biodiversity “will likely continue to decline due to habitat loss, overexploitation, pollution, and invasive species, … disrupting ecosystems that support life, including humans.”

The Assessment goes on to raise concerns about the rate at which biodiversity loss is occurring. It says, “Since 1970, vertebrate populations have declined an estimated 60 percent … [and] populations in freshwater systems declined more than 80 percent. The rate of species loss worldwide is estimated at 100 to 1,000 times higher than the natural background extinction rate.”

Other Threats

The examples above are just a sampling of the risks highlighted in the Assessment. A great deal of the report covers terrorism, tensions with Russia and China, other regional conflicts, organized crime, economics, and even illegal fishing. Overall, the report is relatively accessible and provides a quick summary of the greatest known risks that could threaten not only the U.S. but also other countries in 2017. You can read the report in its entirety here.

Podcast: Climate Change with Brian Toon and Kevin Trenberth

Too often, the media focus their attention on climate-change deniers, and as a result, when scientists speak with the press, it’s almost always a discussion of whether climate change is real. Unfortunately, that can make it harder for those who recognize that climate change is a legitimate threat to fully understand the science and impacts of rising global temperatures.

I recently visited the National Center for Atmospheric Research in Boulder, CO, and met with climate scientists Dr. Kevin Trenberth and CU Boulder’s Dr. Brian Toon to have a different discussion. I wanted better answers about what climate change is, what its effects could be, and how we can prepare for the future.

The discussion that follows has been edited for clarity and brevity, and I’ve added occasional comments for context. You can also listen to the podcast above or read the full transcript here for more in-depth insight into these issues.

Our discussion began with a review of the scientific evidence behind climate change.

Trenberth: “The main source of human-induced climate change is from increasing carbon dioxide and other greenhouse gases in the atmosphere. And we have plenty of evidence that we’re responsible for the over 40% increase in carbon dioxide concentrations in the atmosphere since pre-industrial times, and more than half of that has occurred since 1980.”
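
As a rough sanity check on Trenberth’s figure, the arithmetic can be sketched as follows (the input values are commonly cited approximations – about 280 ppm pre-industrial and about 404 ppm in 2016 – not numbers from the interview):

```python
# Back-of-the-envelope check of the "over 40% increase" in atmospheric CO2.
pre_industrial_ppm = 280  # commonly cited pre-industrial concentration
recent_ppm = 404          # approximate 2016 annual average

increase = (recent_ppm - pre_industrial_ppm) / pre_industrial_ppm
print(f"Increase since pre-industrial times: {increase:.0%}")
```

At these values the increase comes out to roughly 44 percent, consistent with the “over 40%” figure.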

Toon: “I think the problem is that carbon dioxide is rising proportional to population on the Earth. If you just plot carbon dioxide in the last few decades versus global population, it tracks almost exactly. In coming decades, we’re increasing global population by a million people a week. That’s a new city in the world of a million people every week somewhere, and the amount of energy that’s already committed to supporting this increasing population is very large.”

The financial cost of climate change is also quite large.

Trenberth: “2012 was the warmest year on record in the United States. There was a very widespread drought that occurred, starting here in Colorado, in the West. The drought itself was estimated to cost about $75 billion. Superstorm Sandy is a different example, and the damages associated with that are, again, estimated to be about $75 billion. At the moment, the cost of climate and weather related disasters is something like $40 billion a year.”

We discussed possible solutions to climate change, but while solutions exist, it was easy to get distracted by just how large – and deadly – the problem truly is.

Toon: “Technologically, of course, there are lots of things we can do. Solar energy and wind energy are both approaching or passing the cost of fossil fuels, so they’re advantageous. [But] there’s other aspects of this like air pollution, for example, which comes from burning a lot of fossil fuels. It’s been estimated to kill seven million people a year around the Earth. Particularly in countries like China, it’s thought to be killing about a million people a year. Even in the United States, it’s causing probably 10,000 or more deaths a year.”

Unfortunately, Toon may be underestimating the number of US deaths resulting from air pollution. A 2013 study out of MIT found that air pollution causes roughly 200,000 early deaths in the US each year. And there’s still the general problem that carbon in the atmosphere (not the same as air pollution) really isn’t something that will go away anytime soon.

Toon: “Carbon dioxide has a very, very long lifetime. Early IPCC reports would often say carbon dioxide has a lifetime of 50 years. Some people interpreted that to mean it’ll go away in 50 years, but what it really meant was that it would go into equilibrium with the oceans in about 50 years. When you go somewhere in your car, about 20% of that carbon dioxide that is released to the atmosphere is still going to be there in thousands of years. The CO2 has lifetimes of thousands and thousands of years, maybe tens or hundreds of thousands of years. It’s not reversible.”

Trenberth: “Every springtime, the trees take up carbon dioxide and there’s a draw-down of carbon dioxide in the atmosphere, but then, in the fall, the leaves fall on the forest floor and the twigs and branches and so on, and they decay and they put carbon dioxide back into the atmosphere. People talk about growing more trees, which can certainly take carbon dioxide out of the atmosphere to some extent, but then what do you do with all the trees? That’s part of the issue. Maybe you can bury some of them somewhere, but it’s very difficult. It’s not a full solution to the problem.”

Toon: “The average American uses the equivalent of about five tons of carbon a year – that’s an elephant or two. That means every year you have to go out in your backyard and bury an elephant or two.”
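
Toon’s five-ton figure is consistent with simple arithmetic (the per-capita emissions value below is a rough approximation, not a figure from the interview): U.S. per-capita CO2 emissions were on the order of 16–17 metric tons per year, and carbon makes up 12/44 of CO2’s mass:

```python
# Rough conversion from per-capita CO2 emissions to tons of carbon.
co2_tons_per_year = 16.5        # approximate U.S. per-capita CO2 emissions (metric tons)
carbon_mass_fraction = 12 / 44  # atomic masses: C = 12, O = 16, so CO2 = 44

carbon_tons = co2_tons_per_year * carbon_mass_fraction
print(f"Carbon per person per year: {carbon_tons:.1f} metric tons")
```

That works out to about 4.5 metric tons of carbon per person per year – roughly the mass of an adult elephant, as Toon says.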

We know that climate change is expected to impact farming and sea levels. And we know that the temperature changes and increasing ocean acidification could cause many species to go extinct. But for the most part, scientists aren’t worried that climate change alone could cause the extinction of humanity. However, as a threat multiplier – that is, something that triggers other problems – climate change could lead to terrible famines, pandemics, and war. And some of this may already be underway.

Trenberth: “You don’t actually have to go a hundred years or a thousand years into the future before things can get quite disrupted relative to today. You can see some signs of that if you look around the world now. There’s certainly studies that have suggested that the changes in climate, and the droughts that occur and the wildfires and so on are already extra stressors on the system and have exacerbated wars in Sudan and in Syria. It’s one of the things which makes it very worrying for security around the world to the defense department, to the armed services, who are very concerned about the destabilizing effects of climate change around the world.”

Some of the instabilities around the world today are already leading to discussion about the possibility of using nuclear weapons. But too many nuclear weapons could trigger the “other” climate change: nuclear winter.

Toon: “Nuclear winter is caused by burning cities. If there were a nuclear war in which cities were attacked then the smoke that’s released from all those fires can go into the stratosphere and create a veil of soot particles in the upper atmosphere, which are very good at absorbing sunlight. It’s sort of like geoengineering in that sense; it reduces the temperature of the planet. Even a little war between India and Pakistan, for example — which, incidentally, have about 400 nuclear weapons between them at the moment — if they started attacking each other’s cities, the smoke from that could drop the temperature of the Earth back to preindustrial conditions. In fact, it’d be lower than anything we’ve seen in the climate record since the end of the last ice age, which would be devastating to mid-latitude agriculture.

“This is an issue people don’t really understand: the world food storage is only about 60 days. There’s not enough food on the planet to feed the population for more than 60 days. There’s only enough food in an average city to feed the city for about a week. That’s the same kind of issue that we’re coming to also with the changes in agriculture that we might face in the next century just from global warming. You have to be able to make up those food losses by shipping food from some other place. Adjusting to that takes a long time.”

Concern about our ability to adjust was a common theme. Climate change is occurring so rapidly that it will be difficult for all species, even people, to adapt quickly enough.

Trenberth: “We’re way behind in terms of what is needed because if you start really trying to take serious action on this, there’s a built-in delay of 20 or 30 years because of the infrastructure that you have in order to change that around. Then there’s another 20-year delay because the oceans respond very, very slowly. If you start making major changes now, you end up experiencing the effects of those changes maybe 40 years from now or something like that. You’ve really got to get ahead of this.

“The atmosphere is a global commons. It belongs to everyone. The air that’s over the US, a week later is over in Europe, and a week later it’s over China, and then a week later it’s back over the US again. If we dump stuff into the atmosphere, it gets shared among all of the nations.”

Toon: “Organisms are used to evolving and compensating for things, but not on a 40-year timescale. They’re used to slowly evolving and slowly responding to the environment, and here they’re being forced to respond very quickly. That’s an extinction problem. If you make a sudden change in the environment, you can cause extinctions.”

As dire as the situation might seem, there are still ways in which we can address climate change.

Toon: “I’m hopeful that, at the local level, things will happen. I’m hopeful that money will be made out of converting to other energy systems, and that those things will move us forward despite the apparent inability of politicians to deal with things.”

Trenberth: “The real way of doing this is probably to create other kinds of incentives such as through a carbon tax, as often referred to, or a fee on carbon of some sort, which recognizes the downstream effects of burning coal both in terms of air pollution and in terms of climate change that’s currently not built into the cost of burning coal, and it really ought to be.”

Toon: “[There] is not really a question anymore about whether climate change is occurring or not. It certainly is occurring. However, how do you respond to that? What do you do? At least in the United States, it’s very clear that we’re a capitalistic society, and so we need to make it economically advantageous to develop these new energy technologies. I suspect that we’re going to see the rise of China and Asia in developing renewable energy and selling that throughout the world for the reason that it’s cheaper and they’ll make money out of it. [And] we’ll wake up behind the curve.”

Why 2016 Was Actually a Year of Hope

Just about everyone found something to dislike about 2016, from wars to politics and celebrity deaths. But hidden within this year’s news feeds were some really exciting news stories. And some of them can even give us hope for the future.

Artificial Intelligence

Though concerns about the future of AI still loom, 2016 was a great reminder that, when harnessed for good, AI can help humanity thrive.

AI and Health

Some of the most promising – and potentially most immediate – breakthroughs and announcements were related to health. Google’s DeepMind announced a new division that would focus on helping doctors improve patient care. Harvard Business Review considered what an AI-enabled hospital might look like, which would improve the hospital experience for the patient, the doctor, and even the patient’s visitors and loved ones. A breakthrough from MIT researchers could see AI used to more quickly and effectively design new drug compounds that could be applied to a range of health needs.

More specifically, Microsoft wants to cure cancer, and the company has been working with research labs and doctors around the country to use AI to improve cancer research and treatment. But Microsoft isn’t the only company that hopes to cure cancer. DeepMind Health also partnered with University College London’s hospitals to apply machine learning to diagnose and treat head and neck cancers.

AI and Society

Other researchers are turning to AI to help solve social issues. While AI has what is known as the “white guy problem” and examples of bias cropped up in many news articles, Fei-Fei Li has been working with STEM girls at Stanford to bridge the gender gap. Stanford researchers also published research that suggests artificial intelligence could help us use satellite data to combat global poverty.

It was also a big year for research on how to keep artificial intelligence safe as it continues to develop. Google and the Future of Humanity Institute made big headlines with their work to design a “kill switch” for AI. Google Brain also published a research agenda on various problems AI researchers should be studying now to help ensure safe AI for the future.

Even the White House got involved in AI this year, hosting four symposia on AI and releasing reports in October and December about the potential impact of AI and the necessary areas of research. The White House reports are especially focused on the possible impact of automation on the economy, but they also look at how the government can contribute to AI safety, especially in the near future.

AI in Action

And of course there was AlphaGo. In January, Google’s DeepMind published a paper announcing that the company had created a program, AlphaGo, that could beat one of Europe’s top Go players. Then, in March, in front of a live audience, AlphaGo beat the reigning world champion of Go in four out of five games. These results took the AI community by surprise and indicate that artificial intelligence may be progressing more rapidly than many in the field realized.

And AI went beyond research labs this year to be applied practically and beneficially in the real world. Perhaps most hopeful was some of the news that came out about the ways AI has been used to address issues connected with pollution and climate change. For example, IBM has had increasing success with a program that can forecast pollution in China, giving residents advanced warning about days of especially bad air. Meanwhile, Google was able to reduce its power usage by using DeepMind’s AI to manipulate things like its cooling systems.

And speaking of addressing climate change…

Climate Change

With recent news from climate scientists indicating that climate change may be coming on faster and stronger than previously anticipated and with limited political action on the issue, 2016 may not have made climate activists happy. But even here, there was some hopeful news.

Among the biggest news was the ratification of the Paris Climate Agreement. But more generally, countries, communities, and businesses came together on various issues of global warming, and Voice of America offers five examples of how this was a year of incredible global progress.

But there was also news of technological advancements that could soon help us address climate issues more effectively. Scientists at Oak Ridge National Laboratory have discovered a way to convert CO2 into ethanol. A researcher from UC Berkeley has developed a method for artificial photosynthesis, which could help us more effectively harness the energy of the sun. And a multi-disciplinary team has genetically engineered bacteria that could be used to help combat global warming.


Biotechnology

Biotechnology – with fears of designer babies and manmade pandemics – is easily one of the most feared technologies. But rather than causing harm, the latest biotech advances could help to save millions of people.


CRISPR

In the course of about two years, CRISPR-Cas9 went from a new development to what could become one of the world’s greatest advances in biology. Results of studies early in the year were promising, but as the year progressed, the news just got better. CRISPR was used to successfully remove HIV from human immune cells. A team in China used CRISPR on a patient for the first time in an attempt to treat lung cancer (treatments are still ongoing), and researchers in the US have also received approval to test CRISPR cancer treatment in patients. And CRISPR was also used to partially restore sight to blind animals.

Gene Drive

Where CRISPR could have the most dramatic, life-saving effect is in gene drives. By using CRISPR to modify the genes of an invasive species, we could potentially eliminate the unwelcome plant or animal, reviving the local ecology and saving native species that may be on the brink of extinction. But perhaps most impressive is the hope that gene drive technology could be used to end mosquito- and tick-borne diseases such as malaria, dengue, and Lyme disease. Eliminating these diseases could easily save over a million lives every year.

Other Biotech News

The year saw other biotech advances as well. Researchers at MIT addressed a major problem in synthetic biology in which engineered genetic circuits interfere with each other. Another team at MIT engineered an antimicrobial peptide that can eliminate many types of bacteria, including some of the antibiotic-resistant “superbugs.” And various groups are also using CRISPR to create new ways to fight antibiotic-resistant bacteria.

Nuclear Weapons

If ever there was a topic that does little to inspire hope, it’s nuclear weapons. Yet even here we saw some positive signs this year. The Cambridge City Council voted to divest their $1 billion pension fund from any companies connected with nuclear weapons, which earned them an official commendation from the U.S. Conference of Mayors. In fact, divestment may prove a useful tool for the general public to express displeasure with nuclear policy, and growing awareness of the nuclear weapons situation may help stigmatize the new nuclear arms race.

In February, Londoners held the largest anti-nuclear rally Britain had seen in decades, and the following month MinutePhysics posted a video about nuclear weapons that’s been seen by nearly 1.3 million people. In May, scientific and religious leaders came together to call for steps to reduce nuclear risks. And all of that pales in comparison to the attention the U.S. elections brought to the risks of nuclear weapons.

As awareness of nuclear risks grows, so do our chances of instigating the change necessary to reduce those risks.

The United Nations Takes on Weapons

But if awareness alone isn’t enough, then recent actions by the United Nations may instead be a source of hope. As October came to a close, the United Nations voted to begin negotiations on a treaty that would ban nuclear weapons. While this might not have an immediate impact on nuclear weapons arsenals, the stigmatization caused by such a ban could increase pressure on countries and companies driving the new nuclear arms race.

The U.N. also announced recently that it would officially begin looking into the possibility of a ban on lethal autonomous weapons, a cause that’s been championed by Elon Musk, Steve Wozniak, Stephen Hawking and thousands of AI researchers and roboticists in an open letter.

Looking Ahead

And why limit our hope and ambition to merely one planet? This year, a group of influential scientists led by Yuri Milner announced Breakthrough Starshot, a plan to send tiny light-propelled probes to Alpha Centauri, our nearest star system. Elon Musk later announced his plans to colonize Mars. And an MIT scientist wants to make all of these trips possible for humans by using CRISPR to reengineer our own genes to keep us safe in space.

Yet for all of these exciting events and breakthroughs, perhaps what’s most inspiring and hopeful is that this represents only a tiny sampling of all of the amazing stories that made the news this year. If trends like these keep up, there’s plenty to look forward to in 2017.

Podcast: FLI 2016 – A Year In Review

For FLI, 2016 was a great year, full of our own success, but also great achievements from so many of the organizations we work with. Max, Meia, Anthony, Victoria, Richard, Lucas, David, and Ariel discuss what they were most excited to see in 2016 and what they’re looking forward to in 2017.

AGUIRRE: I’m Anthony Aguirre. I am a professor of physics at UC Santa Cruz, and I’m one of the founders of the Future of Life Institute.

STANLEY: I’m David Stanley, and I’m currently working with FLI as a Project Coordinator/Volunteer Coordinator.

PERRY: My name is Lucas Perry, and I’m a Project Coordinator with the Future of Life Institute.

TEGMARK: I’m Max Tegmark, and I have the fortune to be the President of the Future of Life Institute.

CHITA-TEGMARK: I’m Meia Chita-Tegmark, and I am a co-founder of the Future of Life Institute.

MALLAH: Hi, I’m Richard Mallah. I’m the Director of AI Projects at the Future of Life Institute.

KRAKOVNA: Hi everyone, I am Victoria Krakovna, and I am one of the co-founders of FLI. I’ve recently taken up a position at Google DeepMind working on AI safety.

CONN: And I’m Ariel Conn, the Director of Media and Communications for FLI. 2016 has certainly had its ups and downs, and so at FLI, we count ourselves especially lucky to have had such a successful year. We’ve continued to progress with the field of AI safety research, we’ve made incredible headway with our nuclear weapons efforts, and we’ve worked closely with many amazing groups and individuals. On that last note, much of what we’ve been most excited about throughout 2016 is the great work these other groups in our fields have also accomplished.

Over the last couple of weeks, I’ve sat down with our founders and core team to rehash their highlights from 2016 and also to learn what they’re all most looking forward to as we move into 2017.

To start things off, Max gave a summary of the work that FLI does and why 2016 was such a success.

TEGMARK: What I was most excited by in 2016 was the overall sense that people are taking seriously this idea – that we really need to win this race between the growing power of our technology and the wisdom with which we manage it. Every single way in which 2016 is better than the Stone Age is because of technology, and I’m optimistic that we can create a fantastic future with tech as long as we win this race. But in the past, the way we’ve kept one step ahead is always by learning from mistakes. We invented fire, messed up a bunch of times, and then invented the fire extinguisher. We at the Future of Life Institute feel that that strategy of learning from mistakes is a terrible idea for more powerful tech, like nuclear weapons, artificial intelligence, and things that can really alter the climate of our globe.

Now, in 2016 we saw multiple examples of people trying to plan ahead and to avoid problems with technology instead of just stumbling into them. In April, we had world leaders getting together and signing the Paris Climate Accords. In November, the United Nations General Assembly voted to start negotiations about nuclear weapons next year. The question is whether they should actually ultimately be phased out; whether the nations that don’t have nukes should work towards stigmatizing building more of them – with the idea that 14,000 is way more than anyone needs for deterrence. And – just the other day – the United Nations also decided to start negotiations on the possibility of banning lethal autonomous weapons, which is another arms race that could be very, very destabilizing. And if we keep this positive momentum, I think there’s really good hope that all of these technologies will end up having mainly beneficial uses.

Today, we think of our biologist friends as mainly responsible for the fact that we live longer and healthier lives, and not as those guys who make the bioweapons. We think of chemists as providing us with better materials and new ways of making medicines, not as the people who built chemical weapons and are all responsible for global warming. We think of AI scientists as – I hope, when we look back on them in the future – as people who helped make the world better, rather than the ones who just brought on the AI arms race. And it’s very encouraging to me that not just people in general, but also the scientists in all these fields, are really stepping up and saying, “Hey, we’re not just going to invent this technology, and then let it be misused. We’re going to take responsibility for making sure that the technology is used beneficially.”

CONN: And beneficial AI is what FLI is primarily known for. So what did the other members have to say about AI safety in 2016? We’ll hear from Anthony first.

AGUIRRE: I would say that what has been great to see over the last year or so is the AI safety and beneficiality research field really growing into an actual research field. When we ran our first conference a couple of years ago, they were these tiny communities who had been thinking about the impact of artificial intelligence in the future and in the long-term future. They weren’t really talking to each other; they weren’t really doing much actual research – there wasn’t funding for it. So, it’s been remarkable to see that transform, in the last few years, into something where it takes a massive effort to keep track of all the stuff that’s being done in this space now. All the papers that are coming out, the research groups – you used to be able to just find them all, easily identified. Now, there’s this huge worldwide effort and long lists, and it’s difficult to keep track of. And that’s an awesome problem to have.

As someone who’s not in the field, but sort of watching the dynamics of the research community, that’s what’s been so great to see. A research community that wasn’t there before really has started, and I think in the past year we’re seeing the actual results of that research start to come in. You know, it’s still early days. But it’s starting to come in, and we’re starting to see papers that have been basically created using these research talents and the funding that’s come through the Future of Life Institute. It’s been super gratifying. And seeing that it’s a fairly large amount of money – but fairly small compared to the total amount of research funding in artificial intelligence or other fields – but because it was so funding-starved and talent-starved before, it’s just made an enormous impact. And that’s been nice to see.

CONN: Not surprisingly, Richard was equally excited to see AI safety becoming a field of ever-increasing interest for many AI groups.

MALLAH: I’m most excited by the continued mainstreaming of AI safety research. There are more and more publications coming out by places like DeepMind and Google Brain that have really lent additional credibility to the space, as well as a continued uptake of more and more professors, and postdocs, and grad students from a wide variety of universities entering this space. And, of course, OpenAI has come out with a number of useful papers and resources.

I’m also excited that governments have really realized that this is an important issue. So, while the White House reports that have come out recently focus more on near-term AI safety research, they did note that longer-term concerns like superintelligence are not necessarily unreasonable for later this century. And they do support – right now – funding safety work that can scale toward the future, which is really exciting. We really need more funding coming into the community for that type of research. Likewise, other governments – like the U.K., Japan, and Germany – have made very positive statements about AI safety in one form or another, as have other governments around the world.

CONN: In addition to seeing so many other groups get involved in AI safety, Victoria was also pleased to see FLI taking part in so many large AI conferences.

KRAKOVNA: I think I’ve been pretty excited to see us involved in these AI safety workshops at major conferences. So on the one hand, our conference in Puerto Rico that we organized ourselves was very influential and helped to kick-start making AI safety more mainstream in the AI community. On the other hand, it felt really good in 2016 to complement that with having events that are actually part of major conferences that were co-organized by a lot of mainstream AI researchers. I think that really was an integral part of the mainstreaming of the field. For example, I was really excited about the Reliable Machine Learning workshop at ICML that we helped to make happen. I think that was something that was quite positively received at the conference, and there was a lot of good AI safety material there.

CONN: And of course, Victoria was also pretty excited about some of the papers that were published this year connected to AI safety, many of which received at least partial funding from FLI.

KRAKOVNA: There were several excellent papers in AI safety this year, addressing core problems in safety for machine learning systems. For example, there was a paper from Stuart Russell’s lab published at NIPS, on cooperative inverse reinforcement learning (IRL). This is about teaching AI what humans want – how to train a reinforcement learning algorithm to learn the right reward function that reflects what humans want it to do. DeepMind and FHI published a paper at UAI on safely interruptible agents, which formalizes what it means for an RL agent not to have incentives to avoid shutdown. MIRI made an impressive breakthrough with their paper on logical inductors. I’m super excited about all these great papers coming out, and that our grant program contributed to these results.

CONN: For Meia, the excitement about AI safety went beyond just the technical aspects of artificial intelligence.

CHITA-TEGMARK: I am very excited about the dialogue that FLI has catalyzed – and also engaged in – throughout 2016, and especially regarding the impact of technology on society. My training is in psychology; I’m a psychologist. So I’m very interested in the human aspect of technology development. I’m very excited about questions like, how are new technologies changing us? How ready are we to embrace new technologies? Or how our psychological biases may be clouding our judgement about what we’re creating and the technologies that we’re putting out there. Are these technologies beneficial for our psychological well-being, or are they not?

So it has been extremely interesting for me to see that these questions are being asked more and more, especially by artificial intelligence developers and also researchers. I think it’s so exciting to be creating technologies that really force us to grapple with some of the most fundamental aspects, I would say, of our own psychological makeup. For example, our ethical values, our sense of purpose, our well-being, maybe our biases and shortsightedness and shortcomings as biological human beings. So I’m definitely very excited about how the conversation regarding technology – and especially artificial intelligence – has evolved over the last year. I like the way it has expanded to capture this human element, which I find so important. But I’m also so happy to feel that FLI has been an important contributor to this conversation.

CONN: Meanwhile, as Max described earlier, FLI has also gotten much more involved in decreasing the risk of nuclear weapons, and Lucas helped spearhead one of our greatest accomplishments of the year.

PERRY: One of the things that I was most excited about was our success with our divestment campaign. After a few months, we had great success in our own local Boston area with helping the City of Cambridge to divest its $1 billion portfolio from nuclear weapons-producing companies. And we see this as a really big and important victory within our campaign to help institutions, persons, and universities divest from nuclear weapons-producing companies.

CONN: And in order to truly be effective we need to reach an international audience, which is something Dave has been happy to see grow this year.

STANLEY: I’m mainly excited about – at least, in my work – the increasing involvement and response we’ve had from the international community in terms of reaching out about these issues. I think it’s pretty important that we engage the international community more, and not just academics. Because these issues – things like nuclear weapons and the increasing capabilities of artificial intelligence – really will affect everybody. And they seem to be really underrepresented in mainstream media coverage as well.

So far, we’ve had pretty good responses just in terms of volunteers from many different countries around the world being interested in getting involved to help raise awareness in their respective communities, either through helping develop apps for us, or translation, or promoting these ideas through social media in their own communities.

CONN: Many FLI members also participated in both local and global events and projects, like the following examples we’re about to hear from Victoria, Richard, Lucas, and Meia.

KRAKOVNA: The EAGX Oxford Conference was a fairly large conference. It was very well organized, and we had a panel there with Demis Hassabis, Nate Soares from MIRI, Murray Shanahan from Imperial, Toby Ord from FHI, and myself. I feel like overall, that conference did a good job of, for example, connecting the local EA community with the people at DeepMind, who are really thinking about AI safety concerns like Demis and also Sean Legassick, who also gave a talk about the ethics and impacts side of things. So I feel like that conference overall did a good job of connecting people who are thinking about these sorts of issues, which I think is always a great thing.  

MALLAH: I was involved in this endeavor with IEEE regarding autonomy and ethics in autonomous systems, sort of representing FLI’s positions on things like autonomous weapons and long-term AI safety. One thing that came out this year – just a few days ago, actually, due to this work from IEEE – is that the UN actually took the report pretty seriously, and it may have influenced their decision to take up the issue of autonomous weapons formally next year. That’s kind of heartening.

PERRY: A few different things that I really enjoyed doing were giving a few different talks at Duke and Boston College, and a local effective altruism conference. I’m also really excited about all the progress we’re making on our nuclear divestment application. So this is an application that will allow anyone to search their mutual fund and see whether or not their mutual funds have direct or indirect holdings in nuclear weapons-producing companies.

CHITA-TEGMARK:  So, a wonderful moment for me was at the conference organized by Yann LeCun in New York at NYU, when Daniel Kahneman, one of my thinker-heroes, asked a very important question that really left the whole audience in silence. He asked, “Does this make you happy? Would AI make you happy? Would the development of a human-level artificial intelligence make you happy?” I think that was one of the defining moments, and I was very happy to participate in this conference.

Later on, David Chalmers, another one of my thinker-heroes – this time, not the psychologist but the philosopher – organized another conference, again at NYU, trying to bring philosophers into this very important conversation about the development of artificial intelligence. And again, I felt there too, that FLI was able to contribute and bring in this perspective of the social sciences on this issue.

CONN: Now, with 2016 coming to an end, it’s time to turn our sights to 2017, and FLI is excited for this new year to be even more productive and beneficial.

TEGMARK: We at the Future of Life Institute are planning to focus primarily on artificial intelligence, and on reducing the risk of accidental nuclear war in various ways. We’re kicking off by having an international conference on artificial intelligence, and then we want to continue throughout the year providing really high-quality and easily accessible information on all these key topics, to help inform on what happens with climate change, with nuclear weapons, with lethal autonomous weapons, and so on.

And looking ahead here, I think it’s important right now – especially since a lot of people are very stressed out about the political situation in the world, about terrorism, and so on – to not ignore the positive trends and the glimmers of hope we can see as well.

CONN: As optimistic as FLI members are about 2017, we’re all also especially hopeful and curious to see what will happen with continued AI safety research.

AGUIRRE: I would say I’m looking forward to seeing in the next year more of the research that comes out, and really sort of delving into it myself, and understanding how the field of artificial intelligence and artificial intelligence safety is developing. And I’m very interested in this from the forecast and prediction standpoint.

I’m interested in trying to draw some of the AI community into really understanding how artificial intelligence is unfolding – in the short term and the medium term – as a way to understand: how long do we have? If it’s really infinity, then let’s not worry about that so much, and spend a little bit more on nuclear weapons and global warming and biotech, because those are definitely happening. If human-level AI were eight years away, honestly, I think we should be freaking out right now. And most people don’t believe that; most people seem to be in the middle, at thirty years or fifty years or something, which feels kind of comfortable – although it’s not that long, really, in the big scheme of things. But I think it’s quite important to know now: which is it? How fast are these things? How long do we really have to think about all of the issues that FLI has been thinking about in AI? How long do we have before most jobs in industry and manufacturing are replaceable by a robot being slotted in for a human? That may be five years, it may be fifteen. It’s probably not fifty years at all. And having a good forecast on those short-term questions I think also tells us what sort of things we have to be thinking about now.

And I’m interested in seeing how this massive AI safety community that’s started develops. It’s amazing to see centers kind of popping up like mushrooms after a rain all over and thinking about artificial intelligence safety. This partnership on AI between Google and Facebook and a number of other large companies getting started. So to see how those different individual centers will develop and how they interact with each other. Is there an overall consensus on where things should go? Or is it a bunch of different organizations doing their own thing? Where will governments come in on all of this? I think it will be interesting times. So I look forward to seeing what happens, and I will reserve judgement in terms of my optimism.

KRAKOVNA: I’m really looking forward to AI safety becoming even more mainstream, and even more of the really good researchers in AI giving it serious thought. Something that happened in the past year that I was really excited about, that I think is also pointing in this direction, is the research agenda that came out of Google Brain called “Concrete Problems in AI Safety.” And I think I’m looking forward to more things like that happening, where AI safety becomes sufficiently mainstream that people who are working in AI just feel inspired to do things like that and just think from their own perspectives: what are the important problems to solve in AI safety? And work on them.

I’m a believer in the portfolio approach with regards to AI safety research, where I think we need a lot of different research teams approaching the problems from different angles and making different assumptions, and hopefully some of them will make the right assumption. I think we are really moving in the direction in terms of more people working on these problems, and coming up with different ideas. And I look forward to seeing more of that in 2017. I think FLI can also help continue to make this happen.

MALLAH: So, we’re in the process of fostering additional collaboration among people in the AI safety space. And we will have more announcements about this early next year. We’re also working on resources to help people better visualize and better understand the space of AI safety work, and the opportunities there and the work that has been done. Because it’s actually quite a lot.

I’m also pretty excited about fostering continued theoretical work and practical work in making AI more robust and beneficial. The work in value alignment, for instance, is not something we see supported in mainstream AI research. And this is something that is pretty crucial to the way that advanced AIs will need to function. It won’t be very explicit instructions to them; they’ll have to be making decisions based on what they think is right. And what is right? Even structuring the way to think about what is right requires some more research.

STANLEY: We’ve had pretty good success at FLI in the past few years helping to legitimize the field of AI safety. And I think it’s going to be important because AI is playing a large role in industry and there’s a lot of companies working on this, and not just in the US. So I think increasing international awareness about AI safety is going to be really important.

CHITA-TEGMARK: I believe that the AI community has raised some very important questions in 2016 regarding the impact of AI on society. I feel like 2017 should be the year to make progress on these questions, and actually research them and have some answers to them. For this, I think we need more social scientists – along with people from other disciplines – to join this effort of really systematically investigating what would be the optimal impact of AI on people. I hope that in 2017 we will have more research initiatives that attempt to systematically study other burning questions regarding the impact of AI on society. Some examples are: how can we ensure psychological well-being for people while AI creates lots of displacement on the job market, as many people predict? How do we optimize engagement with technology, and withdrawal from it also? Will some people be left behind, like the elderly or the economically disadvantaged? How will this affect them, and how will this affect society at large?

What about withdrawal from technology? What about satisfying our need for privacy? Will we be able to do that, or is the price of having more and more customized technologies and more and more personalization of the technologies we engage with… will that mean that we will have no privacy anymore, or that our expectations of privacy will be very seriously violated? I think these are some very important questions that I would love to get some answers to. And my wish, and also my resolution, for 2017 is to see more progress on these questions, and to hopefully also be part of this work and answering them.

PERRY: In 2017 I’m very interested in pursuing the landscape of different policy and principle recommendations from different groups regarding artificial intelligence. I’m also looking forward to expanding our nuclear divestment campaign by trying to introduce divestment to new universities, institutions, communities, and cities.

CONN: In fact, some experts believe nuclear weapons pose a greater threat now than at any time during our history.

TEGMARK: I personally feel that the greatest threat to the world in 2017 is one that the newspapers almost never write about. It’s not terrorist attacks, for example. It’s the small but horrible risk that the U.S. and Russia for some stupid reason get into an accidental nuclear war against each other. We have 14,000 nuclear weapons, and this war has almost happened many, many times. So, actually what’s quite remarkable and really gives a glimmer of hope is that – however people may feel about Putin and Trump – the fact is they are both signaling strongly that they are eager to get along better. And if that actually pans out and they manage to make some serious progress in nuclear arms reduction, that would make 2017 the best year for nuclear weapons we’ve had in a long, long time, reversing this trend of ever greater risks with ever more lethal weapons.

CONN: Some FLI members are also looking beyond nuclear weapons and artificial intelligence, as I learned when I asked Dave about other goals he hopes to accomplish with FLI this year.

STANLEY: Definitely having the volunteer team – particularly the international volunteers – continue to grow, and then scale things up. Right now, we have a fairly committed core of people who are helping out, and we think that they can start recruiting more people to help out in their own communities, and really making this stuff accessible. Not just to academics, but to everybody. And that’s also reflected in the types of people we have working for us as volunteers. They’re not just academics. We have programmers, linguists, people with just high school degrees all the way up to Ph.D.s, so I think it’s pretty good that this varied group of people can get involved and contribute, and also reach out to other people they can relate to.

CONN: In addition to getting more people involved, Meia also pointed out that one of the best ways we can help ensure a positive future is to continue to offer people more informative content.

CHITA-TEGMARK: Another thing that I’m very excited about regarding our work here at the Future of Life Institute is this mission of empowering people with information. I think information is very powerful and can change the way people approach things: they can change their beliefs, their attitudes, and their behaviors as well. And by creating ways in which information can be readily distributed to the people, and with which they can engage very easily, I hope that we can create changes. For example, we’ve had a series of different apps regarding nuclear weapons that I think have contributed a lot to people’s knowledge and have brought this issue to the forefront of their thinking.

CONN: Yet as important as it is to highlight the existential risks we must address to keep humanity safe, perhaps it’s equally important to draw attention to the incredible hope we have for the future if we can solve these problems. Which is something both Richard and Lucas brought up for 2017.

MALLAH: I’m excited about trying to foster more positive visions of the future, so focusing on existential hope aspects of the future. Which are kind of the flip side of existential risks. So we’re looking at various ways of getting people to be creative about understanding some of the possibilities, and how to differentiate the paths between the risks and the benefits.

PERRY: Yeah, I’m also interested in creating and generating a lot more content that has to do with existential hope. Given the current global political climate, it’s all the more important to focus on how we can make the world better.

CONN: And on that note, I want to mention one of the most amazing things I discovered this past year. It had nothing to do with technology, and everything to do with people. Since starting at FLI, I’ve met countless individuals who are dedicating their lives to trying to make the world a better place. We may have a lot of problems to solve, but with so many groups focusing solely on solving them, I’m far more hopeful for the future. There are truly too many individuals that I’ve met this year to name them all, so instead, I’d like to provide a rather long list of groups and organizations I’ve had the pleasure to work with this year, and I encourage you to visit them all to learn more about the wonderful work they’re doing. In no particular order, they are:

Machine Intelligence Research Institute

Future of Humanity Institute

Global Catastrophic Risk Institute

Center for the Study of Existential Risk

Ploughshares Fund

Bulletin of the Atomic Scientists

Open Philanthropy Project

Union of Concerned Scientists

The William Perry Project

ReThink Media

Don’t Bank on the Bomb

Federation of American Scientists

Massachusetts Peace Action

IEEE (Institute of Electrical and Electronics Engineers)

Center for Human-Compatible Artificial Intelligence

Center for Effective Altruism

Center for Applied Rationality

Foresight Institute

Leverhulme Centre for the Future of Intelligence

Global Priorities Project

Association for the Advancement of Artificial Intelligence

International Joint Conference on Artificial Intelligence

Partnership on AI

The White House Office of Science and Technology Policy

The Future Society at Harvard Kennedy School


I couldn’t be more excited to see what 2017 holds in store for us, and all of us at FLI look forward to doing all we can to help create a safe and beneficial future for everyone. But to end on an even more optimistic note, I turn back to Max.

TEGMARK: Finally, I’d like – because I spend a lot of my time thinking about our universe – to remind everybody that we shouldn’t just be focused on the next election cycle. We have not decades, but billions of years of potentially awesome future for life, on Earth and far beyond. And it’s so important to not let ourselves get so distracted by our everyday little frustrations that we lose sight of these incredible opportunities that we all stand to gain from if we can get along, and focus, and collaborate, and use technology for good.

Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants?

In the early 1900s, the Italian chemist Giacomo Ciamician recognized that fossil fuel use was unsustainable. And like many of today’s environmentalists, he turned to nature for clues on developing renewable energy solutions, studying the chemistry of plants and their use of solar energy. He admired their unparalleled mastery of photochemical synthesis—the way they use light to synthesize energy-rich compounds from the most fundamental of substances—and how “they reverse the ordinary process of combustion.”

In photosynthesis, Ciamician realized, lay an entirely renewable process of energy creation. When sunlight reaches the surface of a green leaf, it sets off a reaction inside the leaf. Chloroplasts, energized by the light, trigger the production of chemical products—essentially sugars—which store the energy such that the plant can later access it for its biological needs. It is an entirely renewable process; the plant harvests the immense and constant supply of solar energy, absorbs carbon dioxide and water, and releases oxygen. There is no other waste.
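The process described above can be summarized by the familiar net reaction of photosynthesis, taking glucose as the representative sugar:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light energy} \;\longrightarrow\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

Combustion runs this reaction in reverse, which is exactly the symmetry Ciamician pointed to.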

If scientists could learn to imitate photosynthesis by providing concentrated carbon dioxide and suitable catalysts, they could create fuels from solar energy. Ciamician was taken by the seeming simplicity of this solution. Inspired by small successes in chemical manipulation of plants, he wondered, “does it not seem that, with well-adapted systems of cultivation and timely intervention, we may succeed in causing plants to produce, in quantities much larger than the normal ones, the substances which are useful to our modern life?”

In 1912, Ciamician sounded the alarm about the unsustainable use of fossil fuels, and he exhorted the scientific community to explore artificially recreating photosynthesis. But little was done. A century later, however, in the midst of a climate crisis, scientists armed with improved technology and growing scientific knowledge have achieved a major breakthrough toward his vision.

After more than ten years of research and experimentation, Peidong Yang, a chemist at UC Berkeley, successfully created the first photosynthetic biohybrid system (PBS) in April 2015. This first-generation PBS uses semiconductors and live bacteria to do the photosynthetic work that real leaves do—absorb solar energy and create a chemical product using water and carbon dioxide, while releasing oxygen—but it creates liquid fuels. The process is called artificial photosynthesis, and if the technology continues to improve, it may become the future of energy.

How Does This System Work?

Yang’s PBS can be thought of as a synthetic leaf. It is a one-square-inch tray that contains silicon semiconductors and living bacteria, which Yang calls a semiconductor-bacteria interface.

In order to initiate the process of artificial photosynthesis, Yang dips the tray of materials into water, pumps carbon dioxide into the water, and shines a solar light on it. As the semiconductors harvest solar energy, they generate charges to carry out reactions within the solution. The bacteria take electrons from the semiconductors and use them to transform, or reduce, carbon dioxide molecules and create liquid fuels. In the meantime, water is oxidized on the surface of another semiconductor to release oxygen. After several hours or several days of this process, the chemists can collect the product.

With this first-generation system, Yang successfully produced butanol, acetate, polymers, and pharmaceutical precursors, fulfilling Ciamician’s once-far-fetched vision of imitating plants to create the fuels that we need. This PBS achieved a solar-to-chemical conversion efficiency of 0.38%, which is comparable to the conversion efficiency in a natural, green leaf.
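To make the 0.38% figure concrete: solar-to-chemical conversion efficiency is simply the fraction of incident solar energy that ends up stored in the chemical bonds of the products. The sketch below illustrates the definition; the function name and energy values are hypothetical, not measurements from Yang’s paper.

```python
def solar_to_chemical_efficiency(chemical_energy_out_j, solar_energy_in_j):
    """Fraction of incident solar energy stored in chemical bonds."""
    return chemical_energy_out_j / solar_energy_in_j

# Hypothetical example: 1,000 J of sunlight in, 3.8 J stored as fuel
efficiency = solar_to_chemical_efficiency(3.8, 1000.0)
print(f"{efficiency:.2%}")  # prints 0.38%
```

At that rate, most of the incoming sunlight is lost as heat or unused photons, which is why later generations of the system aim for efficiencies an order of magnitude higher.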


A diagram of the first-generation artificial photosynthesis, with its four main steps.

Describing his research, Yang says, “Our system has the potential to fundamentally change the chemical and oil industry in that we can produce chemicals and fuels in a totally renewable way, rather than extracting them from deep below the ground.”

If Yang’s system can be successfully scaled up, businesses could build artificial forests that produce the fuel for our cars, planes, and power plants by following the same laws and processes that natural forests follow. Since artificial photosynthesis would absorb and reduce carbon dioxide in order to create fuels, we could continue to use liquid fuel without destroying the environment or warming the planet.

However, in order to ensure that artificial photosynthesis can reliably produce our fuels in the future, it has to be better than nature, as Ciamician foresaw. Our need for renewable energy is urgent, and Yang’s model must be able to provide energy on a global scale if it is to eventually replace fossil fuels.

Recent Developments in Yang’s Artificial Photosynthesis

Since the major breakthrough in April 2015, Yang has continued to improve his system in hopes of eventually producing fuels that are commercially viable, efficient, and durable.

In August 2015, Yang and his team tested his system with a different type of bacteria. The method is the same, except instead of electrons, the bacteria use molecular hydrogen from water molecules to reduce carbon dioxide and create methane, the primary component of natural gas. This process is projected to have an impressive conversion efficiency of 10%, which is much higher than the conversion efficiency in natural leaves.
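The chemistry described here, in which the bacteria use molecular hydrogen to reduce carbon dioxide to methane, corresponds to the standard net reaction of hydrogenotrophic methanogenesis:

```latex
\mathrm{CO_2} + 4\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}
```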

A conversion efficiency of 10% could potentially be commercially viable, but since methane is a gas it is more difficult to use than liquid fuels such as butanol, which can be transferred through pipes. Overall, this new generation of PBS needs to be designed and assembled in order to achieve a solar-to-liquid-fuel efficiency above 10%.


A diagram of this second-generation PBS that produces methane.

In December 2015, Yang advanced his system further by making the remarkable discovery that certain bacteria could grow the semiconductors by themselves. This development streamlined the two-step process of growing the nanowires and then culturing the bacteria in the nanowires. The improved semiconductor-bacteria interface could potentially be more efficient in producing acetate, as well as other chemicals and fuels, according to Yang. It also has the greatest potential for scaling up.


A diagram of this third-generation PBS that produces acetate.

In the past few weeks, Yang made yet another important breakthrough in elucidating the electron transfer mechanism at the semiconductor-bacteria interface. This sort of fundamental understanding of the charge transfer at the interface will provide critical insights for designing the next-generation PBS with better efficiency and durability. He will be releasing the details of this breakthrough shortly.

Through all these breakthroughs and modifications to the PBS, Yang clarifies, “the physics of the semiconductor-bacteria interface for the solar driven carbon dioxide reduction is now established.” As long as he has an effective semiconductor that absorbs solar energy and feeds electrons to the bacteria, the photosynthetic function will initiate, and the remarkable process of artificial photosynthesis will continue to produce liquid fuels.

Why This Solar Power Is Unique

Peter Forbes, a science writer and the author of Nanoscience: Giants of the Infinitesimal, admires Yang’s work in creating this system. He writes, “It’s a brilliant synthesis: semiconductors are the most efficient light harvesters, and biological systems are the best scavengers of CO2.”

Yang’s artificial photosynthesis relies only on solar energy. But it creates a more usable source of energy than solar panels, which are currently the most popular and commercially viable form of solar power. While the semiconductors in solar panels absorb solar energy and convert it into electricity, in artificial photosynthesis, the semiconductors absorb solar energy and store it in “the carbon-carbon bond or the carbon-hydrogen bond of liquid fuels like methane or butanol.”

This difference is crucial. The electricity generated from solar panels simply cannot meet our diverse energy needs, but these renewable liquid fuels and natural gases can. Unlike solar panels, Yang’s PBS absorbs and breaks down carbon dioxide, releases oxygen, and creates a renewable fuel that can be collected and used. With artificial photosynthesis creating our fuels, driving cars and operating machinery becomes much less harmful. As Katherine Bourzac nicely phrases it, “This is one of the best attempts yet to realize the simple equation: sun + water + carbon dioxide = sustainable fuel.”

The Future of Artificial Photosynthesis

Yang’s PBS has been advancing rapidly, but he still has work to do before the technology can be considered commercially viable. Despite encouraging conversion efficiencies, especially with methane, the PBS is not durable enough or cost-effective enough to be marketable.

In order to improve this system, Yang and his team are working to figure out how to replace the bacteria with synthetic catalysts. So far, bacteria have proven to be the most efficient catalysts, and they also offer high selectivity—that is, they can be directed to produce specific useful compounds such as butanol, acetate, polymers, and methane. But since bacteria live and die, they are less durable than a synthetic catalyst and less reliable if this technology is scaled up.

Yang has been testing PBSs with live bacteria and synthetic catalysts in parallel systems in order to discover which type works best. “From the point of view of efficiency and selectivity of the final product, the bacteria approach is winning,” Yang says, “but if down the road we can find a synthetic catalyst that can produce methane and butanol with similar selectivity, then that is the ultimate solution.” Such a system would give us the ideal fuels and the most durable semiconductor-catalyst interface that can be reliably scaled up.

Another concern is that, unlike natural photosynthesis, artificial photosynthesis requires concentrated carbon dioxide to function. This is easy to do in the lab, but if artificial photosynthesis is scaled up, Yang will have to find a feasible way of supplying concentrated carbon dioxide to the PBS. Peter Forbes argues that Yang’s artificial photosynthesis could be “coupled with carbon-capture technology to pull CO2 from smokestack emissions and convert it into fuel.” If this could be done, artificial photosynthesis would contribute to a carbon-neutral future by consuming our carbon emissions and releasing oxygen. This is not the focus of Yang’s research, but it is an integral piece of the puzzle that other scientists must provide if artificial photosynthesis is to supply the fuels we need on a large scale.

When Giacomo Ciamician considered the future of artificial photosynthesis, he imagined a future of abundant energy where humans could master the “photochemical processes that hitherto have been the guarded secret of the plants…to make them bear even more abundant fruit than nature, for nature is not in a hurry and mankind is.” And while the rush was not apparent to scientists in 1912, it is clear now, in 2016.

Peidong Yang has already created a system of artificial photosynthesis that out-produces nature. If he continues to increase the efficiency and durability of his PBS, artificial photosynthesis could revolutionize our energy use and serve as a sustainable model for generations to come. As long as the sun shines, artificial photosynthesis can produce fuels and consume waste. And in this future of artificial photosynthesis, the world would be able to grow and use fuels freely, knowing that the same natural process that created them would recycle the carbon at the other end.

Yang shares this hope for the future. He explains, “Our vision of a cyborgian evolution—biology augmented with inorganic materials—may bring the PBS concept to full fruition, selectively combining the best of both worlds, and providing society with a renewable solution to solve the energy problem and mitigate climate change.”

If you would like to learn more about Peidong Yang’s research, please visit his website at

Developing Countries Can’t Afford Climate Change

Developing countries currently cannot sustain themselves, let alone grow, without relying heavily on fossil fuels. Global warming typically takes a back seat to feeding, housing, and employing these countries’ citizens. Yet the weather fluctuations and consequences of climate change are already impacting food growth in many of these countries. Is there a solution?

Developing Countries Need Fossil Fuels

Fossil fuels are still the cheapest, most reliable energy resources available. When a developing country wants to build a functional economic system and end rampant poverty, it turns to fossil fuels.

India, for example, is home to one-third of the world’s 1.2 billion citizens living in poverty. That’s 400 million people in one country without sufficient food or shelter (for comparison, the entire U.S. population is roughly 323 million people). India hopes to transition to renewable energy as its economy grows, but the investment needed to meet its renewable energy goals “is equivalent to over four times the country’s annual defense spending, and over ten times the country’s annual spending on health and education.”

Unless something changes, developing countries like India cannot fight climate change and provide for their citizens. In fact, developing countries will only accelerate global warming as their economies grow because they cannot afford alternatives. Wealthy countries cannot afford to ignore the impact of these growing, developing countries.

The Link Between Economic Growth and CO2

According to a World Bank report, “poor and middle-income countries already account for just over half of total carbon emissions.” And this percentage will only rise as developing countries grow. Achieving a global society in which all citizens earn a living wage and climate catastrophe is averted requires breaking the link between economic growth and increasing carbon emissions in developing countries.

Today, most developing countries that decrease their poverty rates also have increased rates of carbon emissions. In East Asia and the Pacific, the number of people living in extreme poverty declined from 1.1 billion to 161 million between 1981 and 2011—an 85% decrease. In the same period, carbon dioxide emissions rose from 2.1 to 5.9 tons per capita—a 185% increase.
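A quick percent-change calculation (a sketch in Python, using the rounded figures quoted above) shows how statistics like these are derived:

```python
def pct_change(old, new):
    """Percent change from an old value to a new one."""
    return (new - old) / old * 100

# East Asia & Pacific, 1981-2011, figures as quoted in the text:
poverty = pct_change(1100, 161)  # millions in extreme poverty: about -85%
co2 = pct_change(2.1, 5.9)       # tons of CO2 per capita: about +181% with
                                 # these rounded inputs (the reported 185%
                                 # presumably reflects unrounded data)
```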

South Asia saw similar changes during this time frame. As the number of people living in extreme poverty decreased by 30%, the amount of carbon dioxide increased by 204%.

In Sub-Saharan Africa, the number of people living in poverty increased by 98% in this thirty-year span, while carbon dioxide per capita decreased by 17%. Given the current energy situation, if sub-Saharan Africans are to escape extreme poverty, they will have to increase their carbon use—unless developed countries step in to offer clean alternatives.

Carbon Emissions Rate Vs. Total

Many wealthier countries have been researching alternative forms of energy for decades. And that work may be starting to pay off.

New data shows that, since the year 2000, 21 developed countries have reduced annual greenhouse gas emissions while simultaneously growing their economies. Moreover, this isn’t all related to a drop in the industrial sector. Uzbekistan, Bulgaria, Switzerland, and the Czech Republic demonstrated that countries do not need to shrink their industrial sectors to break the link between economic growth and increased greenhouse gas emissions.

Most importantly, global carbon emissions stalled from 2014 to 2015 as the global economy grew.

But is this rate of global decoupling fast enough to keep the planet from warming another two degrees Celsius? When emissions stall at 32.1 billion metric tons per year, that still means 64.2 billion metric tons of carbon dioxide pumped into the atmosphere over two years.

The carbon emissions rate might fall, but the total continues to grow enormously. A sharp decline in carbon emissions is necessary to keep the planet at a safe global temperature. At the 2015 Paris Climate Conference, the United Nations concluded that in order to keep global temperatures from rising another two degrees Celsius, global carbon emissions “must fall to net zero in the second half of the century.”

In order to encourage this, the Paris agreement included measures to ensure that wealthy countries finance developing countries “with respect to both mitigation and adaptation.” For mitigation, countries are expected to abide by their pledges to reduce emissions and use more renewable energy, and for adaptation, the deal sets a global goal for “enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change.”

Incentivizing R&D

One way wealthy countries can benefit both themselves and developing countries is through research and development. As wealthier countries develop cheaper forms of alternative energy, developing countries can take advantage of the new technologies. Wealthy countries can also help subsidize renewable energy for countries dealing with higher rates of poverty.

Yet, as of 2014, wealthy countries had invested very little in this process, providing only 0.2% of developing countries’ GDP for adaptation and mitigation. Moreover, a 2015 paper from the IMF revealed that while we spend $100 billion per year subsidizing renewable energy, we spend an estimated $5.3 trillion subsidizing fossil fuels. This fossil fuel subsidy includes “the uncompensated costs of air pollution, congestion and global warming.”

Such a huge disparity indicates that wealthy countries either need stronger incentives or stronger legal obligations to shift this fossil fuel money towards renewable energy. The Paris agreement intends to strengthen legal obligations, but its language is vague, and it lacks details that would ensure wealthy countries follow through with their responsibilities.

However, despite the shortcomings of legal obligations, monetary incentives do exist. India, for example, wants to vastly increase its solar power capacity to address this global threat. It needs $100 billion to fund this expansion, which could spell a huge opportunity for U.S. banks, according to Raymond Vickery, an expert on U.S.-India economic ties. This would be a boon for the U.S. economy, and it would set an important precedent for other wealthy countries to assist and invest in developing countries.

However, global leaders need to move quickly. The effects of global warming already threaten the world and the economies of developing countries, especially India.

Global Impact of Climate Change

India relies on the monsoon cycle to water crops and maintain its “nearly $370 billion agricultural sector and hundreds of millions of jobs.” Yet as the Indian Ocean has warmed, the monsoon cycle has become unreliable, resulting in massive droughts and dying crops.

Across the globe, scientists expect developing countries such as India to be hit hardest by rising temperatures and changes in rainfall. Furthermore, these countries, with limited financial resources and weak infrastructure, will struggle to adapt and sustain their economic growth in the face of a changing climate. Nicholas Stern predicts that a two-degree rise in temperature would cost about 1% of world GDP. But the World Bank estimates that it would cost India 5% of its GDP.

Moreover, changes such as global warming act as “threat multipliers” because they increase the likelihood of other existential threats. In India, increased carbon dioxide emissions have contributed to warmer temperatures, which have triggered extensive droughts and increased poverty. But the problems don’t end here. Higher levels of hunger and poverty can magnify political tensions, potentially leading to conflict and even nuclear war. India and Pakistan both have nuclear weapons—if drought expands and cripples their economies, violence can more easily erupt.

Alternatively, wealthy nations could capitalize on investment opportunities in developing countries. In doing so, their own economies will benefit while simultaneously aiding the effort to reach net zero carbon emissions.

Global warming is, by definition, a global crisis. Mitigating this threat will require global cooperation and global solutions.

Op-ed: Being Alarmed Is Not the Same as Being an Alarmist

When the evidence clearly suggests that we’re heading toward a catastrophe, scientists shouldn’t hesitate to make their feelings known to the public. So, at what point should scientists begin to publicly worry about the environment?

Scientists are trained to report their findings in a disinterested manner. The aim is to be as objective as possible, and this means bracketing one’s feelings in favor of the facts.

But what happens when the evidence suggests that humanity is racing towards a global, irreversible disaster? What happens when the results of scientific inquiry clearly warrant activism in favor of a particular law or policy?

Once in a while, scientists do express their personal thoughts about the results of scientific research. For example, in 2012, Brad Werner, a geophysics researcher from the University of California, San Diego, gave a presentation at the large, annual American Geophysical Union conference. His talk was titled “Is Earth F**ked?,” and as he told a reporter for io9 afterwards, the answer is “more or less.”

Two years later, after a group of scientists found “vast methane plumes escaping from the seafloor,” the glaciologist Jason Box echoed Werner’s pessimism, tweeting: “If even a small fraction of Arctic sea floor carbon is released to the atmosphere, we’re f’d.”

Rewriting Records

There’s good reason for scientists to be honest and open about the implications of their research. The environmental situation today really is dire.

According to Gavin Schmidt of NASA’s Goddard Institute for Space Studies, there’s a 99% probability that 2016 will become the hottest year on record, surpassing the previous record set by 2015, which itself surpassed the previous record set by 2014. In fact, the hottest 16 years have all occurred since 2000, with only a single exception (1998).

What’s more, last June was the 14th consecutive month to set a temperature record. And in July, Kuwait experienced the highest temperature ever recorded in the Eastern Hemisphere, with temperatures reaching 129.2 degrees Fahrenheit. In nearby Iraq, the mercury peaked at 129.0 degrees. As Jason Samenow notes, “It’s also possible that [the] 129.2-degree reading matches the hottest ever reliably measured anywhere in the world.”

Meanwhile, the amount of carbon dioxide in the atmosphere continues to climb rapidly. Before the Industrial Revolution, the concentration was 280 parts per million (ppm). In recent years it has surpassed 400 ppm, though at first only for part of the year, because the seasonal life cycles of plants remove atmospheric carbon dioxide during the growing season.

Last year, though, the average concentration of carbon dioxide exceeded 400 ppm for the first time ever. And scientists are now saying that “carbon dioxide will never fall below 400 ppm this year, nor the next, nor the next.” In other words, no human alive today will ever again experience an atmosphere with less than 400 ppm. As the meteorologist Richard Betts puts it, “These numbers are … a reminder of the long-term effects we’re having on the system.”

Worrisome Weather

Along with record-breaking temperatures and changes to atmospheric chemistry, recent months have seen many extreme weather events. This is in part due to the 2015-2016 El Niño climate cycle, which has been “probably the most powerful in the last 100 years.”

But the more fundamental driver of extreme weather is climate change. Research shows that climate change will result in more severe floods, droughts, heat waves, and hurricanes. According to a study conducted by scientists at NASA, Cornell, and Columbia universities, we should expect “megadroughts” in the US lasting decades.

Another study predicts that certain regions could experience heat waves so scorching that “one would overheat even if they were naked in the shade, soaking wet and standing in front of a large fan.” Yet another report found that lightning strikes will increase by 50% this century.

Until recently, it was difficult for climatologists to link particular instances of extreme weather with human-caused changes to the climate. Asking whether climate change caused event X is like asking whether smoking caused Jack’s lung cancer. A doctor can explain that Jack-the-smoker is statistically more likely to get cancer than Jack-the-nonsmoker. However, a direct link is indiscernible.

But this situation is changing, as a recent report from the National Academy of Sciences affirms. Scientists are increasingly able to connect climate change with particular instances of extreme weather. And the results are worrisome.

For example, a study from last year links climate change to the 2007-2010 Syrian drought. This record-breaking event fueled the Syrian civil war by instigating a large migration of farmers into Syria’s urban centers. Furthermore, this conflict gave rise to terrorist groups like the Islamic State and Jabhat al-Nusra (al-Qaeda’s Syrian affiliate). In other words, one can trace an unbroken series of causes from climate change to the Syrian civil war to terrorism.

Panicking in Public

Climate change is a clear and present danger. Scientists don’t debate about whether it’s occurring. Nor do they disagree that its consequences will be global, catastrophic, and irreversible. According to the World Bank, “the global community is not prepared for a swift increase in climate change-related natural disasters — such as floods and droughts — which will put 1.3 billion people at risk by 2050.”

Given the high stakes and the well-established science, scientists should be waving their arms and shouting, “The situation is urgent! We must act now! The future of civilization depends upon it!” In the process, they should take care to distinguish between the distinct attitudes of “being alarmed” and “being an alarmist,” which many pundits, politicians, and journalists often conflate. The first occurs when one responds proportionally to the best available evidence. The second is what happens when one’s fear and anxiety go beyond the evidence.

Being alarmed is the appropriate response to an alarming situation, and the situation today really is alarming.

The ongoing catastrophe of climate change is not out of our control. But if we don’t act soon, Werner could be right that Earth is, well, in bad shape.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

Podcast: Could an Earthquake Destroy Humanity?

Earthquakes as Existential Risks

Earthquakes are not typically considered existential or even global catastrophic risks, and for good reason: they’re localized events. While they may be devastating to the local community, rarely do they impact the whole world. But is there some way an earthquake could become an existential or catastrophic risk? Could a single earthquake put all of humanity at risk? In our increasingly connected world, could an earthquake sufficiently exacerbate a biotech, nuclear or economic hazard, triggering a cascading set of circumstances that could lead to the downfall of modern society?

Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of FLI consider extreme earthquake scenarios to figure out if there’s any way such a risk is remotely plausible. This podcast was produced in a similar vein to MythBusters and xkcd’s What If series.

We only consider a few scenarios in this podcast, but we’d love to hear from other people. Do you have ideas for an extreme situation that could transform a locally devastating earthquake into a global calamity?

This episode features insight from seismologist Martin Chapman of Virginia Tech.

Op-ed: Climate Change Is the Most Urgent Existential Risk

Climate change and biodiversity loss may pose the most immediate and important threat to human survival given their indirect effects on other risk scenarios.

Humanity faces a number of formidable challenges this century. Threats to our collective survival stem from asteroids and comets, supervolcanoes, global pandemics, climate change, biodiversity loss, nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial superintelligence.

With such threats in mind, an informal survey conducted by the Future of Humanity Institute placed the probability of human extinction this century at 19%. To put this in perspective, it means that the average American is more than a thousand times more likely to die in a human extinction event than a plane crash.*

So, given limited resources, which risks should we prioritize? Many intellectual leaders, including Elon Musk, Stephen Hawking, and Bill Gates, have suggested that artificial superintelligence constitutes one of the most significant risks to humanity. And this may be correct in the long term. But I would argue that two other risks, namely climate change and biodiversity loss, should take priority right now over every other known threat.

Why? Because these ongoing slow-motion catastrophes will frame our existential predicament on Earth not just for the rest of this century, but for literally thousands of years to come. As such, they have the capacity to raise or lower the probability of other risk scenarios unfolding.

Multiplying Threats

Ask yourself the following: are wars more or less likely in a world marked by extreme weather events, megadroughts, food supply disruptions, and sea-level rise? Are terrorist attacks more or less likely in a world beset by the collapse of global ecosystems, agricultural failures, economic uncertainty, and political instability?

Both government officials and scientists agree that the answer is “more likely.” For example, the current Director of the CIA, John Brennan, recently identified “the impact of climate change” as one of the “deeper causes of this rising instability” in countries like Syria, Iraq, Yemen, Libya, and Ukraine. Similarly, the former Secretary of Defense, Chuck Hagel, has described climate change as a “threat multiplier” with “the potential to exacerbate many of the challenges we are dealing with today — from infectious disease to terrorism.”

The Department of Defense has also affirmed a connection. In a 2015 report, it states, “Global climate change will aggravate problems such as poverty, social tensions, environmental degradation, ineffectual leadership and weak political institutions that threaten stability in a number of countries.”

Scientific studies have further shown a connection between the environmental crisis and violent conflicts. For example, a 2015 paper in the Proceedings of the National Academy of Sciences argues that climate change was a causal factor behind the record-breaking 2007-2010 drought in Syria. This drought led to a mass migration of farmers into urban centers, which fueled the 2011 Syrian civil war. Some observers, including myself, have suggested that this struggle could be the beginning of World War III, given the complex tangle of international involvement and overlapping interests.

The study’s conclusion is also significant because the Syrian civil war was the Petri dish in which the Islamic State consolidated its forces, later emerging as the largest and most powerful terrorist organization in human history.

A Perfect Storm

The point is that climate change and biodiversity loss could very easily push societies to the brink of collapse. This will exacerbate existing geopolitical tensions and introduce entirely new power struggles between state and nonstate actors. At the same time, advanced technologies will very likely become increasingly powerful and accessible. As I’ve written elsewhere, the malicious agents of the future will have bulldozers rather than shovels to dig mass graves for their enemies.

The result is a perfect storm of more conflicts in the world along with unprecedentedly dangerous weapons.

If the conversation were to end here, we’d have ample reason for placing climate change and biodiversity loss at the top of our priority lists. But there are other reasons they ought to be considered urgent threats. I would argue that they could make humanity more vulnerable to a catastrophe involving superintelligence and even asteroids.

The basic reasoning is the same for both cases. Consider superintelligence first. Programming a superintelligence whose values align with ours is a formidable task even in stable circumstances. As Nick Bostrom argues in his 2014 book, we should recognize the “default outcome” of superintelligence to be “doom.”

Now imagine trying to solve these problems amidst a rising tide of interstate wars, civil unrest, terrorist attacks, and other tragedies. The societal stress caused by climate change and biodiversity loss will almost certainly compromise important conditions for creating friendly AI, such as sufficient funding, academic programs to train new scientists, conferences on AI, peer-reviewed journal publications, and communication and collaboration between experts in different fields, such as computer science and ethics.

It could even make an “AI arms race” more likely, thereby raising the probability of a malevolent superintelligence being created either on purpose or by mistake.

Similarly, imagine that astronomers discover a behemoth asteroid barreling toward Earth. Will designing, building, and launching a spacecraft to divert the assassin past our planet be easier or more difficult in a world preoccupied with other survival issues?

In a relatively peaceful world, one could imagine an asteroid actually bringing humanity together by directing our attention toward a common threat. But if the “conflict multipliers” of climate change and biodiversity loss have already catapulted civilization into chaos and turmoil, I strongly suspect that humanity will become more, rather than less, susceptible to dangers of this sort.

Context Risks

We can describe the dual threats of climate change and biodiversity loss as “context risks.” Neither is likely to directly cause the extinction of our species. But both will define the context in which civilization confronts all the other threats before us. In this way, they could indirectly contribute to the overall danger of annihilation — and this worrisome effect could be significant.

For example, according to the Intergovernmental Panel on Climate Change, the effects of climate change will be “severe,” “pervasive,” and “irreversible.” Or, as a 2016 study published in Nature and authored by over twenty scientists puts it, the consequences of climate change “will extend longer than the entire history of human civilization thus far.”

Furthermore, a recent article in Science Advances confirms that humanity has already escorted the biosphere into the sixth mass extinction event in life’s 3.8-billion-year history on Earth. Yet another study suggests that we could be approaching a sudden, irreversible, catastrophic collapse of the global ecosystem. If this were to occur, it could result in “widespread social unrest, economic instability and loss of human life.”

Given the potential for environmental degradation to elevate the likelihood of nuclear wars, nuclear terrorism, engineered pandemics, a superintelligence takeover, and perhaps even an impact winter, it ought to take precedence over all other risk concerns — at least in the near-term. Let’s make sure we get our priorities straight.

* How did I calculate this? First, the average American’s lifetime chance of dying in an “Air and space transport accident” was 1 in 9737 as of 2013, according to the Insurance Information Institute. The US life expectancy is currently 78.74 years, which we can round up to 80 years for simplicity. Second, the informal Future of Humanity Institute (FHI) survey puts the probability of human extinction this century at 19%. Assuming independence, it follows that the probability of human extinction in an 80-year period (the US life expectancy) is 15.5%. Finally, the last step is to figure out the ratio between the 15.5% figure and the 1 in 9737 statistic. To do this, divide .155 by 1/9737. This gives 1509.235. And from here we can conclude that, if the FHI survey is accurate, “the average American is more than a thousand times more likely to die in a human extinction event than a plane crash.”
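The footnote’s arithmetic can be reproduced in a few lines (a sketch in Python; the 19% figure is the informal FHI survey estimate, and spreading the risk evenly across the century is the footnote’s own independence assumption):

```python
p_century = 0.19   # FHI survey: probability of human extinction per 100 years
years = 80         # US life expectancy, rounded up as in the footnote

# Under the independence assumption, survival over 80 of the 100 years
# is (1 - p)^(80/100), so the extinction probability over a lifetime is:
p_lifetime = 1 - (1 - p_century) ** (years / 100)   # roughly 0.155 (15.5%)

p_plane = 1 / 9737   # lifetime odds of dying in an air/space accident
ratio = 0.155 / p_plane   # roughly 1509, the "more than a thousand times"
```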

Congress Subpoenas Climate Scientists in Effort to Hamper ExxonMobil Fraud Investigation

ExxonMobil executives may have intentionally misled the public about climate change – for decades. And the House Science Committee just hampered legal efforts to learn more about ExxonMobil’s actions by subpoenaing the nonprofit scientists who sought to find out what the fossil fuel giant knew and when.

For 40 years, tobacco companies intentionally misled consumers to believe that smoking wasn’t harmful. Now it appears that many in the fossil fuel industry may have applied similarly deceptive tactics – and for just as long – to confuse the public about the dangers of climate change.

Investigative research by nonprofit groups like InsideClimate News and the Union of Concerned Scientists (UCS) has turned up evidence that ExxonMobil may have known about the hazards of fossil-fuel-driven climate change as far back as the 1970s. However, rather than informing the public or taking steps to reduce such risks, documents indicate that ExxonMobil leadership chose to cover up the findings and instead convince the public that climate science couldn’t be trusted.

As a result of these findings, the Attorneys General (AGs) from New York and Massachusetts launched a legal investigation to determine if ExxonMobil committed fraud, including subpoenaing the company for more information. That’s when the House Science, Space and Technology Committee Chairman Lamar Smith stepped in.

Chairman Smith, under powerful new House rules, unilaterally subpoenaed not just the AGs, but also many of the nonprofits involved in the ExxonMobil investigation, including groups like the UCS. Smith and other House representatives argue that they’re merely supporting ExxonMobil’s rights to free speech and to form opinions based on scientific research.

However, no one is targeting ExxonMobil for expressing an opinion. The Attorneys General and the nonprofits are investigating what may have been intentional fraud.

In a public statement, Ken Kimmell, president of the Union of Concerned Scientists, said:

“We do not accept Chairman Smith’s premise that fraud, if committed by ExxonMobil, is protected by the First Amendment. It’s beyond ironic for Chairman Smith to violate our actual free speech rights in the name of protecting ExxonMobil’s supposed right to misrepresent the work of its own scientists and deceive shareholders and the public. […]

“Smith is misusing the House Science Committee’s subpoena power in a way that should concern everyone across the political spectrum. Today, the target is UCS and others concerned about climate change. But if these kinds of subpoenas are allowed, who will be next and on what basis?”

In fact, Chairman Smith also subpoenaed climate scientists at the National Oceanic and Atmospheric Administration (NOAA) in the fall of 2015 and again earlier this year. UCS representatives are calling this a blatant “abuse of power” on the part of the government and ExxonMobil.

Gretchen Goldman, a lead analyst for UCS, wrote: “Abuse of power is when a company exploits its vast political network to squash policies that would address climate change.”

The complete list of nonprofits subpoenaed by Chairman Smith includes the Climate Accountability Institute, the Climate Reality Project, Greenpeace, Pawa Law Group PC, the Rockefeller Brothers Fund, the Rockefeller Family Fund, and the Union of Concerned Scientists.

Editorial note:

At FLI, we strive to remain nonpartisan and apolitical. Our goal — to ensure a bright future for humanity — clearly spans the political spectrum. However, we cannot, in good conscience, stand back and simply witness this political attack on science in silence. To understand and mitigate climate change, we need scientific research. We need political leaders to let scientists do their jobs without intimidation.

The Problem with Brexit: 21st Century Challenges Require International Cooperation

Retreating from international institutions and cooperation will handicap humanity as we tackle our greatest problems.

The UK’s referendum in favor of leaving the EU and the rise of nationalist ideologies in the US and Europe are worrying on multiple fronts. Nationalism espoused by the likes of Donald Trump (U.S.), Nigel Farage (U.K.), Marine Le Pen (France), and Heinz-Christian Strache (Austria) may lead to a resurgence of some of the worst problems of the first half of the 20th century. These leaders are calling for policies that would constrain trade and growth, encourage domestic xenophobia, and increase rivalries and suspicion between countries.

Even more worrying, however, is the bigger picture. In the 21st century, our greatest challenges will require global solutions. Retreating from international institutions and cooperation will handicap humanity’s ability to address our most pressing upcoming challenges.

The Nuclear Age

Many of the challenges of the 20th century – issues of public health, urbanization, and economic and educational opportunity – were national problems that could be dealt with at the national level. July 16th, 1945 marked a significant turning point. On that day, American scientists tested the first nuclear weapon in the New Mexican desert. For the first time in history, individual human beings had within their power a technology capable of destroying all of humanity.

Thus, nuclear weapons became the first truly global problem. Weapons with such a destructive force were of interest to every nation and person on the planet. Only international cooperation could produce a solution.

Despite a dangerous arms race between the US and the Soviet Union — including a history of close calls — humanity survived 70 years without a catastrophic global nuclear war. This was in large part due to international institutions and agreements that discouraged wars and further proliferation.

But what if we replayed the Cold War without the U.N. mediating disputes between nuclear adversaries? And without the bitter taste of the Second World War fresh in the minds of all who participated? Would we still have the same benign outcome?

We cannot say what such a revisionist history would look like, but the chances of a catastrophic outcome would surely be higher.

21st Century Challenges

The 21st century will only bring more challenges that are global in scope, requiring more international solutions. Climate change by definition requires a global solution since carbon emissions will lead to global warming regardless of which countries emit them.

In addition, continued development of new powerful technologies — such as artificial intelligence, biotechnologies, and nanotechnologies — will concentrate ever greater power in the hands of the people who develop and control them. These technologies have the potential to improve the human condition and solve some of our biggest problems. Yet they also have the potential to cause tremendous damage if misused.

Whether through accident, miscalculation, or madness, misuse of these powerful technologies could pose a catastrophic or even existential risk. If a Cold-War-style arms race for new technologies occurs, it is only a matter of time before a close call becomes a direct hit.

Working Together

As President Obama said in his speech at Hiroshima, “Technological progress without an equivalent progress in human institutions can doom us.”

Over the next century, technological progress can greatly improve the human experience. To ensure a positive future, humanity must find the wisdom to handle the increasingly powerful technologies that it is likely to produce and to address the global challenges that are likely to arise.

Experts have blamed the resurgence of nationalism on anxieties over globalization, multiculturalism, and terrorism. Whatever anxieties there may be, we live in a global world where our greatest challenges are increasingly global, and we need global solutions. If we resist international cooperation, we will battle these challenges with one arm, perhaps both, tied behind our back.

Humanity must learn to work together to tackle the global challenges we face. Now is the time to strengthen international institutions, not retreat from them.

Existential Risks Are More Likely to Kill You Than Terrorism

People tend to worry about the wrong things.

According to a 2015 Gallup Poll, 51% of Americans are “very worried” or “somewhat worried” that a family member will be killed by terrorists. Another Gallup Poll found that 11% of Americans are afraid of “thunder and lightning.” Yet the average person is at least four times more likely to die from a lightning bolt than a terrorist attack.

Similarly, statistics show that people are more likely to be killed by a meteorite than a lightning strike (here’s how). Yet I suspect that most people are less afraid of meteorites than lightning. In these examples and so many others, we tend to fear improbable events while often dismissing more significant threats.

One finds a similar reversal of priorities when it comes to the worst-case scenarios for our species: existential risks. These are catastrophes that would either annihilate humanity or permanently compromise our quality of life. While risks of this sort are often described as “high-consequence, improbable events,” a careful look at the numbers by leading experts in the field reveals that they are far more likely than most of the risks people worry about on a daily basis.

Let’s use the probability of dying in a car accident as a point of reference. Dying in a car accident is more probable than any of the risks mentioned above. According to the 2016 Global Challenges Foundation report, “The annual chance of dying in a car accident in the United States is 1 in 9,395.” This means that if the average person lived 80 years, the odds of dying in a car crash would be about 1 in 120. (In percentages, that’s 0.01% per year, or 0.8% over a lifetime.)
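As a sanity check, the lifetime figure can be recomputed from the annual one (a rough sketch; the 1-in-9,395 annual figure is the one quoted from the report above, and the 80-year lifespan is the article's own assumption):

```python
# Annual odds of dying in a US car accident, per the figure
# quoted from the 2016 Global Challenges Foundation report.
annual_risk = 1 / 9395

# Naive lifetime odds: scale linearly by an 80-year lifespan.
naive_lifetime = 80 * annual_risk  # ~0.0085

# Compounded odds: chance of dying in at least one of 80 years,
# treating each year as an independent trial.
compound_lifetime = 1 - (1 - annual_risk) ** 80

print(f"naive:    {naive_lifetime:.4f} (about 1 in {1 / naive_lifetime:.0f})")
print(f"compound: {compound_lifetime:.4f} (about 1 in {1 / compound_lifetime:.0f})")
```

Both methods give roughly 0.85%, consistent with the article's rounded "1 in 120" and "0.8% over a lifetime."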

Compare this to the probability of human extinction assumed in the influential “Stern Review on the Economics of Climate Change,” namely 0.1% per year.* A human extinction event could be caused by an asteroid impact, supervolcanic eruption, nuclear war, a global pandemic, or a superintelligence takeover. Although this figure appears small, over time it can grow quite significant. For example, it means that the likelihood of human extinction over the course of a century is 9.5%. It follows that your chances of dying in a human extinction event are nearly 10 times higher than dying in a car accident.
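The 9.5% figure follows from compounding the Stern Review's assumed 0.1% annual probability over 100 years (a quick arithmetic check, not a calculation from the original report):

```python
# Stern Review's assumed annual probability of human extinction.
p_annual = 0.001

# Probability of at least one extinction event over a century,
# assuming the 0.1% risk repeats independently each year.
p_century = 1 - (1 - p_annual) ** 100

print(f"{p_century:.1%}")  # prints "9.5%"
```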

But how seriously should we take the 9.5% figure? Is it a plausible estimate of human extinction? The Stern Review is explicit that the number isn’t based on empirical considerations; it’s merely a useful assumption. The scholars who have considered the evidence, though, generally offer probability estimates higher than 9.5%. For example, a 2008 survey taken during a Future of Humanity Institute conference put the likelihood of extinction this century at 19%. The philosopher and futurist Nick Bostrom argues that it “would be misguided” to assign a probability of less than 25% to an existential catastrophe before 2100, adding that “the best estimate may be considerably higher.” And in his book Our Final Hour, Sir Martin Rees claims that civilization has a fifty-fifty chance of making it through the present century.

My own view more or less aligns with Rees’, given that future technologies are likely to introduce entirely new existential risks. A discussion of existential risks five decades from now could be dominated by scenarios that are unknowable to contemporary humans, just like nuclear weapons, engineered pandemics, and the possibility of “grey goo” were unknowable to people in the fourteenth century. This suggests that Rees may be underestimating the risk, since his figure is based on an analysis of currently known technologies.

Taken at face value, these estimates imply that the average person is roughly 19, 25, or even 50 times more likely to encounter an existential catastrophe than to perish in a car accident.

These figures vary so much in part because estimating the risks associated with advanced technologies requires subjective judgments about how future technologies will develop. But this doesn’t mean that such judgments must be arbitrary or haphazard: they can still be based on technological trends and patterns of human behavior. In addition, other risks like asteroid impacts and supervolcanic eruptions can be estimated by examining the relevant historical data. For example, we know that an impactor capable of killing “more than 1.5 billion people” occurs every 100,000 years or so, and supereruptions happen about once every 50,000 years.
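The recurrence intervals for these natural hazards can be converted into per-century probabilities, under the simplifying assumption (mine, not the article's) that such events follow a Poisson process:

```python
import math

def prob_per_century(mean_interval_years: float) -> float:
    """Chance of at least one event in 100 years, assuming a Poisson process."""
    return 1 - math.exp(-100 / mean_interval_years)

# Mean recurrence intervals quoted in the text.
print(f"large impactor: {prob_per_century(100_000):.2%}")  # ~0.10% per century
print(f"supereruption:  {prob_per_century(50_000):.2%}")   # ~0.20% per century
```

Both figures are far below even the most conservative expert estimates of technological existential risk, which is consistent with the article's claim that future technologies dominate the risk landscape.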

Nonetheless, it’s noteworthy that all of the above estimates agree that people should be more worried about existential risks than any other risk mentioned.

Yet how many people are familiar with the concept of an existential risk? How often do politicians discuss large-scale threats to human survival in their speeches? Some political leaders — including one of the candidates currently running for president — don’t even believe that climate change is real. And there are far more scholarly articles published about dung beetles and Star Trek than existential risks. This is a very worrisome state of affairs. Not only are the consequences of an existential catastrophe irreversible — that is, they would affect everyone living at the time plus all future humans who might otherwise have come into existence — but the probability of one happening is far higher than most people suspect.

Given the maxim that people should always proportion their fears to the best available evidence, the rational person should worry about the above risks in the following order (from least to most risky): terrorism, lightning strikes, meteorites, car crashes, and existential catastrophes. The psychological fact is that our intuitions often fail to track the dangers around us. So, if we want to ensure a safe passage of humanity through the coming decades, we need to worry less about the Islamic State and al-Qaeda, and focus more on the threat of an existential catastrophe.

*Editor’s note: To clarify, the 0.1% from the Stern Review is used here purely for comparison to the numbers calculated in this article. The number was an assumption made at Stern and has no empirical backing. You can read more about this here.

The Collective Intelligence of Women Could Save the World

Neil deGrasse Tyson was once asked about his thoughts on the cosmos. In a slow, gloomy voice, he intoned, “The universe is a deadly place. At every opportunity, it’s trying to kill us. And so is Earth. From sinkholes to tornadoes, hurricanes, volcanoes, tsunamis.” Tyson humorously described a very real problem: the universe is a vast obstacle course of catastrophic dangers. Asteroid impacts, supervolcanic eruptions, and global pandemics represent existential risks that could annihilate our species or irreversibly catapult us back into the Stone Age.

But nature is the least of our worries. Today’s greatest existential risks stem from advanced technologies like nuclear weapons, biotechnology, synthetic biology, nanotechnology, and even artificial superintelligence. These tools could trigger a disaster of unprecedented proportions. Exacerbating this situation are “threat multipliers” — issues like climate change and biodiversity loss, which, while devastating in their own right, can also lead to an escalation of terrorism, pandemics, famines, and potentially even the use of WTDs (weapons of total destruction).

The good news is that none of these existential threats are inevitable. Humanity can overcome every single known danger. But accomplishing this will require the smartest groups working together for the common good of human survival.

So, how do we ensure that we have the smartest groups working to solve the problem?

Get women involved.

A 2010 study, published in Science, made two unexpected discoveries. First, it established that groups can exhibit a collective intelligence (or c factor). Most of us are familiar with general human intelligence, which describes a person’s intelligence level across a broad spectrum of cognitive tasks. It turns out groups also have a similar “collective” intelligence that determines how successfully they can navigate these cognitive tasks. This is an important finding because “research, management, and many other kinds of tasks are increasingly accomplished by groups — working both face-to-face and virtually.” To optimize group performance, we need to understand what makes a group more intelligent.

This leads to the second unexpected discovery. Intuitively, one might think that groups with really smart members will themselves be really smart. This is not the case. The researchers found no strong correlation between the average intelligence of members and the collective intelligence of the group. Similarly, one might suspect that the group’s IQ will increase if a member of the group has a particularly high IQ. Surely a group with Noam Chomsky will perform better than one in which he’s replaced by Joe Schmo. But again, the study found no strong correlation between the smartest person in the group and the group’s collective smarts.

Instead, the study found three factors linked to group intelligence. The first pertains to the “social sensitivity” of group members, measured by the “Reading the Mind in the Eyes” test. This term refers to one’s ability to infer the emotional states of others by picking up on certain non-verbal cues. The second concerns the number of speaking turns taken by members of the group. “In other words,” the authors write, “groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking.”

The last factor relates to the number of female members: the more women in the group, the higher the group’s IQ. As the authors of the study explained, “c was positively and significantly correlated with the proportion of females in the group.” If you find this surprising, you’re not alone: the authors themselves didn’t anticipate it, nor were they looking for a gender effect.

Why do women make groups smarter? The authors suggest that it’s because women are, generally speaking, more socially sensitive than men, and the link between social sensitivity and collective intelligence is statistically significant.

Another possibility is that men tend to dominate conversations more than women, which can disrupt the flow of turn-taking. Multiple studies have shown that women are interrupted more often than men; that when men interrupt women, it’s often to assert dominance; and that men are more likely to monopolize professional meetings. In other words, there’s robust empirical evidence for what the writer and activist Rebecca Solnit describes as “mansplaining.”

These data have direct implications for existential riskology:

Given the unique, technogenic dangers that haunt the twenty-first century, we need the smartest groups possible to tackle the problems posed by existential risks. We need groups that include women.

Yet the existential risk community is marked by a staggering imbalance of gender participation. For example, a random sample of 40 members of the “Existential Risk” group on Facebook (of which I am an active member) included only 3 women. Similar asymmetries can be found in many of the top research institutions working on global challenges.

This dearth of female scholars constitutes an existential emergency. If the studies above are correct, then the groups working on existential risk issues are not nearly as intelligent as they could be.

The obvious next question is: How can the existential risk community rectify this potentially dangerous situation? Some answers are implicit in the data above: for example, men could make sure that women have a voice in conversations, aren’t interrupted, and don’t get pushed to the sidelines in conversations monopolized by men.

Leaders of existential risk studies should also strive to ensure that women are adequately represented at conferences, that their work is promoted to the same extent as men’s, and that the environments in which existential risk scholarship takes place are free of discrimination. Other factors that have been linked to women avoiding certain fields include the absence of visible role models, the pernicious influence of gender stereotypes, the onerous demands of childcare, a lack of encouragement, and the statistical preference of women for professions that focus on “people” rather than “things.”

No doubt there are other factors not mentioned, and other strategies that could be identified. What can those of us already ensconced in the field do to achieve greater balance? What changes can the community make to foster more diversity? How can we most effectively maximize the collective intelligence of teams working on existential risks?

As Sir Martin Rees writes in Our Final Hour, “what happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.” Future generations may very well thank us for taking the link between collective intelligence and female participation seriously.

Note: there’s obviously a moral argument for ensuring that women have equal opportunities, get paid the same amount as men, and don’t have to endure workplace discrimination. The point of this article is to show that even if one brackets moral considerations, there are still compelling reasons for making the field more diverse. (For more, see chapter 14 of my book, which lays out a similar argument.)

The Vicious Cycle of Ocean Currents and Global Warming: Slowing Thermohaline Circulation

The world’s oceans play a major role in mitigating the greenhouse effect, as they absorb roughly a quarter of all carbon dioxide (CO2) emissions. As this atmospheric CO2 mixes with the ocean’s surface, it forms carbonic acid, and when carbon uptake occurs on a massive scale—as it has for the past few decades—the ocean acidifies. Coral reefs and shell-forming animals are especially susceptible to overly acidic water, and their possible extinction has led to the most vocal concerns about CO2 in the ocean.

Yet despite fears that much of today’s marine life could go extinct, this process of carbon uptake in the oceans could result in an even more disturbing cycle: increased atmospheric CO2 could stall ocean currents that are essential to maintaining global temperatures, thus accelerating global warming.

Warm salt water travels north from the South Atlantic Ocean to the Arctic where it cools, becomes more saline, sinks and travels back south. This process is known as thermohaline circulation, and it moves an enormous amount of heat through the Atlantic Ocean, maintaining present climates. The Gulf Stream is the most well-known ocean current, but NASA has created a helpful global animation of the entire process of thermohaline circulation.

Today, increasing levels of carbon dioxide absorption in the Atlantic Ocean threaten to slow these important currents and endanger the ocean’s ability to absorb our emissions.

Yet this is a threat that has been recognized for at least twenty years.  In 1996, researchers Jorge Sarmiento and Corinne Le Quere found that ocean warming weakens thermohaline circulation. They concluded that this “weakened circulation reduces the ability of the ocean to absorb carbon dioxide, making the climate system even less forgiving of human emissions.” A year later, climate scientist Stefan Rahmstorf sought to understand the effects of doubling atmospheric carbon dioxide on the strength of thermohaline circulation. He looked at multiple model scenarios and discovered that thermohaline circulation could decrease by 20% to as much as 50%.

These findings suggest that if we continue to emit carbon dioxide on a large scale, we may soon be unable to rely on the ocean’s buffering capacity to mitigate our greenhouse effect.

Now, if the oceans, specifically the Atlantic Ocean, lose their ability to absorb massive amounts of carbon dioxide, presumably the process of ocean acidification will slow down, as well. But while this is a positive consequence of the ocean’s diminishing buffering capacity, it comes as a package deal with an increased level of carbon dioxide lingering in the atmosphere, augmenting the greenhouse effect. Nature might self-equilibrate to the benefit of coral reefs and shell-forming marine life, but scientists fear that the resulting increase of atmospheric carbon dioxide will further diminish thermohaline circulation and escalate the problem of global warming. This would lead to rising ocean temperatures and more Arctic ice melting.

Complicating this web of causes and effects, when more Arctic ice melts, it freshens the incoming salt water. As explained by Chris Mooney of the Washington Post, “if the water is less salty it will also be less dense, reducing its tendency to sink below the surface. This could slow or even eventually shut down the circulation.” This consequence feeds a cycle that decreases the buffering capacity of the oceans and raises ocean temperatures. Further complicating this relationship, a 2013 report on ocean acidification by the Congressional Research Service noted that “all gases, including CO2, are less soluble in water as temperature increases.” Thus, it seems inevitable that the oceans will become worse at absorbing carbon dioxide, that thermohaline circulation will diminish further, and that global warming will accelerate.

One can begin to see the multifaceted positive feedback cycle at work here. As we emit more carbon dioxide into the atmosphere, ocean temperature rises, Arctic ice melts, thermohaline circulation slows, and the ocean’s capacity to absorb carbon dioxide diminishes. This allows more carbon dioxide to enter the atmosphere, which causes the ocean temperatures to rise faster, the ice to melt faster, thermohaline circulation to slow further, and the ocean’s capacity to absorb carbon dioxide to diminish further. The process threatens to continue ad infinitum if we don’t cut carbon emissions. Sarmiento and Le Quere concluded their study with a warning: “the magnitude of future CO2 responses to such changes would be greatly magnified because of the reduced buffering capacity of the oceans under increased atmospheric CO2.”

This disturbing cycle highlights the ocean’s integral role in mitigating global warming, and makes it all the more urgent to find practical ways to cut carbon dioxide emissions. As complex and interconnected as this web of causes and effects is, carbon dioxide emissions are undeniably the root cause. While scientists and policy advisors have understood the dangers of carbon dioxide emissions for years, a deeper understanding of the ocean’s relationship with carbon dioxide offers further evidence of the need to begin limiting emissions now.

Biodiversity Loss: An Existential Risk Comparable to Climate Change

Piping Plovers are one of many endangered bird species in North America. Photo courtesy Audrey DeRose-Wilson

The following article was originally posted in the Bulletin of the Atomic Scientists.

According to the Bulletin of the Atomic Scientists, the two greatest existential threats to human civilization stem from climate change and nuclear weapons. Both pose clear and present dangers to the perpetuation of our species, and the increasingly dire climate situation and nuclear arsenal modernizations in the United States and Russia were the most significant reasons why the Bulletin decided to keep the Doomsday Clock set at three minutes before midnight earlier this year.

But there is another existential threat that the Bulletin overlooked in its Doomsday Clock announcement: biodiversity loss. This phenomenon is often identified as one of the many consequences of climate change, and this is of course correct. But biodiversity loss is also a contributing factor behind climate change. For example, deforestation in the Amazon rainforest and elsewhere reduces the amount of carbon dioxide removed from the atmosphere by plants, a natural process that mitigates the effects of climate change. So the causal relation between climate change and biodiversity loss is bidirectional.

Furthermore, there are myriad phenomena that are driving biodiversity loss in addition to climate change. Other causes include ecosystem fragmentation, invasive species, pollution, oxygen depletion caused by fertilizers running off into ponds and streams, overfishing, human overpopulation, and overconsumption. All of these phenomena have a direct impact on the health of the biosphere, and all would conceivably persist even if the problem of climate change were somehow immediately solved.

Such considerations warrant decoupling biodiversity loss from climate change, because the former has been consistently subsumed by the latter as a mere effect. Biodiversity loss is a distinct environmental crisis with its own unique syndrome of causes, consequences, and solutions—such as restoring habitats, creating protected areas (“biodiversity parks”), and practicing sustainable agriculture.

Deforestation of the Amazon rainforest decreases natural mitigation of CO2 and destroys the habitats of many endangered species.

The sixth extinction.

The repercussions of biodiversity loss are potentially as severe as those anticipated from climate change, or even a nuclear conflict. For example, according to a 2015 study published in Science Advances, the best available evidence reveals “an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way.” This conclusion holds, even on the most optimistic assumptions about the background rate of species losses and the current rate of vertebrate extinctions. The group classified as “vertebrates” includes mammals, birds, reptiles, fish, and all other creatures with a backbone.

The article argues that, using its conservative figures, the average loss of vertebrate species was 100 times higher in the past century relative to the background rate of extinction. (Other scientists have suggested that the current extinction rate could be as much as 10,000 times higher than normal.) As the authors write, “The evidence is incontrovertible that recent extinction rates are unprecedented in human history and highly unusual in Earth’s history.” Perhaps the term “Big Six” should enter the popular lexicon—to add the current extinction to the previous “Big Five,” the last of which wiped out the dinosaurs 66 million years ago.

But the concept of biodiversity encompasses more than just the total number of species on the planet. It also refers to the size of different populations of species. With respect to this phenomenon, multiple studies have confirmed that wild populations around the world are dwindling and disappearing at an alarming rate. For example, the 2010 Global Biodiversity Outlook report found that the population of wild vertebrates living in the tropics dropped by 59 percent between 1970 and 2006.

The report also found that the population of farmland birds in Europe has dropped by 50 percent since 1980; bird populations in the grasslands of North America declined by almost 40 percent between 1968 and 2003; and the population of birds in North American arid lands has fallen by almost 30 percent since the 1960s. Similarly, 42 percent of all amphibian species (a type of vertebrate that is sometimes called an “ecological indicator”) are undergoing population declines, and 23 percent of all plant species “are estimated to be threatened with extinction.” Other studies have found that some 20 percent of all reptile species, 48 percent of the world’s primates, and 50 percent of freshwater turtles are threatened. Underwater, about 10 percent of all coral reefs are now dead, and another 60 percent are in danger of dying.

Consistent with these data, the 2014 Living Planet Report shows that the global population of wild vertebrates dropped by 52 percent in only four decades—from 1970 to 2010. While biologists often avoid projecting historical trends into the future because of the complexity of ecological systems, it’s tempting to extrapolate this figure to, say, the year 2050, which is four decades from 2010. As it happens, a 2006 study published in Science does precisely this: It projects past trends of marine biodiversity loss into the 21st century, concluding that, unless significant changes are made to patterns of human activity, there will be virtually no more wild-caught seafood by 2048.

48% of the world's primates are threatened with extinction.

Catastrophic consequences for civilization.

The consequences of this rapid pruning of the evolutionary tree of life extend beyond the obvious. There could be surprising effects of biodiversity loss that scientists are unable to fully anticipate in advance. For example, prior research has shown that localized ecosystems can undergo abrupt and irreversible shifts when they reach a tipping point. According to a 2012 paper published in Nature, there are reasons for thinking that we may be approaching a tipping point of this sort in the global ecosystem, beyond which the consequences could be catastrophic for civilization.

As the authors write, a planetary-scale transition could precipitate “substantial losses of ecosystem services required to sustain the human population.” An ecosystem service is any ecological process that benefits humanity, such as food production and crop pollination. If the global ecosystem were to cross a tipping point and substantial ecosystem services were lost, the results could be “widespread social unrest, economic instability, and loss of human life.” According to Missouri Botanical Garden ecologist Adam Smith, one of the paper’s co-authors, this could occur in a matter of decades—far more quickly than most of the expected consequences of climate change, yet equally destructive.

Biodiversity loss is a “threat multiplier” that, by pushing societies to the brink of collapse, will exacerbate existing conflicts and introduce entirely new struggles between state and non-state actors. Indeed, it could even fuel the rise of terrorism. (After all, climate change has been linked to the emergence of ISIS in Syria, and multiple high-ranking US officials, such as former US Defense Secretary Chuck Hagel and CIA director John Brennan, have affirmed that climate change and terrorism are connected.)

The reality is that we are entering the sixth mass extinction in the 3.8-billion-year history of life on Earth, and the impact of this event could be felt by civilization “in as little as three human lifetimes,” as the aforementioned 2012 Nature paper notes. Furthermore, the widespread decline of biological populations could plausibly initiate a dramatic transformation of the global ecosystem on an even faster timescale: perhaps a single human lifetime.

The unavoidable conclusion is that biodiversity loss constitutes an existential threat in its own right. As such, it ought to be considered alongside climate change and nuclear weapons as one of the most significant contemporary risks to human prosperity and survival.


Overfishing has left Bluefin Tuna an endangered species.

Climate Change for the Impatient: A Nuclear Mini Ice Age

Everyone has heard about climate change caused by fossil fuels, which threatens to raise Earth’s average surface temperature by about 3-5°C by the year 2100 unless we take major steps toward mitigation. But there’s an eerie silence about the other major climate change threat, which might lower Earth’s average surface temperature by 7°C: a decade-long mini ice age caused by a U.S.-Russia nuclear war.

This is colder than the 5°C cooling we endured 20,000 years ago during the last ice age. The good news is that, according to state-of-the-art climate models by Alan Robock at Rutgers University, a nuclear mini ice age would be rather brief, with about half of the cooling gone after a decade. The bad news is that this is more than long enough for most people on Earth to starve to death if farming collapses. Robock’s all-out-war scenario shows cooling by about 20°C (36°F) in much of the core farming regions of the U.S., Europe, Russia and China (by 35°C in parts of Russia) for the first two summers — you don’t need to be a master farmer to figure out what freezing summers would do to the food supply. It’s hard to predict exactly how devastating this famine would be if thousands of Earth’s largest cities were reduced to rubble and global infrastructure collapsed, but whatever small fraction of all humans don’t succumb to starvation, hypothermia or epidemics would need to cope with roving, armed gangs desperate for food.

What a nuclear mini ice age might look like.

Average cooling (in °C) during the first two summers after a full-scale nuclear war between the US and Russia (from Robock et al 2007).

Unless we take stronger action than there’s current political will for, we’re likely to face both dramatic fossil-fuel climate change and dramatic nuclear climate change within a century, give or take. Since no politician in their right mind would launch global nuclear Armageddon on purpose, the nuclear war triggering the mini ice age will most likely start by accident or miscalculation. This has almost happened many times in the past, as this timeline shows. The annual probability of accidental nuclear war is poorly known, but it certainly isn’t zero: John F. Kennedy estimated the probability of the Cuban Missile Crisis escalating to war at between 33 percent and 50 percent. We know that near-misses keep occurring regularly, and there are probably many more close calls that haven’t been declassified. Simple math shows that even if the annual risk of global nuclear war is as low as 1 percent, we’ll probably have one within a century and almost certainly within a few hundred years. We just don’t know exactly when — it could be the day your great granddaughter gets married, or it could be next Tuesday when the Russian early-warning system suffers an unfortunate technical malfunction.
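The “simple math” here is just the complement rule for independent annual risks: the chance of at least one war in n years is 1 − (1 − p)^n, where p is the annual probability. A quick sketch (the 1 percent figure is illustrative, as in the text):

```python
def prob_within(annual_risk: float, years: int) -> float:
    """Chance of at least one event over `years` years,
    assuming an independent annual probability `annual_risk`."""
    return 1 - (1 - annual_risk) ** years

# With a 1% annual risk of accidental global nuclear war:
print(f"{prob_within(0.01, 100):.0%}")  # ~63% within a century
print(f"{prob_within(0.01, 300):.0%}")  # ~95% within three centuries
```

Even a seemingly tiny annual risk compounds relentlessly, which is why “probably within a century” follows from a mere 1 percent per year.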

The science behind nuclear climate change is rather simple. Smoke from small fires doesn’t rise as high as the highest rain clouds, so rain washes the smoke away before too long. In contrast, massive firestorms from burning nuked cities can rise into the upper stratosphere, many times higher than commercial jet planes fly. There are no clouds that high (have you ever seen a cloud above you when peering out of your plane window at cruising altitude?), and for this reason, the firestorm smoke never gets rained out. Moreover, this smoke absorbs sunlight and heats up, allowing it to get lofted to even higher altitudes where it might stay for approximately a decade, soon spreading around the globe to cover both the U.S. and Russia even if only one of the two got nuked. Since much of the solar heat absorbed by the smoke gets radiated back into space instead of warming the ground, nuclear winter ensues if there’s enough smoke.

Just as with fossil-fuel climate change, nuclear climate change involves interesting uncertainties that deserve further research. For example, how much smoke gets lofted to various altitudes in different scenarios? But whereas fossil-fuel climate research gets significant funding and press coverage, nuclear climate change gets neither. Part of the reason is probably that we can already see the effects of fossil-fuel climate change, whereas nuclear climate change arrives like ketchup out of a shaken glass bottle: nothing, nothing, nothing, and then way more than you wanted.

We should start treating both kinds of climate change with comparable respect, since there’s currently no convincing scientific case for nuclear climate change being a negligible threat compared to fossil-fuel climate change: the size of the temperature change can be comparable, the time until it gets dramatic can be comparable, and the nuclear version might wreak even greater havoc than the fossil-fuel version by being less gradual and leaving society less time to adapt.

Nuclear climate change is better than its fossil-fuel cousin if you’re impatient and like instant gratification. To end on a positive note, nuclear climate change also has the advantage of being an easier problem to solve. Whereas halving carbon emissions is quite difficult to accomplish, halving expected nuclear climate change is as simple as halving nuclear arsenals. Many military analysts agree that 300-1000 nuclear weapons suffice for extremely effective deterrence, and all but two nuclear powers have chosen to stay below that range. Yet the U.S. and Russia are currently hoarding about 7,000 each, and appear to be starting a new nuclear arms race. The U.S. is planning to spend $4 million per hour for the next 30 years making its nukes more lethal, which even former Secretary of Defense William Perry argues will make us less safe. Trimming our nuclear excess would not only free up a trillion dollars for other spending, but would also be a huge victory in our battle against climate change.

This post is part of a series produced by The Huffington Post and Future of Life Institute (FLI) on nuclear security. It was originally posted here.

Research and Communication to Help Avert Global Environmental Catastrophe

Actions may speak louder than words, but research and communication are critical to helping people understand what actions they can take to help stem some of the increasing climate risks. This week, three opportunities from three very different groups have appeared on our environmental radar, and they’re opportunities that most of our readership can and should try to take advantage of.

First is this new postdoctoral research opportunity at CSER. Dr. Seán Ó hÉigeartaigh describes it here:

“Having been pleased to see EA orgs like GWWC are continuing to do analysis of climate change and environmental risks, I thought I’d mention that we at CSER are working to set up research strands in environmental risks. We are excited to announce that CSER is hiring for a postdoc to work in the area of major ecological risks. Interested in the complex nature of ecological tipping points, and how they might result in catastrophic impacts? Extreme risks associated with escalating sea level rise? How climate change might threaten global food security? Or the risks and flow-on effects that climate change poses for (perhaps unidentified) keystone species?

We’re looking to hire a brilliant person who will bring their ideas to us; however, we have a particular interest in potentially catastrophic impacts resulting from the interplay between emerging ecological risks (and other factors e.g. sociopolitical) of different developments of concern in the environmental domain, such as the ideas above, as these reflect challenges identified by our advisers as being complex and poorly understood. There will also be a strong emphasis on translation of research into policy impacts, using the networks of CSER and its collaborators.

This first hire is likely to seed a broader programme in this space for us, in collaboration with a range of partners in Cambridge. Relevant disciplines might include: biology, ecology, conservation, mathematical modelling, planetary science, anthropology, psychology, human geography, decision and policy sciences. Please share the word as widely as possible! As Huw’s and my own networks are not primarily in environmental and climate risk, we are very grateful for the help of our colleagues and friends in reaching the right networks. For queries, please contact”

We encourage all qualified individuals who are interested in existential risk and climate change to apply.

We also recently received word of an essay competition hosted by FHI, which asks the question: “How could we feed everyone in the event that we experience a global crisis in which there is a sudden reduction in agriculture?” There are multiple categories for which one can consider this question, and each category winner will receive $500. The overall, grand-prize winner across all of the categories will receive $2000. But write fast: this competition ends on April 30.

If you prefer fiction to nonfiction, there’s still a writing opportunity for you (though with a much tighter deadline than the essay). Sapiens Plurum is hosting an Earth Day Short Fiction Contest, for which contestants will submit a short story about how we can stem climate change and improve our environmental future. For this you need to write very fast: the deadline for submissions is this Friday, April 22. This competition offers three cash prizes, with a first place prize of $1000, so rushing to write a story might pay off in the end.

Whether you’d rather focus on research, writing, or both, there are plenty of opportunities to take action. Good luck!

X-risk News of the Week: Ocean Warming and Nuclear Protests

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

Ocean warming and nuclear weapons are this week’s big x-risk news. It’s pretty clear why nuclear weapons could pose an existential risk. But ocean warming?

This is where cause and effect becomes the big issue.

Oceans are like a carbon dioxide sponge. In fact, the oceans have absorbed about 30% of all the carbon dioxide emitted since the start of the industrial revolution. That’s hundreds of billions of tons of carbon dioxide, and it’s taking a heavy toll on the health of the oceans.

As the news this week pointed out, acidification of the Great Barrier Reef (GBR) — something marine scientists have been warning the public about for decades – is occurring even faster than scientists previously thought. Thousands of species of marine life depend on the GBR and could be at risk. From a human standpoint, the GBR generates over $5.5 billion in revenue each year, employing nearly 70,000 people.

On a larger scale, more news came out this week about just how fast the ocean is changing as a result of climate change. In one article, Science News pointed out that in the last century, sea levels have risen faster than at any other time since the founding of Rome, 2800 years ago.

Meanwhile, the Guardian reported on a new study which found that even if we stem rising global temperatures, ocean levels will continue to rise rapidly, because the ice sheets will keep melting even at current temperatures. Even if we do everything perfectly and don’t let global temperatures rise any higher than two degrees, ocean levels are still expected to rise 30 feet. According to the Guardian:

“20% of the world’s population will eventually have to migrate away from coasts swamped by rising oceans. Cities including New York, London, Rio de Janeiro, Cairo, Calcutta, Jakarta and Shanghai would all be submerged.”

In the United States alone, work associated with the oceans contributes billions of dollars to the U.S. GDP, while providing employment to millions of people. Worldwide, the oceans provide employment for an estimated 10-12% of the global population.

Now, imagine the global impact of a mass migration from some of the largest cities around the world, while at the same time, huge percentages of the global population lose their means of food and income. This is a problem that could easily escalate.

As these types of issues get worse and likely increase political strain between countries, the risks associated with nuclear weapons could also increase. More and more reports have been coming out in recent months, indicating that the risk of a nuclear war could be reaching levels not seen since the Cold War.

This weekend, thousands of protesters took to the streets, marching through London in what has been called Britain’s largest anti-nuclear weapons rally in a generation. The march was organized by the Campaign for Nuclear Disarmament, and featured vocal nuclear weapons opponents like Britain’s Labour Party leader, Jeremy Corbyn.

Among the biggest risks we face with nuclear weapons is that they might get triggered accidentally or in response to a false alarm. This week we also launched a timeline of known close calls to show just how easy it would be for a nuclear war to be waged inadvertently.

Our hope is that the more well informed people are, the more likely they are to take the steps necessary to mitigate these risks. Nuclear weapons may represent an existential risk, but people taking action is among the greatest of existential hopes.

X-risk News of the Week: Nuclear Winter and a Government Risk Report


The big news this week landed squarely in the x-risk end of the spectrum.

First up was a New York Times op-ed titled “Let’s End the Peril of a Nuclear Winter,” written by climate scientists Alan Robock and Owen Brian Toon. In it, they describe the horrors of nuclear winter — the frigid temperatures, the starvation, and the mass deaths — that could terrorize the entire world if even a small nuclear war broke out in one tiny corner of the globe.

Fear of nuclear winter was one of the driving forces that finally led leaders of Russia and the US to agree to reduce their nuclear arsenals, and concerns about nuclear war subsided once the Cold War ended. However, recently, leaders of both countries have sought to strengthen their arsenals, and the threat of a nuclear winter is growing again. While much of the world struggles to combat climate change, the biggest risk could actually be that of plummeting temperatures if a nuclear war were to break out.

In an email to FLI, Robock said:

“Nuclear weapons are the greatest threat that humans pose to humanity. The current nuclear arsenal can still produce nuclear winter, with temperatures in the summer plummeting below freezing and the entire world facing famine. Even a ‘small’ nuclear war, using less than 1% of the current arsenal, can produce starvation of a billion people. We have to solve this problem so that we have the luxury of addressing global warming.”


Also this week, Director of National Intelligence James Clapper presented the Worldwide Threat Assessment of the US Intelligence Community for 2016 to the Senate Armed Services Committee. The document is 33 pages of potential problems the government is most concerned about in the coming year, a few of which fall into the category of existential risks:

  1. The Internet of Things (IoT). Though this doesn’t technically pose an existential risk, it does have the potential to impact quality of life and some of the freedoms we typically take for granted. The report states: “In the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.”
  2. Artificial Intelligence. Clapper’s concerns are broad in this field. He argues: “Implications of broader AI deployment include increased vulnerability to cyberattack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment. […] The increased reliance on AI for autonomous decision making is creating new vulnerabilities to cyberattacks and influence operations. […] AI systems are susceptible to a range of disruptive and deceptive tactics that might be difficult to anticipate or quickly understand. Efforts to mislead or compromise automated systems might create or enable further opportunities to disrupt or damage critical infrastructure or national security networks.”
  3. Nuclear. Under the category of Weapons of Mass Destruction (WMD), Clapper dedicated the most space to concerns about North Korea’s nuclear weapons. However, he also highlighted concerns about China’s work to modernize its nuclear weapons, and he argues that Russia violated the INF Treaty when it developed a ground-launch cruise missile.
  4. Genome Editing. Interestingly, gene editing was also listed in the WMD category. As Clapper explains, “Research in genome editing conducted by countries with different regulatory or ethical standards than those of Western countries probably increases the risk of the creation of potentially harmful biological agents or products.” Though he doesn’t explicitly refer to the CRISPR-Cas9 system, he does worry that the low cost and ease-of-use for new technologies will enable “deliberate or unintentional misuse” that could “lead to far reaching economic and national security implications.”

The report, though long, is an easy read, and it’s always worthwhile to understand what issues are motivating the government’s actions.


Given our new series by Matt Scherer about the legal complications of anticipated AI and autonomous weapons developments, the big news should have been this week’s flurry of headlines claiming that the federal government now considers AI drivers to be real drivers. Scherer, however, argues this is bad journalism. He provides his interpretation of the NHTSA letter in his recent blog post, “No, the NHTSA did not declare that AIs are legal drivers.”


While the headlines of the last few days may have veered toward x-risk, this week also marks the start of the 30th annual Association for the Advancement of Artificial Intelligence (AAAI) Conference. For almost a week, AI researchers will convene in Phoenix to discuss their developments and breakthroughs, and on Saturday, FLI grantees will present some of their research at the AI Ethics and Society Workshop. This is expected to be an event full of hope and excitement about the future!


The Wisdom Race Is Heating Up

There’s a race going on that will determine the fate of humanity. Just as it’s easy to miss the forest for all the trees, however, it’s easy to miss this race for all the scientific news stories about breakthroughs and concerns. What do all these headlines from 2015 have in common?

“AI masters 49 Atari games without instructions”
“Self-driving car saves life in Seattle”
“Pentagon Seeks $12Bn for AI Weapons”
“Chinese Team Reports Gene-Editing Human Embryos”
“Russia building Dr. Strangelove’s Cobalt bomb”

They are all manifestations of the aforementioned race heating up: the race between the growing power of technology and the growing wisdom with which we manage it. The power is growing because our human minds have an amazing ability to understand the world and to convert this understanding into game-changing technology. Technological progress is accelerating for the simple reason that breakthroughs enable other breakthroughs: as technology gets twice as powerful, it can often be used to design and build technology that is twice as powerful in turn, triggering repeated capability doubling in the spirit of Moore’s law.

What about the wisdom ensuring that our technology is beneficial? We have technology to thank for all the ways in which today is better than the Stone Age, but this is not only thanks to the technology itself but also thanks to the wisdom with which we use it. Our traditional strategy for developing such wisdom has been learning from mistakes: We invented fire, then realized the wisdom of having fire alarms and fire extinguishers. We invented the automobile, then realized the wisdom of having driving schools, seat belts and airbags.

In other words, it was OK for wisdom to sometimes lag behind in the race, because it would catch up when needed. With more powerful technologies such as nuclear weapons, synthetic biology and future strong artificial intelligence, however, learning from mistakes is not a desirable strategy: we want to develop our wisdom in advance so that we can get things right the first time, because that might be the only time we’ll have. In other words, we need to change our approach to tech risk from reactive to proactive. Wisdom needs to progress faster.

This year’s Edge Question “What is the most interesting recent news and what makes it important?” is cleverly ambiguous, and can be interpreted either as a call to pick a news item or as asking about the very definition of “interesting and important news.” If we define “interesting” in terms of clicks and Nielsen ratings, then top candidates must involve sudden change of some sort, whether it be a discovery or a disaster. If we instead define “interesting” in terms of importance for the future of humanity, then our top list should include even developments too slow to meet journalists’ definition of “news,” such as “Globe keeps warming.” In that case, I’ll put the fact that the wisdom race is heating up at the very top of my list. Why?

From my perspective as a cosmologist, something remarkable has just happened: after 13.8 billion years, our universe has finally awoken, with small parts of it becoming self-aware, marveling at the beauty around them, and beginning to decipher how their universe works. We, these self-aware life forms, are using our new-found knowledge to build technology and modify our universe on ever grander scales.

This is one of those stories where we get to pick our own ending, and there are two obvious ones for humanity to choose between: either win the wisdom race and enable life to flourish for billions of years, or lose the race and go extinct. To me, the most important scientific news is that after 13.8 billion years, we finally get to decide—probably within centuries or even decades.

Since the decision about whether to win the race sounds like such a no-brainer, why are we still struggling with it? Why is our wisdom for managing technology so limited that we didn’t do more about climate change earlier, and have come close to accidental nuclear war over a dozen times? As Skype-founder Jaan Tallinn likes to point out, it is because our incentives drove us to a bad Nash equilibrium. Many of humanity’s most stubborn problems, from destructive infighting to deforestation, overfishing and global warming, have this same root cause: when everybody follows the incentives they are given, it results in a worse situation than cooperation would have enabled.

Understanding this problem is the first step toward solving it. The wisdom we need to avoid lousy Nash equilibria must be developed at least in part by the social sciences, to help create a society where individual incentives are aligned with the welfare of humanity as a whole, encouraging collaboration for the greater good. Evolution endowed us with compassion and other traits to foster collaboration, and when more complex technology made these evolved traits inadequate, our forebears developed peer pressure, laws and economic systems to steer their societies toward good Nash equilibria. As technology gets ever more powerful, we need ever stronger incentives for those who develop, control and use it to make its beneficial use their top priority.

Although the social sciences can help, plenty of technical work is needed as well in order to win the race. Biologists are now studying how to best deploy (or not) tools such as CRISPR genome editing. 2015 will be remembered as the year when the beneficial AI movement went mainstream, engendering productive symposia and discussions at all the largest AI-conferences. Supported by many millions of dollars in philanthropic funding, large numbers of AI-researchers around the world have now started researching the fascinating technical challenges involved in keeping future AI-systems beneficial. In other words, the laggard in the all-important wisdom race gained significant momentum in 2015! Let’s do all we can to make future top news stories be about wisdom winning the race, because then we all win.

This article was originally posted in response to the question: “What do you consider the most interesting recent [scientific] news? What makes it important?”