Podcast: Could an Earthquake Destroy Humanity?

Earthquakes as Existential Risks

Earthquakes are not typically considered existential or even global catastrophic risks, and for good reason: they’re localized events. While they may be devastating to the local community, rarely do they impact the whole world. But is there some way an earthquake could become an existential or catastrophic risk? Could a single earthquake put all of humanity at risk? In our increasingly connected world, could an earthquake sufficiently exacerbate a biotech, nuclear or economic hazard, triggering a cascading set of circumstances that could lead to the downfall of modern society?

Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of FLI consider extreme earthquake scenarios to figure out if there’s any way such a risk is remotely plausible. This podcast was produced in a similar vein to MythBusters and xkcd’s What If series.

We only consider a few scenarios in this podcast, but we’d love to hear from other people. Do you have ideas for an extreme situation that could transform a locally devastating earthquake into a global calamity?

This episode features insight from seismologist Martin Chapman of Virginia Tech.

Op-ed: Climate Change Is the Most Urgent Existential Risk

Climate change and biodiversity loss may pose the most immediate and important threat to human survival given their indirect effects on other risk scenarios.

Humanity faces a number of formidable challenges this century. Threats to our collective survival stem from asteroids and comets, supervolcanoes, global pandemics, climate change, biodiversity loss, nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial superintelligence.

With such threats in mind, an informal survey conducted by the Future of Humanity Institute placed the probability of human extinction this century at 19%. To put this in perspective, it means that the average American is more than a thousand times more likely to die in a human extinction event than a plane crash.*

So, given limited resources, which risks should we prioritize? Many intellectual leaders, including Elon Musk, Stephen Hawking, and Bill Gates, have suggested that artificial superintelligence constitutes one of the most significant risks to humanity. And this may be correct in the long term. But I would argue that two other risks, namely climate change and biodiversity loss, should take priority right now over every other known threat.

Why? Because these ongoing slow-motion catastrophes will frame our existential predicament on Earth not just for the rest of this century, but for literally thousands of years to come. As such, they have the capacity to raise or lower the probability of other risk scenarios unfolding.

Multiplying Threats

Ask yourself the following: are wars more or less likely in a world marked by extreme weather events, megadroughts, food supply disruptions, and sea-level rise? Are terrorist attacks more or less likely in a world beset by the collapse of global ecosystems, agricultural failures, economic uncertainty, and political instability?

Both government officials and scientists agree that the answer is “more likely.” For example, the current Director of the CIA, John Brennan, recently identified “the impact of climate change” as one of the “deeper causes of this rising instability” in countries like Syria, Iraq, Yemen, Libya, and Ukraine. Similarly, the former Secretary of Defense, Chuck Hagel, has described climate change as a “threat multiplier” with “the potential to exacerbate many of the challenges we are dealing with today — from infectious disease to terrorism.”

The Department of Defense has also affirmed a connection. In a 2015 report, it states, “Global climate change will aggravate problems such as poverty, social tensions, environmental degradation, ineffectual leadership and weak political institutions that threaten stability in a number of countries.”

Scientific studies have further shown a connection between the environmental crisis and violent conflicts. For example, a 2015 paper in the Proceedings of the National Academy of Sciences argues that climate change was a causal factor behind the record-breaking 2007-2010 drought in Syria. This drought led to a mass migration of farmers into urban centers, which fueled the 2011 Syrian civil war. Some observers, including myself, have suggested that this struggle could be the beginning of World War III, given the complex tangle of international involvement and overlapping interests.

The study’s conclusion is also significant because the Syrian civil war was the Petri dish in which the Islamic State consolidated its forces, later emerging as the largest and most powerful terrorist organization in human history.

A Perfect Storm

The point is that climate change and biodiversity loss could very easily push societies to the brink of collapse. This will exacerbate existing geopolitical tensions and introduce entirely new power struggles between state and nonstate actors. At the same time, advanced technologies will very likely become increasingly powerful and accessible. As I’ve written elsewhere, the malicious agents of the future will have bulldozers rather than shovels to dig mass graves for their enemies.

The result is a perfect storm of more conflicts in the world along with unprecedentedly dangerous weapons.

If the conversation were to end here, we’d have ample reason for placing climate change and biodiversity loss at the top of our priority lists. But there are other reasons they ought to be considered urgent threats. I would argue that they could make humanity more vulnerable to a catastrophe involving superintelligence and even asteroids.

The basic reasoning is the same for both cases. Consider superintelligence first. Programming a superintelligence whose values align with ours is a formidable task even in stable circumstances. As Nick Bostrom argues in his 2014 book Superintelligence, we should recognize the “default outcome” of superintelligence to be “doom.”

Now imagine trying to solve these problems amidst a rising tide of interstate wars, civil unrest, terrorist attacks, and other tragedies. The societal stress caused by climate change and biodiversity loss will almost certainly compromise important conditions for creating friendly AI, such as sufficient funding, academic programs to train new scientists, conferences on AI, peer-reviewed journal publications, and communication and collaboration between experts in different fields, such as computer science and ethics.

It could even make an “AI arms race” more likely, thereby raising the probability of a malevolent superintelligence being created either on purpose or by mistake.

Similarly, imagine that astronomers discover a behemoth asteroid barreling toward Earth. Will designing, building, and launching a spacecraft to divert the assassin past our planet be easier or more difficult in a world preoccupied with other survival issues?

In a relatively peaceful world, one could imagine an asteroid actually bringing humanity together by directing our attention toward a common threat. But if the “conflict multipliers” of climate change and biodiversity loss have already catapulted civilization into chaos and turmoil, I strongly suspect that humanity will become more, rather than less, susceptible to dangers of this sort.

Context Risks

We can describe the dual threats of climate change and biodiversity loss as “context risks.” Neither is likely to directly cause the extinction of our species. But both will define the context in which civilization confronts all the other threats before us. In this way, they could indirectly contribute to the overall danger of annihilation — and this worrisome effect could be significant.

For example, according to the Intergovernmental Panel on Climate Change, the effects of climate change will be “severe,” “pervasive,” and “irreversible.” Or, as a 2016 study published in Nature and authored by over twenty scientists puts it, the consequences of climate change “will extend longer than the entire history of human civilization thus far.”

Furthermore, a recent article in Science Advances confirms that humanity has already escorted the biosphere into the sixth mass extinction event in life’s 3.8 billion year history on Earth. Yet another study suggests that we could be approaching a sudden, irreversible, catastrophic collapse of the global ecosystem. If this were to occur, it could result in “widespread social unrest, economic instability and loss of human life.”

Given the potential for environmental degradation to elevate the likelihood of nuclear wars, nuclear terrorism, engineered pandemics, a superintelligence takeover, and perhaps even an impact winter, it ought to take precedence over all other risk concerns — at least in the near-term. Let’s make sure we get our priorities straight.

* How did I calculate this? First, the average American’s lifetime chance of dying in an “Air and space transport accident” was 1 in 9737 as of 2013, according to the Insurance Information Institute. The US life expectancy is currently 78.74 years, which we can round up to 80 years for simplicity. Second, the informal Future of Humanity Institute (FHI) survey puts the probability of human extinction this century at 19%. Assuming the risk is spread independently over time, it follows that the probability of human extinction in an 80-year period (the US life expectancy) is roughly 15.5%. Finally, the last step is to figure out the ratio between the 15.5% figure and the 1 in 9737 statistic. To do this, divide 0.155 by 1/9737, which gives roughly 1,509. And from here we can conclude that, if the FHI survey is accurate, “the average American is more than a thousand times more likely to die in a human extinction event than a plane crash.”
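For readers who want to check the arithmetic, below is a minimal Python sketch of the same back-of-the-envelope calculation. The 19% survey figure and the 1-in-9737 lifetime odds come from the sources cited above; the exact decimals depend on how the per-century risk is scaled to an 80-year lifespan (the scaling shown lands near 15.3%, close to the 15.5% used here), but the ratio to the plane-crash odds stays comfortably above 1,000 either way.

    # Back-of-the-envelope check of the footnote's comparison (illustrative
    # helper script; the two input figures are taken from the essay's sources).
    P_EXTINCTION_PER_CENTURY = 0.19       # informal FHI survey estimate
    PLANE_CRASH_LIFETIME_ODDS = 1 / 9737  # Insurance Information Institute, 2013
    LIFESPAN_YEARS = 80                   # US life expectancy, rounded up

    # One natural way to scale a per-century risk to an 80-year window,
    # assuming the risk is spread independently over time.
    p_extinction_lifetime = 1 - (1 - P_EXTINCTION_PER_CENTURY) ** (LIFESPAN_YEARS / 100)

    ratio = p_extinction_lifetime / PLANE_CRASH_LIFETIME_ODDS

    print(f"P(extinction over {LIFESPAN_YEARS} years) ~ {p_extinction_lifetime:.3f}")  # ~0.153
    print(f"Ratio to plane-crash odds ~ {ratio:.0f}")                                  # ~1491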

Op-Ed: If AI Systems Can Be “Persons,” What Rights Should They Have?

The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so.  There has, of course, been much push-back on that front.  Many people both inside and outside of the legal world ask if we have given corporations too many rights and treat them a little too much like people.  So what rights and responsibilities should we grant to AI systems if we decide to treat them as legal “persons” in some sense?

Uniquely in this series, this post will provide more questions than answers.  This is in part because the concept of “corporate personhood” has proven to be so malleable over the years.  Even though corporations are the oldest example of artificial “persons” in the legal world, we still have not decided with any firmness what rights and responsibilities a corporation should have.  Really, I can think of only one ground rule for legal “personhood”: “personhood” in a legal sense requires, at a minimum, the right to sue and the ability to be sued.  Beyond that, the meaning of “personhood” has proven to be pretty flexible.  That means that for the most part, we should be able to decide the rights and responsibilities included within the concept of AI personhood on a right-by-right and responsibility-by-responsibility basis.


Congress Subpoenas Climate Scientists in Effort to Hamper ExxonMobil Fraud Investigation

ExxonMobil executives may have intentionally misled the public about climate change – for decades. And the House Science Committee just hampered legal efforts to learn more about ExxonMobil’s actions by subpoenaing the nonprofit scientists who sought to find out what the fossil fuel giant knew and when.

For 40 years, tobacco companies intentionally misled consumers to believe that smoking wasn’t harmful. Now it appears that many in the fossil fuel industry may have applied similarly deceptive tactics – and for just as long – to confuse the public about the dangers of climate change.

Investigative research by nonprofit groups like InsideClimate News and the Union of Concerned Scientists (UCS) has turned up evidence that ExxonMobil may have known about the hazards of fossil-fuel-driven climate change as far back as the 1970s. However, documents indicate that, rather than informing the public or taking steps to reduce such risks, ExxonMobil leadership chose to cover up the findings and instead convince the public that climate science couldn’t be trusted.

As a result of these findings, the Attorneys General (AGs) of New York and Massachusetts launched a legal investigation to determine whether ExxonMobil committed fraud, including subpoenaing the company for more information. That’s when House Science, Space and Technology Committee Chairman Lamar Smith stepped in.

Chairman Smith, under powerful new House rules, unilaterally subpoenaed not just the AGs, but also many of the nonprofits involved in the ExxonMobil investigation, including groups like the UCS. Smith and other House representatives argue that they’re merely supporting ExxonMobil’s rights to free speech and to form opinions based on scientific research.

However, no one is targeting ExxonMobil for expressing an opinion. The Attorneys General and the nonprofits are investigating what may have been intentional fraud.

In a public statement, Ken Kimmell, president of the Union of Concerned Scientists, said:

“We do not accept Chairman Smith’s premise that fraud, if committed by ExxonMobil, is protected by the First Amendment. It’s beyond ironic for Chairman Smith to violate our actual free speech rights in the name of protecting ExxonMobil’s supposed right to misrepresent the work of its own scientists and deceive shareholders and the public. […]

“Smith is misusing the House Science Committee’s subpoena power in a way that should concern everyone across the political spectrum. Today, the target is UCS and others concerned about climate change. But if these kinds of subpoenas are allowed, who will be next and on what basis?”

In fact, Chairman Smith also subpoenaed climate scientists at the National Oceanic and Atmospheric Administration (NOAA) in the fall of 2015 and again earlier this year. UCS representatives are referring to this as a blatant “abuse of power” on the part of the government and ExxonMobil.

Gretchen Goldman, a lead analyst for UCS, wrote: “Abuse of power is when a company exploits its vast political network to squash policies that would address climate change.”

The complete list of nonprofits subpoenaed by Chairman Smith includes: 350.org, the Climate Accountability Institute, the Climate Reality Project, Greenpeace, Pawa Law Group PC, the Rockefeller Brothers Fund, the Rockefeller Family Fund, and the Union of Concerned Scientists.

Editorial note:

At FLI, we strive to remain nonpartisan and apolitical. Our goal — to ensure a bright future for humanity — clearly spans the political spectrum. However, we cannot, in good conscience, stand back and simply witness this political attack on science in silence. To understand and mitigate climate change, we need scientific research. We need political leaders to let scientists do their jobs without intimidation.

Op-ed: On Robot-delivered Bombs

“In An Apparent First, Police Used A Robot To Kill.”  So proclaimed a headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson.  Johnson, an army veteran, shot 12 police officers Thursday night, killing five of them.  After his attack, he holed himself up in a garage and told police negotiators that he would kill more officers in the final standoff.  As Dallas Police Chief David Brown said at a news conference on Friday morning, “[w]e saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was.  Other options would have exposed our officers to grave danger.”

The media’s coverage of this incident generally has glossed over the nature of the “robot” that delivered the lethal bomb.  The robot was not an autonomous weapon system that operated free of human control. Rather, it was a remote-controlled bomb disposal robot–one that was sent, ironically, to deliver and detonate a bomb rather than to remove or defuse one.  Such a robot can be analogized to the unmanned aerial vehicles or “drones” that have seen increasing military and civilian use in recent years–there is a human somewhere who is controlling every significant aspect of the robot’s movements.

Legally, I don’t think the use of such a remote-controlled device to deliver lethal force presents any special challenges.  Because a human is always in control of the robot, the lines of legal liability are no different than if the robot’s human operator had walked over and placed the bomb himself.  I don’t think that entering the command that led to the detonation of the bomb was any different from a legal standpoint than a sniper pulling the trigger on a rifle.  The accountability problems that arise with autonomous weapons simply are not present when lethal force is delivered by a remote-controlled device.

But that is not to say that there are no ethical challenges with police delivering lethal force remotely.  As with aerial drones, a bomb disposal robot can deliver lethal force without placing the humans making the decision to kill in danger.  The absence of risk creates a danger that the technology will be overused.

That issue has already been widely discussed in the context of military drones.  Military commanders think carefully before ordering pilots to fly into combat zones to conduct air strikes, because they know it will place those pilots at risk.  They presumably have less hesitation about ordering air strikes using drones, which would not place any of the men and women under their command in harm’s way.  That absence of physical risk may make the decision to use lethal force easier, as explained in a 2014 report on US drone policy produced by the Stimson Center:

The increasing use of lethal UAVs may create a slippery slope leading to continual or wider wars. The seemingly low-risk and low-cost missions enabled by UAV technologies may encourage the United States to fly such missions more often, pursuing targets with UAVs that would be deemed not worth pursuing if manned aircraft or special operation forces had to be put at risk. For similar reasons, however, adversarial states may be quicker to use force against American UAVs than against US manned aircraft or military personnel. UAVs also create an escalation risk insofar as they may lower the bar to enter a conflict, without increasing the likelihood of a satisfactory outcome.

The same concerns apply to the use of robo-bombs by police in civilian settings.  The exceptional danger that police faced in the Dallas standoff makes the use of robot-delivered force in that situation fairly reasonable.  But the concern is that police will be increasingly tempted to use the technology in less exceptional situations. As Ryan Calo said in the NPR story, “the time to get nervous about police use of robots isn’t in extreme, anomalous situations with few good options like Dallas, but if their use should become routine.”  The danger is that the low-risk nature of robot-delivered weapons makes it more likely that their use will become routine.

Of course, there is another side of that coin.  Human police officers facing physical danger, or even believing that they are facing such danger, can panic or overreact.  That may lead them, out of a sense of self-preservation, to use lethal force in situations where it is not warranted.  That may well have been what happened in the shooting of Philando Castile, whose tragic and unnecessary death at the hands of police apparently helped drive Micah Xavier Johnson to open fire on Dallas police officers.  A police officer controlling a drone or similar device from the safety of a control room will feel no similar compulsion to use lethal force for reasons of self-preservation.

Legally, I think that the bottom line should be this: police departments’ policies on the use of lethal force should be the same regardless of whether that force is delivered personally or remotely.  Many departments’ policies and standards have been under increased scrutiny due to the high-profile police shootings of the past few years, but the gist of those policies is still almost always some variation of: “police officers are not allowed to use lethal force unless they reasonably believe that the use of such force is necessary to prevent death or serious injury to the officer or a member of the public.”

I think that standard was met in Dallas.  And who knows?  Since the decision to use a robot-delivered bomb came about only because of the unique nature of the Dallas standoff, it’s possible that we won’t see another similar use of robots by police for years to come.  But if such an incident does happen again, we may look back on the grisly and dramatic end to the Dallas standoff as a turning point.

Op-ed: When NATO Countries Were U.S. Nuclear Targets

Sixty years ago, the U.S. had over 60 nuclear weapons aimed at Poland, ready to launch. At least one of those targeted Warsaw, where, on July 8-9, allied leaders will meet for the biennial NATO summit meeting.

In fact, recently declassified documents reveal that the U.S. once had its nuclear sights set on over 270 targets scattered across various NATO countries. Most people assume that the U.S. no longer poses a nuclear threat to its own NATO allies, but that assumption may be wrong.

In 2012, Alex Wellerstein created an interactive program called NukeMap to help people visualize how deadly a nuclear weapon would be if detonated in any country of the world. He recently went a step further and ran models to see how far nuclear fallout might drift from its original target.

It turns out, if the U.S. – either unilaterally or with NATO – were to launch a nuclear attack against Russia, countries such as Finland, Estonia, Latvia, Belarus, Ukraine, and even Poland would be at severe risk of nuclear fallout. Similarly, attacks against China or North Korea would harm people in South Korea, Myanmar and Thailand.

Even a single nuclear weapon, detonated too close to a border, on a day that the wind is blowing in the wrong direction, would be devastating for innocent people living in nearby allied countries.

While older targeting data has been declassified, today’s nuclear targets have shifted. And the public is kept in the dark about how many countries may be at risk of becoming collateral damage in the event of a nuclear attack anywhere in their region of the globe.

Most people believe that no leader would intentionally fire a nuke at another country. And perhaps no sane leader would intentionally do so – although that’s not something to count on as political tensions increase. But there’s a very good chance that one of the nuclear powers will accidentally launch a nuke in response to inaccurate data.

The accidental launch of a nuclear weapon is something that has almost happened many times in the past, and it only takes one nuclear weapon to kill hundreds of thousands of people. Yet a quarter century after the Cold War ended, roughly 15,000 nuclear weapons remain, with more than 90% of them split between the U.S. and Russia.

Meanwhile, relations are deteriorating between Russia, China, and the US/NATO. This doesn’t just increase the risk of intentional nuclear war; it increases the likelihood that a country will misinterpret bad satellite or radar data and launch a retaliatory strike in response to a false alarm.

Many nuclear and military experts, including former Secretary of Defense William Perry, warn that the threat of a nuclear attack is greater now than it was during the Cold War.

Major international developments have occurred in the two years since the last NATO summit. In a recent op-ed in Newsweek, NATO Secretary General Jens Stoltenberg outlined many of the problems that the alliance must address:

“There is no denying that the world has become more dangerous in recent years. Moscow’s actions in Ukraine have shaken the European security order. Turmoil in the Middle East and North Africa has unleashed a host of challenges, not least the largest refugee and migrant crisis since the Second World War. We face security challenges of a magnitude and complexity much greater than only a few years ago. Add to that the uncertainty surrounding “Brexit”—the consequences of which are unclear—and it is easy to be concerned about the future.”

These are serious problems indeed, but 15,000 nuclear weapons in the hands of just a couple of leaders only add to global instability. If NATO is serious about increasing security, then we must significantly decrease the number of nuclear weapons – and the number of nuclear targets – around the world.

Deterrence is an important defensive posture, and this is not a call for NATO to encourage countries to eliminate all nuclear weapons. Instead, it is a reminder that we must learn from the past. Those who are enemies today could be friends in a safer, more stable future, but that hope is lost if a nuclear war ever occurs.

The Evolution of AI: Can Morality be Programmed?

The following article was originally posted on Futurism.com.

Recent advances in artificial intelligence have made it clear that our computers need to have a moral code. Disagree? Consider this: A car is driving down the road when a child on a bicycle suddenly swerves in front of it. Does the car swerve into an oncoming lane, hitting another car that is already there? Does the car swerve off the road and hit a tree? Does it continue forward and hit the child?

Each solution comes with a problem: It could result in death.

It’s an unfortunate scenario, but humans face such scenarios every day, and if an autonomous car is the one in control, it needs to be able to make this choice. And that means that we need to figure out how to program morality into our computers.

Vincent Conitzer, a Professor of Computer Science at Duke University, recently received a grant from the Future of Life Institute in order to try and figure out just how we can make an advanced AI that is able to make moral judgments…and act on them.

MAKING MORALITY

At first glance, the goal seems simple enough—make an AI that behaves in a way that is ethically responsible; however, it’s far more complicated than it initially seems, as an enormous number of factors come into play. As Conitzer’s project outlines, “moral judgments are affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems.”

That’s what we’re trying to do now.

In a recent interview with Futurism, Conitzer clarified that, while the public may be concerned about ensuring that rogue AI don’t decide to wipe out humanity, such a thing really isn’t a viable threat at the present time (and it won’t be for a long, long time). As a result, his team isn’t concerned with preventing a global robotic apocalypse by making selfless AI that adore humanity. Rather, on a much more basic level, they are focused on ensuring that our artificial intelligence systems are able to make the hard, moral choices that humans make on a daily basis.

So, how do you make an AI that is able to make a difficult moral decision?

Conitzer explains that, to reach their goal, the team is following a two-part process: having people make ethical choices in order to find patterns, and then figuring out how those patterns can be translated into an artificial intelligence. He clarifies, “what we’re working on right now is actually having people make ethical decisions, or state what decision they would make in a given situation, and then we use machine learning to try to identify what the general pattern is and determine the extent that we could reproduce those kind of decisions.”

In short, the team is trying to find the patterns in our moral choices and translate those patterns into AI systems. Conitzer notes that, on a basic level, it’s all about making predictions regarding what a human would do in a given situation: “if we can become very good at predicting what kind of decisions people make in these kind of ethical circumstances, well then, we could make those decisions ourselves in the form of the computer program.”
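To make that idea concrete, here is a deliberately toy sketch of what “use machine learning to identify the general pattern” could look like in practice. The features, data, and model below are invented for illustration and are not drawn from Conitzer’s actual project; the point is simply that scenarios can be encoded by morally relevant features, and a model can then be trained to predict the judgments people tend to make.

    # Hypothetical sketch: learn human moral judgments from hand-labeled scenarios.
    # All features and data here are invented for illustration.
    from sklearn.linear_model import LogisticRegression

    # Each scenario is described by morally relevant features, e.g.:
    # [promise_broken, privacy_violated, harm_avoided, actor_is_caregiver]
    scenarios = [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 0, 1, 1],
        [1, 1, 0, 0],
        [0, 0, 0, 1],
    ]
    # Judgments collected from people: 1 = "permissible", 0 = "not permissible".
    human_judgments = [0, 0, 1, 1, 0, 1]

    model = LogisticRegression()
    model.fit(scenarios, human_judgments)

    # Predict how people would likely judge an unseen scenario: a broken promise
    # that nevertheless avoids harm, made by a caregiver.
    new_scenario = [[1, 0, 1, 1]]
    print(model.predict(new_scenario))        # predicted judgment
    print(model.predict_proba(new_scenario))  # model's confidence in each label

A real system would of course need far richer features and far more data, and it would inherit whatever biases sit in the judgments it is trained on, which is exactly the worry Conitzer raises next.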

However, one major problem with this is, of course, that morality is not objective — it’s neither timeless nor universal.

Conitzer articulates the problem by looking to previous decades, “if we did the same ethical tests a hundred years ago, the decisions that we would get from people would be much more racist, sexist, and all kinds of other things that we wouldn’t see as ‘good’ now. Similarly, right now, maybe our moral development hasn’t come to its apex, and a hundred years from now people might feel that some of the things we do right now, like how we treat animals, is completely immoral. So there’s kind of a risk of bias and with getting stuck at whatever our current level of moral development is.”

And of course, there is the aforementioned problem regarding how complex morality is. “Pure altruism, that’s very easy to address in game theory, but maybe you feel like you owe me something based on previous actions. That’s missing from the game theory literature, and so that’s something that we’re also thinking about a lot—how can you make this, what game theory calls ‘Solutions Concept’—sensible? How can you compute these things?”

To solve these problems, and to help figure out exactly how morality functions and can (hopefully) be programmed into an AI, the team is combining methods from computer science, philosophy, and psychology. “That’s, in a nutshell, what our project is about,” Conitzer asserts.

But what about those sentient AI? When will we need to start worrying about them and discussing how they should be regulated?

THE HUMAN-LIKE AI

According to Conitzer, human-like artificial intelligence won’t be around for some time yet (so yay! No Terminator-styled apocalypse…at least for the next few years).

“Recently, there have been a number of steps towards such a system, and I think there have been a lot of surprising advances….but I think having something like a ‘true AI,’ one that’s really as flexible, able to abstract, and do all these things that humans do so easily, I think we’re still quite far away from that,” Conitzer asserts.

True, we can program systems to do a lot of things that humans do well, but there are some things that are exceedingly complex and hard to translate into a pattern that computers can recognize and learn from (which is ultimately the basis of all AI).

“What came out of early AI research, the first couple decades of AI research, was the fact that certain things that we had thought of as being real benchmarks for intelligence, like being able to play chess well, were actually quite accessible to computers. It was not easy to write and create a chess-playing program, but it was doable.”

Indeed, today we have computers that are able to beat the best players in the world in a host of games—chess and Go, for example.

But Conitzer clarifies that, as it turns out, playing games isn’t exactly a good measure of human-like intelligence. Or at least, there is a lot more to the human mind. “Meanwhile, we learned that other problems that were very simple for people were actually quite hard for computers, or to program computers to do. For example, recognizing your grandmother in a crowd. You could do that quite easily, but it’s actually very difficult to program a computer to recognize things that well.”

Since the early days of AI research, we have made computers that are able to recognize and identify specific images. However, to sum the main point, it is remarkably difficult to program a system that is able to do all of the things that humans can do, which is why it will be some time before we have a ‘true AI.’

Yet Conitzer asserts that now is the time to start considering the rules we will use to govern such intelligences. “It may be quite a bit further out, but to computer scientists, that means maybe just on the order of decades, and it definitely makes sense to try to think about these things a little bit ahead.” And he notes that, even though we don’t have any human-like robots just yet, our intelligence systems are already making moral choices and could, potentially, save or end lives.

“Very often, many of these decisions that they make do impact people and we may need to make decisions that we will typically be considered to be a morally loaded decision. And a standard example is a self-driving car that has to decide to either go straight and crash into the car ahead of it or veer off and maybe hurt some pedestrian. How do you make those trade-offs? And that I think is something we can really make some progress on. This doesn’t require superintelligent AI, this can just be programs that make these kind of trade-offs in various ways.”

But of course, knowing what decision to make will first require knowing exactly how our morality operates (or at least having a fairly good idea). From there, we can begin to program it, and that’s what Conitzer and his team are hoping to do.

So welcome to the dawn of moral robots.

This interview has been edited for brevity and clarity. 

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

MIRI July 2016 Newsletter

Research updates

General updates

News and links

  • The White House is requesting information on “safety and control issues for AI,” among other questions. Public submissions will be accepted through July 22.
  • “Concrete Problems in AI Safety”: Researchers from Google Brain, OpenAI, and academia propose a very promising new AI safety research agenda. The proposal is showcased on the Google Research Blog and the OpenAI Blog, as well as the Open Philanthropy Blog, and has received press coverage from Bloomberg, The Verge, and MIT Technology Review.
  • After criticizing the thinking behind OpenAI earlier in the month, Alphabet executive chairman Eric Schmidt comes out in favor of AI safety research:

    Do we worry about the doomsday scenarios? We believe it’s worth thoughtful consideration. Today’s AI only thrives in narrow, repetitive tasks where it is trained on many examples. But no researchers or technologists want to be part of some Hollywood science-fiction dystopia. The right course is not to panic—it’s to get to work. Google, alongside many other companies, is doing rigorous research on AI safety, such as how to ensure people can interrupt an AI system whenever needed, and how to make such systems robust to cyberattacks.

The Problem with Brexit: 21st Century Challenges Require International Cooperation

Retreating from international institutions and cooperation will handicap humanity as we tackle our greatest problems.

The UK’s referendum in favor of leaving the EU and the rise of nationalist ideologies in the US and Europe are worrying on multiple fronts. Nationalism espoused by the likes of Donald Trump (U.S.), Nigel Farage (U.K.), Marine Le Pen (France), and Heinz-Christian Strache (Austria) may lead to a resurgence of some of the worst problems of the first half of the 20th century. These leaders are calling for policies that would constrain trade and growth, encourage domestic xenophobia, and increase rivalries and suspicion between countries.

Even more worrying, however, is the bigger picture. In the 21st century, our greatest challenges will require global solutions. Retreating from international institutions and cooperation will handicap humanity’s ability to address our most pressing upcoming challenges.

The Nuclear Age

Many of the challenges of the 20th century – issues of public health, urbanization, and economic and educational opportunity – were national problems that could be dealt with at the national level. July 16th, 1945 marked a significant turning point. On that day, American scientists tested the first nuclear weapon in the New Mexican desert. For the first time in history, individual human beings had within their power a technology capable of destroying all of humanity.

Thus, nuclear weapons became the first truly global problem. Weapons with such a destructive force were of interest to every nation and person on the planet. Only international cooperation could produce a solution.

Despite a dangerous arms race between the US and the Soviet Union — including a history of close calls — humanity survived 70 years without a catastrophic global nuclear war. This was in large part due to international institutions and agreements that discouraged wars and further proliferation.

But what if we replayed the Cold War without the U.N. mediating disputes between nuclear adversaries? And without the bitter taste of the Second World War fresh in the minds of all who participated? Would we still have the same benign outcome?

We cannot say what such a revisionist history would look like, but the chances of a catastrophic outcome would surely be higher.

21st Century Challenges

The 21st century will only bring more challenges that are global in scope, requiring more international solutions. Climate change by definition requires a global solution since carbon emissions will lead to global warming regardless of which countries emit them.

In addition, continued development of powerful new technologies — such as artificial intelligence, biotechnology, and nanotechnology — will put increasing power in the hands of the people who develop and control them. These technologies have the potential to improve the human condition and solve some of our biggest problems. Yet they also have the potential to cause tremendous damage if misused.

Whether through accident, miscalculation, or madness, misuse of these powerful technologies could pose a catastrophic or even existential risk. If a Cold-War-style arms race for new technologies occurs, it is only a matter of time before a close call becomes a direct hit.

Working Together

As President Obama said in his speech at Hiroshima, “Technological progress without an equivalent progress in human institutions can doom us.”

Over the next century, technological progress can greatly improve the human experience. To ensure a positive future, humanity must find the wisdom to handle the increasingly powerful technologies that it is likely to produce and to address the global challenges that are likely to arise.

Experts have blamed the resurgence of nationalism on anxieties over globalization, multiculturalism, and terrorism. Whatever anxieties there may be, we live in a global world where our greatest challenges are increasingly global, and we need global solutions. If we resist international cooperation, we will battle these challenges with one arm, perhaps both, tied behind our back.

Humanity must learn to work together to tackle the global challenges we face. Now is the time to strengthen international institutions, not retreat from them.