U.S. Conference of Mayors Supports Cambridge Nuclear Divestment

The U.S. Conference of Mayors (USCM) unanimously adopted a resolution at their annual meeting this week in support of nuclear reduction. The resolution called for the next U.S. President to:

  • “pursue diplomacy with other nuclear-armed states,”
  • “participate in negotiations for the elimination of nuclear weapons,” and
  • “cut nuclear weapons spending and redirect funds to meet the needs of cities.”

The USCM resolution also praised Cambridge Mayor Denise Simmons and the city council members for their actions to divest from nuclear weapons:

“The USCM commends Mayor Denise Simmons and the Cambridge City Council for demonstrating bold leadership at the municipal level by unanimously deciding on April 2, 2016, to divest their one-billion-dollar city pension fund from all companies involved in production of nuclear weapons systems and in entities investing in such companies.”

In an email to FLI, Mayor Simmons expressed her gratitude to the USCM, saying:

“I am honored to receive such commendation from the USCM, and I hope this is a sign that nuclear divestment is just getting started in the United States. Divestment is an important tool that politicians and citizens alike can use to send a powerful message that we want a world safe from nuclear weapons.”

The resolution warns that relations between the U.S. and other nuclear-armed countries are increasingly tenuous. It states, “the nuclear-armed countries are edging ever closer to direct military confrontation in conflict zones around the world.”

Moreover, while the Obama administration may have overseen a significant reduction of the nuclear stockpile, nuclear countries still hold over 15,000 nuclear weapons, with the U.S. possessing nearly half. Furthermore, the President’s budget plans call for $1 trillion to be spent on new nuclear weapons over the next three decades.

These new weapons will include the B61-12, which has increased accuracy and a range of selectable yields. The smallest yield the B61-12 can carry is roughly one-fiftieth that of the bomb dropped on Hiroshima. With smaller explosions and greater accuracy, many experts worry that these new weapons would be more likely to be used.

The USCM would rather see the U.S. government invest more of that $1 trillion back into its cities and communities.

 

What is the USCM?

The USCM represents cities with populations greater than 30,000, for a total of over 1400 cities. Resolutions that they adopt at their annual meeting become official policy for the whole group.

Only 313 American cities are members of the international group Mayors for Peace, but for 11 years now the USCM has adopted nuclear resolutions in support of the organization.

Mayors for Peace was established by Hiroshima Mayor Takeshi Araki in 1982 to decrease the risks of nuclear weapons. To sign on, a mayor must support the elimination of nuclear weapons. In 2013, Mayors for Peace established their 2020 Vision Campaign, which seeks to eliminate nuclear weapons by 2020. And as of June 1, 2016, the group counted over 7,000 member cities from over 160 countries. They hope to have 10,000 member cities by 2020.

The USCM’s official press release about this nuclear resolution also added:

“This year, for the first time, New York City Mayor Bill de Blasio and Washington, DC Mayor Muriel Bowser added their names as co-sponsors of the Mayors for Peace resolution.”

Read the official resolution here, along with a complete list of the 23 mayors who sponsored it.

 

Watch as Mayor Simmons announces the Cambridge decision to divest from nuclear weapons at the MIT nuclear conference:

 

Existential Risks Are More Likely to Kill You Than Terrorism

People tend to worry about the wrong things.

According to a 2015 Gallup Poll, 51% of Americans are “very worried” or “somewhat worried” that a family member will be killed by terrorists. Another Gallup Poll found that 11% of Americans are afraid of “thunder and lightning.” Yet the average person is at least four times more likely to die from a lightning bolt than a terrorist attack.

Similarly, statistics show that people are more likely to be killed by a meteorite than a lightning strike (here’s how). Yet I suspect that most people are less afraid of meteorites than lightning. In these examples and so many others, we tend to fear improbable events while often dismissing more significant threats.

One finds a similar reversal of priorities when it comes to the worst-case scenarios for our species: existential risks. These are catastrophes that would either annihilate humanity or permanently compromise our quality of life. While risks of this sort are often described as “high-consequence, improbable events,” a careful look at the numbers by leading experts in the field reveals that they are far more likely than most of the risks people worry about on a daily basis.

Let’s use the probability of dying in a car accident as a point of reference. Dying in a car accident is more probable than any of the risks mentioned above. According to the 2016 Global Challenges Foundation report, “The annual chance of dying in a car accident in the United States is 1 in 9,395.” This means that if the average person lives 80 years, the odds of dying in a car crash are about 1 in 120. (In percentages, that’s 0.01% per year, or 0.8% over a lifetime.)

Compare this to the probability of human extinction stipulated by the influential “Stern Review on the Economics of Climate Change,” namely 0.1% per year.* A human extinction event could be caused by an asteroid impact, supervolcanic eruption, nuclear war, a global pandemic, or a superintelligence takeover. Although this figure appears small, over time it can grow quite significant. For example, it means that the likelihood of human extinction over the course of a century is 9.5%. It follows that your chances of dying in a human extinction event are nearly 10 times higher than your chances of dying in a car accident.
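These compounding figures are easy to check. Here is a minimal sketch (only the 1-in-9,395 annual car-accident figure and the Stern Review’s assumed 0.1% annual extinction probability come from the sources above; the rest is arithmetic):

```python
# A quick check of the compounding arithmetic above (illustrative sketch only).

annual_car_crash = 1 / 9395    # annual US chance of dying in a car accident
annual_extinction = 0.001      # the Stern Review's assumed 0.1% per year

# Compounding an annual probability p over n years: 1 - (1 - p)**n
lifetime_car_crash = 1 - (1 - annual_car_crash) ** 80     # ~0.85%, i.e. about 1 in 120
century_extinction = 1 - (1 - annual_extinction) ** 100   # ~9.5%

print(f"Lifetime (80-year) car-crash risk: {lifetime_car_crash:.2%}")
print(f"Century extinction risk:           {century_extinction:.2%}")
# The extinction figure comes out roughly an order of magnitude larger than
# the lifetime car-crash risk, which is the comparison drawn in the text.
```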

But how seriously should we take the 9.5% figure? Is it a plausible estimate of human extinction? The Stern Review is explicit that the number isn’t based on empirical considerations; it’s merely a useful assumption. The scholars who have considered the evidence, though, generally offer probability estimates higher than 9.5%. For example, a 2008 survey taken during a Future of Humanity Institute conference put the likelihood of extinction this century at 19%. The philosopher and futurist Nick Bostrom argues that it “would be misguided” to assign a probability of less than 25% to an existential catastrophe before 2100, adding that “the best estimate may be considerably higher.” And in his book Our Final Hour, Sir Martin Rees claims that civilization has a fifty-fifty chance of making it through the present century.

My own view more or less aligns with Rees’, given that future technologies are likely to introduce entirely new existential risks. A discussion of existential risks five decades from now could be dominated by scenarios that are unknowable to contemporary humans, just like nuclear weapons, engineered pandemics, and the possibility of “grey goo” were unknowable to people in the fourteenth century. This suggests that Rees may be underestimating the risk, since his figure is based on an analysis of currently known technologies.

If these estimates are believed, then the average person is roughly 19, 25, or even 50 times more likely to encounter an existential catastrophe than to perish in a car accident (taking each percentage in turn and comparing it to the roughly 1% lifetime chance of dying in a car crash).

These figures vary so much in part because estimating the risks associated with advanced technologies requires subjective judgments about how future technologies will develop. But this doesn’t mean that such judgments must be arbitrary or haphazard: they can still be based on technological trends and patterns of human behavior. In addition, other risks like asteroid impacts and supervolcanic eruptions can be estimated by examining the relevant historical data. For example, we know that an impactor capable of killing “more than 1.5 billion people” occurs every 100,000 years or so, and supereruptions happen about once every 50,000 years.
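Those recurrence intervals translate into per-century odds with the same kind of arithmetic, under the simplifying assumption that the events arrive at a roughly constant average rate:

```python
# Converting mean recurrence intervals into rough per-century odds
# (assumes a constant average rate; the intervals are the ones quoted above).
for name, interval_years in [("impactor killing more than 1.5 billion people", 100_000),
                             ("supereruption", 50_000)]:
    p_per_century = 100 / interval_years
    print(f"{name}: roughly {p_per_century:.1%} chance per century")
```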

Nonetheless, it’s noteworthy that all of the above estimates agree that people should be more worried about existential risks than any other risk mentioned.

Yet how many people are familiar with the concept of an existential risk? How often do politicians discuss large-scale threats to human survival in their speeches? Some political leaders — including one of the candidates currently running for president — don’t even believe that climate change is real. And there are far more scholarly articles published about dung beetles and Star Trek than existential risks. This is a very worrisome state of affairs. Not only are the consequences of an existential catastrophe irreversible — that is, they would affect everyone living at the time plus all future humans who might otherwise have come into existence — but the probability of one happening is far higher than most people suspect.

Given the maxim that people should always proportion their fears to the best available evidence, the rational person should worry about the above risks in the following order (from least to most risky): terrorism, lightning strikes, meteorites, car crashes, and existential catastrophes. The psychological fact is that our intuitions often fail to track the dangers around us. So, if we want to ensure a safe passage of humanity through the coming decades, we need to worry less about the Islamic State and al-Qaeda, and focus more on the threat of an existential catastrophe.

*Editor’s note: To clarify, the 0.1% figure from the Stern Review is used here purely for comparison to the numbers calculated in this article. The number was an assumption made in the Review and has no empirical backing. You can read more about this here.

The Challenge of Diversity in the AI World

Let me start this post with a personal anecdote.  At one of the first AI conferences I attended, literally every single one of the 15 or so speakers who presented on the conference’s first day was a man.  Finally, about 3/4 of the way through the two-day conference, a quartet of presentations on the social and economic impact of AI included two presentations by women.  Those two women also participated in the panel discussion that immediately followed the presentations–except that “participated” might be too strong a word, because the panel discussion essentially consisted of the two men on the panel arguing with each other for twenty minutes.

It gave off the uncomfortable impression (to me, at least) that even when women are seen in the AI world, it should be expected that they will immediately fade into the background once someone with a Y chromosome shows up. And the ethnic and racial diversity was scarcely better–I probably could count on one hand the number of people credentialed at the conference who were not either white or Asian.

Fast forward to this past week, when the White House’s Office of Science and Technology Policy released a request for information (RFI) on the promise and potential pitfalls of AI.  A Request for Information on AI doesn’t mean that the White House only heard about AI for the first time last week and is looking for someone to send them the link to relevant articles on Wikipedia.  Rather, a request for information issued by a governmental entity is a formal call for public comment on a particular topic that the entity wishes to examine more closely.

The RFI on AI specifies 10 areas (plus one “everything else” option) in which it is seeking comment, including:

(1) the legal and governance implications of AI; (2) the use of AI for public good; (3) the safety and control issues for AI; (4) the social and economic implications of AI; [and] (5) the most pressing, fundamental questions in AI research, common to most or all scientific fields . . .

The first four of these topics are the ones most directly relevant to this blog, and I likely will be submitting a comment on, not surprisingly, topic #1. But one of the most significant AI-related challenges is one that the White House probably was not even thinking about–namely, the AI world’s “sea of dudes” or “white guy” problem.

Anyone who has been to an AI conference in the US or Europe can tell you that the anecdote at the top of this post is not an aberration.  Attendees at AI conferences are predominantly white and overwhelmingly male.  According to a story that ran this week in Bloomberg, 86.3% of NIPS attendees last year were male.  Lack of diversity in the tech industry and in computer science is not new, but as the Bloomberg piece notes, it has particularly worrying implications for AI:

To teach computers about the world, researchers have to gather massive data sets of almost everything. To learn to identify flowers, you need to feed a computer tens of thousands of photos of flowers so that when it sees a photograph of a daffodil in poor light, it can draw on its experience and work out what it’s seeing.

If these data sets aren’t sufficiently broad, then companies can create AIs with biases. Speech recognition software with a data set that only contains people speaking in proper, stilted British English will have a hard time understanding the slang and diction of someone from an inner city in America. If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.

In other words, lack of diversity in AI is not merely a social or cultural concern; it actually has serious implications for how the technology itself develops.

A column by Kate Crawford that appeared in the New York Times this weekend makes this point even more aggressively, arguing that AI

may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

Professor Fei-Fei Li–one of the comparatively small number of female stars in the AI research world–brought this point home at a Stanford forum on AI this past week, arguing that whether AI can bring about the “hope we have for tomorrow” depends in part on broadening gender diversity in the AI world.  (Not AI-related, but Phil Torres recently wrote a post on the Future of Life Institute’s website explaining how greater female participation is necessary to maximize our collective intelligence.)

I actually think the problem goes even deeper than building demographically representative data sets.  I question whether the AI world’s entire approach to “intelligence” is unduly affected by its lack of diversity.  I suspect that women and people of African, Hispanic, Middle Eastern, and other descents would bring different perspectives on what “intelligence” is and what directions AI research should take.  Maybe I’m wrong about that, but we can’t know unless they are given a seat at the table and a chance to make their voices heard.

So if the White House wants to look at one of “the most pressing, fundamental questions in AI research,” I would heartily suggest that their initiative focus not only on the research itself, but also on the people who are tasked with conducting that research.

Top Scientists Call for Obama to Take Nuclear Missiles off Hair-Trigger Alert

The following post was written by Lisbeth Gronlund, co-director of the Global Security Program for the Union of Concerned Scientists.

More than 90 prominent US scientists, including 20 Nobel laureates and 90 National Academy of Sciences members, sent a letter to President Obama yesterday urging him to take US land-based nuclear missiles off hair-trigger alert and remove launch-on-warning options from US warplans.

As we’ve discussed previously on this blog and elsewhere, keeping these weapons on hair-trigger alert so they can be launched within minutes creates the risk of a mistaken launch in response to false warning of an incoming attack.

This practice dates to the Cold War, when US and Soviet military strategists feared a surprise first-strike nuclear attack that could destroy land-based missiles. By keeping missiles on hair-trigger alert, they could be launched before they could be destroyed on the ground. But as the letter notes, removing land-based missiles from hair-trigger alert “would still leave many hundreds of submarine-based warheads on alert—many more than necessary to maintain a reliable and credible deterrent.”

“Land-based nuclear missiles on high alert present the greatest risk of mistaken launch,” the letter states. “National leaders would have only a short amount of time—perhaps 10 minutes—to assess a warning and make a launch decision before these missiles could be destroyed by an incoming attack.”

Minuteman III launch officers (Source: US Air Force)

Past false alarms

Over the past few decades there have been numerous U.S. and Russian false alarms—due to technical failures, human errors and misinterpretations of data—that could have prompted a nuclear launch. The scientists’ letter points out that today’s heightened tension between the United States and Russia increases that risk.

The scientists’ letter reminds President Obama that he called for taking nuclear-armed missiles off hair-trigger alert after being elected president. During his 2008 presidential campaign, he also noted, “[K]eeping nuclear weapons ready to launch on a moment’s notice is a dangerous relic of the Cold War. Such policies increase the risk of catastrophic accidents or miscalculation.”

Other senior political and military officials have also called for an end to hair-trigger alert.

The scientists’ letter comes at an opportune time, since the White House is considering what steps the president could take in his remaining time in office to reduce the threat posed by nuclear weapons.

New AI Safety Research Agenda From Google Brain

Google Brain just released an inspiring research agenda, Concrete Problems in AI Safety, co-authored by researchers from OpenAI, Berkeley and Stanford. This document is a milestone in setting concrete research objectives for keeping reinforcement learning agents and other AI systems robust and beneficial. The problems studied are relevant to both near-term and long-term AI safety, from cleaning robots to higher-stakes applications. The paper takes an empirical focus on avoiding accidents as modern machine learning systems become increasingly autonomous and powerful.

Reinforcement learning is currently the most promising framework for building artificial agents – it is thus especially important to develop safety guidelines for this subfield of AI. The research agenda describes a comprehensive (though likely non-exhaustive) set of safety problems, corresponding to where things can go wrong when building AI systems:

  • Mis-specification of the objective function by the human designer. Two common pitfalls when designing objective functions are negative side-effects and reward hacking (also known as wireheading), which are likely to happen by default unless we figure out how to guard against them; a toy sketch of this failure mode appears after this list. One of the key challenges is specifying what it means for an agent to have a low impact on the environment while achieving its objectives effectively.

  • Extrapolation from limited information about the objective function. Even with a correct objective function, human supervision is likely to be costly, which calls for scalable oversight of the artificial agent.

  • Extrapolation from limited training data or using an inadequate model. We need to develop safe exploration strategies that avoid irreversibly bad outcomes, and build models that are robust to distributional shift – able to fail gracefully in situations that are far outside the training data distribution.
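To make the first of these failure modes concrete, here is a deliberately tiny, hypothetical sketch (the actions, numbers, and reward functions are invented for illustration and are not taken from the paper): an agent maximizing a proxy reward for “no observed mess” prefers disabling its own sensor to actually cleaning.

```python
# Toy illustration (not from the Google Brain paper) of objective
# mis-specification and reward hacking: the designer wants a cleaning robot
# to remove mess, but the proxy reward only measures the mess the robot
# *observes*, so blocking its own camera scores best.

from dataclasses import dataclass

@dataclass
class Outcome:
    mess_remaining: int  # what the designer actually cares about
    mess_observed: int   # what the proxy reward can measure
    effort: int          # cost of taking the action

# Hypothetical action outcomes, chosen only to make the failure mode visible.
ACTIONS = {
    "clean the room":   Outcome(mess_remaining=0, mess_observed=0, effort=3),
    "do nothing":       Outcome(mess_remaining=5, mess_observed=5, effort=0),
    "cover the camera": Outcome(mess_remaining=5, mess_observed=0, effort=1),
}

def proxy_reward(o: Outcome) -> int:
    # Intended meaning: "leave no mess". Actual meaning: "observe no mess".
    return -o.mess_observed - o.effort

def true_utility(o: Outcome) -> int:
    return -o.mess_remaining - o.effort

best_by_proxy = max(ACTIONS, key=lambda a: proxy_reward(ACTIONS[a]))
best_by_truth = max(ACTIONS, key=lambda a: true_utility(ACTIONS[a]))

print("Agent optimizing the proxy picks:", best_by_proxy)  # "cover the camera"
print("What the designer wanted:        ", best_by_truth)  # "clean the room"
```

The point of the toy example is only that the proxy and the designer’s true objective come apart at the optimum; the paper’s research directions are about detecting and preventing exactly this kind of divergence in realistic systems.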

The AI research community has focused increasingly on AI safety in recent years, and Google Brain’s agenda is part of this trend. It follows on the heels of the Safely Interruptible Agents paper from Google DeepMind and the Future of Humanity Institute, which investigates how to avoid unintended consequences from interrupting or shutting down reinforcement learning agents. We at FLI are super excited that industry research labs at Google and OpenAI are spearheading and fostering collaboration on AI safety research, and we look forward to the outcomes of this work.

Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?

This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. This post is the first of two that examine corporate personhood as a potential model for “AI personhood.”  Future posts will examine how AI could also be analogized to pets, wild animals, employees, children, and prisoners.


Could the legal concept of “corporate personhood” serve as a model for how legal systems treat AI?  Ever since the US Supreme Court’s Citizens United decision, corporate personhood has been a controversial topic in American political and legal discourse.  Count me in the group that thinks that Citizens United was a horrible decision and that the law treats corporations a little too much like ‘real’ people.  But I think the fundamental concept of corporate personhood is still sound.  Moreover, the historical reasons that led to the creation of “corporate personhood”–namely, the desire to encourage ambitious investments and the new technologies that come with them–hold lessons for how we may eventually decide to treat AI.

An Overview of Corporate Personhood

For the uninitiated, here is a brief and oversimplified review of how and why corporations came to be treated like “persons” in the eyes of the law.  During late antiquity and the Middle Ages, a company generally had no separate legal existence apart from its owner (or, in the case of partnerships, owners).  Because a company was essentially an extension of its owners, owners were personally liable for companies’ debts and other liabilities.  In the legal system, this meant that a plaintiff who successfully sued a company would be able to go after all of an owner’s personal assets.

This unlimited liability exposure meant that entrepreneurs were unlikely to invest in a company unless they could have a great deal of control over how that company would operate.  That, in turn, meant that companies rarely had more than a handful of owners, which made it very difficult to raise enough money for capital-intensive ventures.  When the rise of colonial empires and (especially) the Industrial Revolution created a need for larger companies capable of taking on more ambitious projects, the fact that companies had no separate legal existence and that their owners were subject to unlimited liability proved frustrating obstacles to economic growth.

The modern corporation was created to resolve these problems, primarily through two key features: legal personhood and limited liability.  “Personhood” means that under the law, corporations are treated like artificial persons, with a legal existence separate from their owners (shareholders).  Like natural persons (i.e., humans), corporations have the right to enter into contracts, own and dispose of assets, and file lawsuits–all in their own name.  “Limited liability” means that the owners of a corporation only stand to lose the amount of money, or capital, that they have invested in the corporation.  Plaintiffs cannot go after a corporate shareholder’s personal assets unless the shareholder engaged in unusual misconduct. Together, these features give a corporation a legal existence that is largely separate from its creators and owners.


The White House Considers the Future of AI

Artificial intelligence may be on the verge of changing the world forever. In many ways, just the automation and computer-science precursors to AI have already fundamentally changed how we interact, how we do our jobs, how we enjoy our free time, and even how we fight our wars. In the near future, we can expect self-driving cars, automated medical diagnoses, and AI programs predicting who will commit a crime. But our current federal system is woefully unprepared to deal with both the benefits and challenges these advances will bring.

To address these concerns, the White House formed a new Subcommittee on Machine Learning and Artificial Intelligence, which will monitor advances and milestones of AI development for the National Science and Technology Council. The subcommittee began with two conferences about the various benefits and risks that AI poses.

The first conference addressed the legal and governance issues we’ll likely face in the near term, while the second looked at how AI can be used for social good.

While many of the speakers, especially in the first conference, emphasized the need to focus on short-term concerns over long-term concerns of artificial general intelligence (AGI), the issues they discussed are also some of those we’ll need to address in order to ensure beneficial AGI.

For example, Oren Etzioni kicked off the conferences by arguing that we have ample time to address the longer term concerns about AGI in the future, and should therefore focus our current efforts on issues like jobs and privacy. But in response to a question from the audience, he expressed a more nuanced view: “As software systems become more complex … we will see more unexpected behavior … The software that we’ve been increasingly relying on will behave unexpectedly.” He also pointed out that we need to figure out how to deal with people who will do bad things with good AI systems and not just worry about AI that goes bad.

This viewpoint set the tone for the rest of the first conference.

Kate Crawford talked about the need for accountability and considered how difficult transparency can be for programs that essentially act as black boxes. Almost all of the speakers expressed concern about maintaining privacy, but Pedro Domingos added that privacy concerns are more about control:

“Who controls data about me?”

Another primary concern among the researchers was the misuse of data and the potential for bad actors to intentionally misuse AI. Bryant Walker Smith wondered who would decide when an AI was safe enough to be unleashed on the general public, and many of the speakers wondered how we should deal with an AI system that doesn’t behave as intended or that learns bad behavior from its new owners. Domingos mentioned that learning systems are fallible, and that they often fail in different ways than people do. This makes it even more difficult to predict how an AI system will behave outside of the lab.

Kay Firth-Butterfield, the Chief Officer of the Ethics Advisory Panel for Lucid, attended the conference, and in an email to FLI, she gave an example of how the research presented at these conferences can help us be better prepared as we move toward AGI. She said:

“I think that focus on the short-term benefits and concerns around AI can help to inform the work which needs to be done for understanding our interaction as humans with AGI. One of the short-term issues is transparency of decision making by Machine Learning AI. This is a short-term concern because it affects citizens’ rights if used by the government to assist decision making, see Danielle Citron’s excellent work in this area. Finding a solution to this issue now paves the way for greater clarity of systems in the future.”

While the second conference focused on some of the exciting applications research happening now with AI and big data, the speakers also brought up some of the concerns from the first conference, as well as some new issues.

Bias was a big concern, with White House representatives, Roy Austin and Lynn Overmann, both mentioning challenges they face in using inadvertently biased AI programs to address crime, police brutality and the criminal justice system. Privacy was another issue that came up frequently, especially as the speakers talked about improving the health system, using social media for data mining, and using citizen science. And simply trying to predict where AI would take us, and thus where to spend time and resources was another concern that speakers brought up.

But on the whole, the second conference was very optimistic, offering a taste of how AI can move us toward existential hope.

For example, traffic is estimated to cost the US $121 billion per year in lost time and fuel, while releasing 56 billion pounds of CO2 into the atmosphere. Stephen Smith is working on programs that can improve traffic lights to anticipate traffic, rather than react to it, saving people time and money.

Tom Dietterich discussed two programs he’s working on. TAHMO is a project to better understand weather patterns in Africa, which will, among other things, improve farming operations across the continent. He’s also using volunteer data to track bird migration, which can amount to thousands of data points per day. That’s data which can then be used to help coastal birds whose habitats will be destroyed as sea levels rise.

Milind Tambe created algorithms based on game theory to improve security at airports and shipping ports, and now he’s using similar programs to help stop poaching in places like Uganda and Malaysia.

Tanya Berger-Wolf is using crowdsourcing for conservation. Her project relies on pictures uploaded by thousands of tourists to track various animals and herds to better understand their lifestyles and whether or not the animals are at risk. The AI programs she employs can track specific animals via the uploaded images, just based on small variations of visible patterns on the skin and fur.

Erik Elster explained that each one of us will likely be misdiagnosed at least once because doctors still rely on visual diagnosis. He’s working to leverage machine learning to make more effective use of big data in medical science and procedures to improve diagnosis and treatment.

Henry Kautz collected data from social media to start predicting who would be getting sick before they started showing any signs of the flu or a cold.

Eric Horvitz discussed ten fields that could see incredible benefits from AI advancements, from medicine to agriculture to education to overall wellbeing.

Yet even while highlighting all the amazing ways AI can help improve our lives, these projects also shed some light on problems that might be minor nuisances in the short-term, but could become severe if we don’t solve them before strong AI is developed.

Kautz talked about how difficult accurate health information is to come by, given that people are reluctant to share their health data with strangers. Smith mentioned that one of the problems his group ran into during an early iteration of their traffic lights project was forgetting to take pedestrians into account. These are minor issues now, but forgetting to take into account a major component — like pedestrians — could become a much larger problem as AI advances and lives are potentially on the line.

In an email, Dietterich summed up the hopes and concerns of the second conference:

“At the AI for Social Good meeting, several people reported important ways that AI could be applied to address important problems. However, people also raised the concern that naively applying machine learning algorithms to biased data will lead to biased outcomes. One example is that volunteer bird watchers only go where they expect to find birds and police only patrol where they expect to find criminals.  This means that their data are biased. Fortunately, in machine learning we have developed algorithms for explicitly modeling the bias and then correcting for it. When the bias is extreme, nothing can be done, but when there is enough variation in the data, we can remove the bias and still produce useful results.”
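Dietterich’s point about explicitly modeling and then correcting for sampling bias has a standard statistical form: weight each observation by the inverse of its estimated probability of having been collected. A minimal, hypothetical sketch (the site counts and survey probabilities below are invented; the re-weighting follows the familiar Horvitz-Thompson idea):

```python
import numpy as np

# Invented example: volunteer birdwatchers mostly survey sites where they
# already expect birds, so raw counts understate the sites nobody visits.
survey_prob = np.array([0.9, 0.8, 0.2, 0.1])  # modeled chance each site was surveyed
counts      = np.array([40,  35,   6,   2])   # birds reported at each surveyed site

naive_total = counts.sum()                      # ignores the sampling bias
corrected_total = (counts / survey_prob).sum()  # inverse-probability weighting

print("Naive total:    ", naive_total)             # 83
print("Corrected total:", round(corrected_total))  # ~138
```

As the quote notes, this only works when there is enough variation in the data; if some places are never surveyed at all, no amount of re-weighting can recover them.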

Up to a point, artificial intelligence should be able to self-correct. But we need to recognize and communicate its limitations just as much as we appreciate its abilities.

Hanna Wallach also discussed her research, which looked at the problems that arise because computer scientists and social scientists so often talk past each other. It’s especially important for computer scientists to collaborate with social scientists in order to understand the implications of their models. She explained, “When data points are human, error analysis takes on a new level of importance.”

Overmann ended the day by mentioning that, as much as her team needs AI researchers, the AI researchers need the information her group has gathered as well. AI programs will naturally run better when better data is used to design them.

This last point is symbolic of all of the issues that were brought up about AI. While never mentioned explicitly, an underlying theme of the conferences is that AI research can’t just occur in a bubble: It must take into account all of the complexities and problems of the real world, and society as a whole must consider how we will incorporate AI into our lives.

In her email to FLI, Firth-Butterfield also added,

“I think that it is important to ask the public for their opinion because it is they who will be affected by the growth of AI. It is essential to beneficial and successful AI development to be cognizant of public opinion and respectful of issues such as personal privacy.”

The administration is only halfway through the conference series, so it will be looking at all of this and more as it determines where to focus its support for artificial intelligence research and development.

The next conference, Safety and Control of Artificial Intelligence, will be held on June 28 in Pittsburgh.

MIRI’s June 2016 Newsletter


The Collective Intelligence of Women Could Save the World

Neil deGrasse Tyson was once asked about his thoughts on the cosmos. In a slow, gloomy voice, he intoned, “The universe is a deadly place. At every opportunity, it’s trying to kill us. And so is Earth. From sinkholes to tornadoes, hurricanes, volcanoes, tsunamis.” Tyson humorously described a very real problem: the universe is a vast obstacle course of catastrophic dangers. Asteroid impacts, supervolcanic eruptions, and global pandemics represent existential risks that could annihilate our species or irreversibly catapult us back into the Stone Age.

But nature is the least of our worries. Today’s greatest existential risks stem from advanced technologies like nuclear weapons, biotechnology, synthetic biology, nanotechnology, and even artificial superintelligence. These tools could trigger a disaster of unprecedented proportions. Exacerbating this situation are “threat multipliers” — issues like climate change and biodiversity loss, which, while devastating in their own right, can also lead to an escalation of terrorism, pandemics, famines, and potentially even the use of WTDs (weapons of total destruction).

The good news is that none of these existential threats are inevitable. Humanity can overcome every single known danger. But accomplishing this will require the smartest groups working together for the common good of human survival.

So, how do we ensure that we have the smartest groups working to solve the problem?

Get women involved.

A 2010 study, published in Science, made two unexpected discoveries. First, it established that groups can exhibit a collective intelligence (or c factor). Most of us are familiar with general human intelligence, which describes a person’s intelligence level across a broad spectrum of cognitive tasks. It turns out groups also have a similar “collective” intelligence that determines how successfully they can navigate these cognitive tasks. This is an important finding because “research, management, and many other kinds of tasks are increasingly accomplished by groups — working both face-to-face and virtually.” To optimize group performance, we need to understand what makes a group more intelligent.

This leads to the second unexpected discovery. Intuitively, one might think that groups with really smart members will themselves be really smart. This is not the case. The researchers found no strong correlation between the average intelligence of members and the collective intelligence of the group. Similarly, one might suspect that the group’s IQ will increase if a member of the group has a particularly high IQ. Surely a group with Noam Chomsky will perform better than one in which he’s replaced by Joe Schmo. But again, the study found no strong correlation between the smartest person in the group and the group’s collective smarts.

Instead, the study found three factors linked to group intelligence. The first pertains to the “social sensitivity” of group members, measured by the “Reading the Mind in the Eyes” test. This term refers to one’s ability to infer the emotional states of others by picking up on certain non-verbal cues. The second concerns the number of speaking turns taken by members of the group. “In other words,” the authors write, “groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking.”

The last factor relates to the number of female members: the more women in the group, the higher the group’s IQ. As the authors of the study explained, “c was positively and significantly correlated with the proportion of females in the group.” If you find this surprising, you’re not alone: the authors themselves didn’t anticipate it, nor were they looking for a gender effect.

Why do women make groups smarter? The authors suggest that it’s because women are, generally speaking, more socially sensitive than men, and the link between social sensitivity and collective intelligence is statistically significant.

Another possibility is that men tend to dominate conversations more than women, which can disrupt the flow of turn-taking. Multiple studies have shown that women are interrupted more often than men; that when men interrupt women, it’s often to assert dominance; and that men are more likely to monopolize professional meetings. In other words, there’s robust empirical evidence for what the writer and activist Rebecca Solnit describes as “mansplaining.”

These data have direct implications for existential riskology:

Given the unique, technogenic dangers that haunt the twenty-first century, we need the smartest groups possible to tackle the problems posed by existential risks. We need groups that include women.

Yet the existential risk community is marked by a staggering imbalance of gender participation. For example, a random sample of 40 members of the “Existential Risk” group on Facebook (of which I am an active member) included only 3 women. Similar asymmetries can be found in many of the top research institutions working on global challenges.

This dearth of female scholars constitutes an existential emergency. If the studies above are correct, then the groups working on existential risk issues are not nearly as intelligent as they could be.

The obvious next question is: How can the existential risk community rectify this potentially dangerous situation? Some answers are implicit in the data above: for example, men could make sure that women have a voice in conversations, aren’t interrupted, and don’t get pushed to the sidelines in conversations monopolized by men.

Leaders of existential risk studies should also strive to ensure that women are adequately represented at conferences, that their work is promoted to the same extent as men’s, and that the environments in which existential risk scholarship takes place are free of discrimination. Other factors that have been linked to women avoiding certain fields include the absence of visible role models, the pernicious influence of gender stereotypes, the onerous demands of childcare, a lack of encouragement, and the statistical preference of women for professions that focus on “people” rather than “things.”

No doubt there are other factors not mentioned, and other strategies that could be identified. What can those of us already ensconced in the field do to achieve greater balance? What changes can the community make to foster more diversity? How can we most effectively maximize the collective intelligence of teams working on existential risks?

As Sir Martin Rees writes in Our Final Hour, “what happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.” Future generations may very well thank us for taking the link between collective intelligence and female participation seriously.

Note: there’s obviously a moral argument for ensuring that women have equal opportunities, get paid the same amount as men, and don’t have to endure workplace discrimination. The point of this article is to show that even if one brackets moral considerations, there are still compelling reasons for making the field more diverse. (For more, see chapter 14 of my book, which lays out a similar argument.)

Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like…

This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University’s Center for Science and the Imagination.  Matt Scherer runs the Law and AI blog.


Artificial intelligence (A.I.) systems are becoming increasingly ubiquitous in our economy and society, and are being designed with an ever-increasing ability to operate free of direct human supervision. Algorithmic trading systems account for a huge and still-growing share of stock market transactions, and autonomous vehicles with A.I. “drivers” are already being tested on the roads. Because they operate with less human supervision and control than earlier technologies, the rising prevalence of autonomous A.I. raises the question of how legal systems can ensure that victims receive compensation if (read: when) an A.I. system causes physical or economic harm during the course of its operations.

An increasingly hot topic in the still-small world of people interested in the legal issues surrounding A.I. is whether an autonomous A.I. system should be treated like a “person” in the eyes of the law. In other words, should we give A.I. systems some of the rights and responsibilities normally associated with natural persons (i.e., humans)? If so, precisely what rights should be granted to A.I. systems and what responsibilities should be imposed on them? Should human actors be assigned certain responsibilities in terms of directing and supervising the actions of autonomous systems? How should legal responsibility for an A.I. system’s behavior be allocated between the system itself and its human owner, operator, or supervisor?

Because it seems unlikely that Congress will be passing A.I. liability legislation in the near future, it likely will fall to the court system to answer these questions. In so doing, American courts will likely use the tried-and-tested common law approach of analogizing A.I. systems to something(s) in other areas of the law.

So, what are the potential analogues that could serve as a model for how the legal system treats A.I.?

Corporate personhood provides what is perhaps the most obvious model for A.I. personhood. Corporations have, at a minimum, a right to enter into contracts as well as to buy, own, and sell property. A corporation’s shareholders are only held liable for the debts of and injuries caused by the corporation if the shareholders engage in misconduct of some kind — say, by knowingly failing to invest enough money in the corporation to cover its debts, or by treating the corporation’s financial assets as a personal piggy bank. We could bestow a similar type of limited legal personhood on A.I. systems: give them property rights and the ability to sue and be sued, and only leave their owners on the hook under limited sets of circumstances. Of course, the tensions created by corporate personhood would likely be repeated with A.I. systems. Should personhood for an A.I. system include a right to free speech and direct liability for criminal acts?

Alternatively, we could treat the relationship between an A.I. system and its owner as akin to the relationship between an animal and its owner. Under traditional common law, if a “wild” animal that is considered dangerous by nature is kept as a pet, the animal’s owner is “strictly liable” for any harm that the animal causes. That means that if Farmer Jones’ pet wolf Fang escapes and kills two of Farmer Smith’s chickens, Farmer Jones is legally responsible for compensating Farmer Smith for the lost chickens, even if Fang had always been perfectly tame previously.

For domestic animals kept as pets, however, the owner generally must have some knowledge of that specific animal’s “dangerous propensities.” If Fang was a Chihuahua instead of a wolf, Farmer Smith might be out of luck unless he could show that Fang had previously shown flashes of violence. Perhaps certain A.I. systems that seem particularly risky, like autonomous weapon systems, could be treated like wild animals, while systems that seem particularly innocuous or that have a proven safety record are treated like domestic animals.

If we want to anthropomorphize the legal treatment of A.I. systems, we could treat them like employees and their owners like employers. American employers generally have a duty to exercise care in the hiring and supervision of employees. We might similarly require owners to exercise care when buying an A.I. system to serve in a particular role and to ensure that a system receives an adequate level of supervision, particularly if the system’s owner knows that it poses a particular risk.

And if we really want to anthropomorphize A.I. systems, we could analogize them to children and impose parent-like responsibilities on their owners. Like children, we could recognize only very limited types of rights for novel A.I. systems, but grant them additional rights as they “mature” — at least as long as they are not naughty. And like parents, we could hold a system’s owner civilly — and perhaps even criminally — liable if the system causes harm while in the “care” of the owner.

To close on a completely different note, perhaps A.I. systems should be treated like prisoners. Prisoners start out as ordinary citizens from the perspective of the law, but they lose civil rights and are required to take on additional responsibilities after they commit criminal acts. A recklessly forward-thinking approach to A.I. personhood might similarly start with the assumption that A.I. systems are people too, and give them the full panoply of civil rights that human beings enjoy. If a system breaks the law, however, society would reserve the right to punish it, whether by placing it on a form of “probation” requiring additional supervision, “boxing it in” by limiting its freedom to operate, or even by imposing the digital equivalent of the death penalty. Of course, these punishments would prove difficult to impose if the system is cloud-based or is otherwise inseparably distributed across multiple jurisdictions.


Which of these analogies appeals to you most likely depends on how skeptical you are of A.I. technologies and whether you believe it is morally and ethically acceptable to recognize “personhood” in artificial systems. In the end, legal systems will undoubtedly come up with unique ways of handling cases involving A.I.-caused harm. But these five digital analogues may provide us with a glimpse of how this emerging area of law may develop.

The Vicious Cycle of Ocean Currents and Global Warming: Slowing Thermohaline Circulation

The world’s oceans play a major role in mitigating the greenhouse effect, as they absorb roughly a quarter of all carbon dioxide (CO2) emissions. As this atmospheric CO2 mixes with the ocean’s surface, it forms carbonic acid, and when carbon uptake occurs on a massive scale—as it has for the past few decades—the ocean acidifies. Coral reefs and shell-forming animals are especially susceptible to overly acidic water, and their possible extinction has led to the most vocal concerns about CO2 in the ocean.
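The chemistry behind that acidification is simple: dissolved CO2 combines with water to form carbonic acid, which dissociates and releases hydrogen ions that lower the ocean’s pH (and reduce the carbonate available to corals and shell-forming animals):

```latex
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-}
```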

Yet despite fears that much of today’s marine life could go extinct, this process of carbon uptake in the oceans could result in an even more disturbing cycle: increased atmospheric CO2 could stall ocean currents that are essential to maintaining global temperatures, thus accelerating global warming.

Warm salt water travels north from the South Atlantic Ocean to the Arctic where it cools, becomes more saline, sinks and travels back south. This process is known as thermohaline circulation, and it moves an enormous amount of heat through the Atlantic Ocean, maintaining present climates. The Gulf Stream is the most well-known ocean current, but NASA has created a helpful global animation of the entire process of thermohaline circulation.

Today, increasing levels of carbon dioxide absorption in the Atlantic Ocean threaten to slow these important currents and endanger the ocean’s ability to absorb our emissions.

Yet this is a threat that has been recognized for at least twenty years.  In 1996, researchers Jorge Sarmiento and Corinne Le Quere found that ocean warming weakens thermohaline circulation. They concluded that this “weakened circulation reduces the ability of the ocean to absorb carbon dioxide, making the climate system even less forgiving of human emissions.” A year later, climate scientist Stefan Rahmstorf sought to understand the effects of doubling atmospheric carbon dioxide on the strength of thermohaline circulation. He looked at multiple model scenarios and discovered that thermohaline circulation could decrease by 20% to as much as 50%.

These findings suggest that if we continue to emit carbon dioxide on a large scale, we may soon be unable to rely on the ocean’s buffering capacity to mitigate our greenhouse effect.

Now, if the oceans, specifically the Atlantic Ocean, lose their ability to absorb massive amounts of carbon dioxide, presumably the process of ocean acidification will slow down, as well. But while this is a positive consequence of the ocean’s diminishing buffering capacity, it comes as a package deal with an increased level of carbon dioxide lingering in the atmosphere, augmenting the greenhouse effect. Nature might self-equilibrate to the benefit of coral reefs and shell-forming marine life, but scientists fear that the resulting increase of atmospheric carbon dioxide will further diminish thermohaline circulation and escalate the problem of global warming. This would lead to rising ocean temperatures and more Arctic ice melting.

Complicating this web of causes and effects, when more Arctic ice melts, it freshens the incoming salt water. As explained by Chris Mooney of the Washington Post, “if the water is less salty it will also be less dense, reducing its tendency to sink below the surface. This could slow or even eventually shut down the circulation.” This consequence feeds a cycle that decreases the buffering capacity of the oceans and raises ocean temperatures. Further complicating this relationship, a 2013 report on ocean acidification by the Congressional Research Service noted that “all gases, including CO2, are less soluble in water as temperature increases.” Thus, it seems inevitable that the oceans will become worse at absorbing carbon dioxide, that thermohaline circulation will diminish further, and that global warming will accelerate.

One can begin to see the multifaceted positive feedback cycle at work here. As we emit more carbon dioxide into the atmosphere, ocean temperature rises, Arctic ice melts, thermohaline circulation slows, and the ocean’s capacity to absorb carbon dioxide diminishes. This allows more carbon dioxide to enter the atmosphere, which causes the ocean temperatures to rise faster, the ice to melt faster, thermohaline circulation to slow further, and the ocean’s capacity to absorb carbon dioxide to diminish further. The process threatens to continue ad infinitum if we don’t cut carbon emissions. Sarmiento and Le Quere concluded their study with a warning: “the magnitude of future CO2 responses to such changes would be greatly magnified because of the reduced buffering capacity of the oceans under increased atmospheric CO2.”

This disturbing cycle highlights the ocean’s integral role in mitigating global warming, and makes it all the more urgent to find practical ways to cut carbon dioxide emissions. As complex and interconnected as this web of causes and effects is, carbon dioxide emissions are undeniably the root cause. While scientists and policy advisors have understood the dangers of carbon dioxide emissions for years, a deeper understanding of the ocean’s relationship with carbon dioxide offers further evidence of the need to begin limiting emissions now.

How Could a Failed Computer Chip Lead to Nuclear War?

The US early warning system is on watch 24/7, looking for signs of a nuclear missile launched at the United States. As a highly complex system with links to sensors around the globe and in space, it relies heavily on computers to do its job. So, what happens if there is a glitch in the computers?

Between November 1979 and June 1980, those computers led to several false warnings of all-out nuclear attack by the Soviet Union—and a heart-stopping middle-of-the-night telephone call.

NORAD command post, c. 1982. (Source: US National Archives)

I described one of these glitches previously. That one, in 1979, was actually caused by human and systems errors: A technician put a training tape in a computer that then—inexplicably—routed the information to the main US warning centers. The Pentagon’s investigator stated that they were never able to replicate the failure mode to figure out what happened.

Just months later, one of the millions of computer chips in the early warning system went haywire, leading to incidents on May 28, June 3, and June 6, 1980.

The June 3 “attack”

By far the most serious of the computer chip problems occurred early on June 3, when the main US warning centers all received notification of a large incoming nuclear strike. The president’s National Security Advisor Zbigniew Brzezinski woke at 3 am to a phone call telling him a large nuclear attack on the United States was underway and that he should prepare to call the president. He later said he had not woken up his wife, assuming they would all be dead in 30 minutes.

Like the November 1979 glitch, this one led NORAD to convene a high-level “Threat Assessment Conference,” which includes the Chair of the Joint Chiefs of Staff and is just below the level that involves the president. Taking this step sets lots of things in motion to increase the survivability of U.S. strategic forces and command and control systems. Air Force bomber crews at bases around the US got in their planes and started the engines, ready for take-off. Missile launch officers were notified to stand by for launch orders. The Pacific Command’s Airborne Command Post took off from Hawaii. The National Emergency Airborne Command Post at Andrews Air Force Base taxied into position for a rapid takeoff.

The warning centers, by comparing warning signals they were getting from several different sources, were able to determine within a few minutes they were seeing a false alarm—likely due to a computer glitch. The specific cause wasn’t identified until much later. At that point, a Pentagon document matter-of-factly stated that a 46-cent computer chip “simply wore out.”

Short decision times increase nuclear risks

As you’d hope, the warning system has checks built into it. So despite the glitches that caused false readings, the warning officers were able to catch the error in the short time available before the president would have to make a launch decision.

We know these checks are pretty good because there have been a surprising number of incidents like these, and so far none have led to nuclear war.

But we also know they are not foolproof.

The risk is compounded by the US policy of keeping its missiles on hair-trigger alert, poised to be launched before an incoming attack could land. Maintaining an option of launching quickly on warning of an attack leaves very little time for sorting out confusing signals and avoiding a mistaken launch.

For example, these and other unexpected incidents have led to considerable confusion on the part of the operators. What if the confusion had persisted longer? What might have happened if something else had been going on that suggested the warning was real? In his book, My Journey at the Nuclear Brink, former Secretary of Defense William Perry asks what might have happened if these glitches “had occurred during the Cuban Missile Crisis, or a Mideast war?”

There might also be unexpected coincidences. What if, for example, US sensors had detected an actual Soviet missile launch around the same time? In the early 1980s the Soviets were test launching 50 to 60 missiles per year—more than one per week. Indeed, US detection of the test of a Soviet submarine-launched missile had led to a Threat Assessment Conference just weeks before this event.

Given enough time to analyze the data, warning officers on duty would be able to sort out most false alarms. But the current system puts incredible time pressure on the decision process, giving warning officers and then more senior officials only a few minutes to assess the situation. If they decide the warning looks real, they would alert the president, who would have perhaps 10 minutes to decide whether to launch.

Keeping missiles on hair-trigger alert and requiring a decision within minutes of whether or not to launch is something like tailgating when you’re driving on the freeway. Leaving only a small distance between you and the car in front of you reduces the time you have to react. You may be able to get away with it for a while, but the longer you put yourself in that situation the greater the chance that some unforeseen situation, or combination of events, will lead to disaster.

In his book, William Perry makes a passionate case for taking missiles off alert:

“These stories of false alarms have focused a searing awareness of the immense peril we face when in mere minutes our leaders must make life-and-death decisions affecting the whole planet. Arguably, short decision times for response were necessary during the Cold War, but clearly those arguments do not apply today; yet we are still operating with an outdated system fashioned for Cold War exigencies.

“It is time for the United States to make clear the goal of removing all nuclear weapons everywhere from the prompt-launch status in which nuclear-armed ballistic missiles are ready to be launched in minutes.”

Wheel of Near Misfortune

 

To see what other incidents have increased the risks posed by nuclear weapons over the years, visit our new Wheel of Near Misfortune.

Writing the Human Genome

The Human Genome Project made big news in the early 2000s when an international group of scientists successfully completed a decade-long endeavor to map out the entirety of the human genome. Then, last month, genetic researchers caused some minor controversy when a group of about 150 scientists, lawyers and entrepreneurs met behind closed doors to discuss “writing” the human genome – that is, synthesizing human DNA sequences from scratch.

In response to the uproar, the group published a short article in Science this week, explaining the basic ideas behind their objectives.

The project, HGP-write (human genome project – write), is led by Jef D. Boeke, Andrew Hessel, Nancy J. Kelley, and FLI science advisory board member George Church, though over 20 participants helped pen the Science article. In the article, they explain, “Genome synthesis is a logical extension of the genetic engineering tools that have been used safely within the biotech industry for ~40 years and have provided important societal benefits.”

Recent advances in genetics and biotech, such as the explosion of CRISPR-cas9 and even the original Human Genome Project, have provided glimpses into a possible future in which we can cure cancer, ward off viruses, and generate healthy human organs. Scientists involved with HGP-write hope this project will finally help us achieve those goals. They wrote:

Potential applications include growing transplantable human organs; engineering immunity to viruses in cell lines via genome-wide recoding (12); engineering cancer resistance into new therapeutic cell lines; and accelerating high-productivity, cost-efficient vaccine and pharmaceutical development using human cells and organoids.

While there are clearly potential benefits to this technology, concerns about the project are to be expected, especially given the closed-door nature of the meeting. In response to the meeting last month, Drew Endy and Laurie Zoloth argued:

Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real, like today’s Harvard conference, should not take place without open and advance consideration of whether it is morally right to proceed.

The director of the National Institutes of Health, Francis S. Collins, was equally hesitant to embrace the project. In a statement to the New York Times, he said, “whole-genome, whole-organism synthesis projects extend far beyond current scientific capabilities, and immediately raise numerous ethical and philosophical red flags.”

In the Science article, the researchers of HGP-write insist that “HGP-write will require public involvement and consideration of ethical, legal, and social implications (ELSI) from the start.” This is a point Church reiterated to the Washington Post, explaining that there were already ELSI researchers who participated in the original meeting and that he expects more researchers to join as a response to the Science article.

The primary goal of the project is “to reduce the costs of engineering and testing large (0.1 to 100 billion base pairs) genomes in cell lines by over 1000-fold within 10 years.” The HGP-write initiative hopes to launch this year “with $100 million in committed support,” and they plan to complete the project for less than the $3 billion price tag of the original Human Genome Project.