CSER: Playing with Technological Dominoes

Published: October 12, 2015

Playing with Technological Dominoes
Advancing Research in an Era When Mistakes Can Be Catastrophic
by Sophie Hebden
April 7, 2015

The new Centre for the Study of Existential Risk at Cambridge University isn’t really there, at least not as a physical place—not yet. For now, it’s a meeting of minds, a network of people from diverse backgrounds who are worried about the same thing: how new technologies could cause huge numbers of fatalities and even threaten our future as a species. But plans are coming together for the centre’s next phase, due in place by the summer: an on-the-ground research programme.


We learn valuable information by creating powerful viruses in the lab, but risk a pandemic if an accident releases them. How can we weigh the costs and benefits?

Ever since our ancestors discovered how to make sharp stones more than two and a half million years ago, our mastery of tools has driven our success as a species. But as our tools become more powerful, we could be putting ourselves at risk should they fall into the wrong hands—or if humanity loses control of them altogether. Concerned about bioengineered viruses, unchecked climate change, and runaway artificial intelligence? These are the challenges the Centre for the Study of Existential Risk (CSER) was founded to grapple with.

At its heart, CSER is about ethics and the value you put on the lives of future, unborn people. If we feel any responsibility to the billions of people in future generations, then a key concern is ensuring that there are future generations at all.

The idea for CSER began as a conversation between a philosopher and a software engineer in a taxi. Huw Price, currently the Bertrand Russell Professor of Philosophy at Cambridge University, was on his way to a conference dinner in Copenhagen in 2011. He happened to share his ride with another conference attendee: Skype’s co-founder Jaan Tallinn.

“I thought, ‘Oh, that’s interesting, I’m in a taxi with one of the founders of Skype,’ so I thought I’d better talk to him,” joked Price. “So I asked him what he does these days, and he explained that he spends a lot of his time trying to persuade people to pay more attention to the risk that artificial intelligence poses to humanity.”

“The overall goal of CSER is to write a manual for managing and ameliorating these sorts of risks in future.”
– Huw Price

In the past few months, numerous high-profile figures—including founders of Google’s machine-learning company DeepMind and leaders of IBM’s Watson team—have been voicing concerns about the potential for high-level AI to cause unintended harm. But in 2011, it was startling for Price to find someone so embedded and successful in the computer industry taking AI risk seriously. He met privately with Tallinn shortly afterwards.

Plans came to fruition later at Cambridge when Price spoke to astronomer Martin Rees, the UK’s Astronomer Royal—a man well-known for his interest in threats to the future of humanity. The two made plans for Tallinn to come to the University to give a public lecture, enabling the three to meet. It was at that meeting that they agreed to establish CSER.

Price traces the start of CSER’s existence—at least online—to its website launch in June 2012. Under Rees’ influence, it quickly took on a broad range of topics, including the risks posed by synthetic biology, runaway climate change, and geoengineering.


Huw Price

“The overall goal of CSER,” says Price, painting the vision for the organisation with broad brush strokes, “is to write a manual, metaphorically speaking, for managing and ameliorating these sorts of risks in future.”

In fact, despite its rather pessimistic-sounding emphasis on risks, CSER is very much pro-technology: if anything, it wants to help developers and scientists make faster progress, declares Rees. “The buzzword is ‘responsible innovation’,” he says. “We want more and better-directed technology.”

Its current strategy is to use all its reputational power—which is considerable, as a Cambridge University institute—to gather experts together to decide on what’s needed to understand and reduce the risks. Price is proud of CSER’s impressive set of board members, which includes the world-famous theoretical physicist Stephen Hawking, as well as world leaders in AI, synthetic biology and economic theory.

He is frank about the plan: “We deliberately built an advisory board with a strong emphasis on people who are extremely well-respected to counter any perception of flakiness that these risks can have.”

The plan is working, he says. “Since we began to talk about AI risk there’s been a very big change in attitude. It’s become much more of a mainstream topic than it was two years ago, and that’s partly thanks to CSER.”

Even on more well-known subjects, CSER calls attention to new angles and perspectives on problems. Just last month, it launched a monthly seminar series by hosting a debate on the benefits and risks of research into potential pandemic pathogens.

The seminar focused on a controversial series of experiments by researchers in the Netherlands and the US to try to make the bird flu virus H5N1 transmissible between humans. By adding mutations to the virus, the researchers found it could transmit through the air between ferrets—the animal model that best approximates flu transmission in humans.

The answer isn’t “let’s shout at each other about whether someone’s going to destroy the world or not.” The right answer is, “let’s work together to develop this safely.”
– Sean O’hEigeartaigh, CSER Executive Director

Epidemiologist Marc Lipsitch of Harvard University presented his calculations of the ‘unacceptable’ risk that such research poses, whilst biologist Derek Smith of Cambridge University, who was a co-author on the original H5N1 study, argued why such research is vitally important.

Lipsitch explained that although the chance of an accidental release of the virus is low, any subsequent pandemic could kill more than a billion people. When he combined the risks with the costs, he found that each laboratory doing a single year of research is the equivalent of causing at least 2,000 fatalities. He considers this risk unacceptable. Even if his estimate is off by a factor of 1,000, he later told me, the research is still too dangerous.
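
To make the expected-value reasoning concrete, here is a back-of-the-envelope sketch using only the round figures quoted above; the per-lab-year pandemic probability is an illustrative assumption chosen to reproduce the 2,000-fatality figure, not a number from Lipsitch’s published analysis.

```python
# Back-of-the-envelope expected-fatalities calculation for risky
# pathogen research, using the round figures quoted in the article.

pandemic_deaths = 1_000_000_000    # "could kill more than a billion people"

# Illustrative assumption (not Lipsitch's published figure): the chance
# that one lab-year of this research triggers such a pandemic.
p_pandemic_per_lab_year = 2e-6     # about 1 in 500,000

expected_fatalities = p_pandemic_per_lab_year * pandemic_deaths
print(f"Expected fatalities per lab-year: {expected_fatalities:,.0f}")
# -> 2,000, matching the "at least 2,000 fatalities" equivalence above
```

Running the arithmetic the other way shows why the debate hinges on the probability estimate: even a one-in-a-million annual chance of triggering a billion-death pandemic still corresponds to a thousand expected deaths per laboratory per year.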

Smith argued that we can’t afford not to do this research, that knowledge is power—in this case the power to understand the importance of the mutations and how effective our vaccines are at preventing further infections. Research, he said, is essential for understanding whether we need to start “spending millions on preparing for a pandemic that could easily arise naturally—for instance by stockpiling antiviral treatments or culling poultry in China.”

CSER’s seminar series brings top minds to Cambridge to grapple with important questions like these. The ideas and relationships formed at such events grow into future workshops that then beget more ideas and relationships, and the network grows. Whilst its transatlantic ties are strongest, CSER is also keen to build links with European researchers. “Our European links seem particularly interested in the bio-risk side,” says Price.


Sean O’hEigeartaigh

The scientific attaché to Germany’s government approached CSER in October 2013, and in September 2014 CSER co-organised a meeting with Germany on existential risk. This led to two other workshops on managing risk in biotechnology and research into flu transmission—the latter hosted by Volkswagen in December 2014.

In addition to working with governments, CSER also plans to sponsor visits from researchers and leaders in industry, exchanging a few weeks of staff time for expert knowledge at the frontier of developments. It’s an interdisciplinary venture to draw together and share different innovators’ ideas about the extent and time-frames of risks. The larger the uncertainties, the bigger the role CSER can play in canvassing opinion and researching the risk.

“It’s fascinating to me when the really top experts disagree so much,” says Sean O’hEigeartaigh, CSER’s Executive Director. Some leading developers estimate that human-level AI will be achieved within 30-40 years, whilst others think it will take as long as 300 years. “When the stakes are so high, as they are for AI and synthetic biology, that makes it even more exciting,” he adds.

Despite its big vision and successes, CSER’s path won’t be easy. “There’s a misconception that if you set up a centre with famous people then the University just gives you money; that’s not what happens,” says O’hEigeartaigh.

Instead, they’ve had to work at it, and O’hEigeartaigh was brought on board in November 2012 to help grow the organisation. Through a combination of grants and individual donors, he has attracted enough funding to appoint three postdocs, who will be in place by the summer of 2015. Some major grants are in the works, and if all goes well, CSER will be a considerably larger team within the next year.

With a research team on the ground, Price envisions a network of subprojects working on different aspects: listening to experts’ concerns, predicting the timescales and risks more accurately through different techniques, and trying to reduce some of the uncertainties—even a small reduction will help.

Rees believes there’s still a lot of awareness-raising work to do ‘front-of-house’: he wants to see the risks posed by AI and synthetic biology become as mainstream as climate change, but without so much of the negativity.

“The answer isn’t ’let’s shout at each other about whether someone’s going to destroy the world or not’,” says O’hEigeartaigh. “The right answer is, ’let’s work together to develop this safely’.” Remembering the animated conversations in the foyer that buzzed with excitement following CSER’s seminar, I feel optimistic: it’s good to know some people are taking our future seriously.

This content was first published at futureoflife.org on October 12, 2015.

