What You Should Really Be Scared of on Halloween

It was four days before Halloween and the spirits were tense, both those above and those lurking in the waters below. There was agitation and busy preparation everywhere, and a sense of gloom and doom was weighing heavily on everyone’s minds. Deep in the waters the heat was rising, and the lost ones were finding no rest. Provoked by the world above, they were ready to unleash their curse. Had the time come for the world as they knew it to end?

It was indeed four days before Halloween: October 27, 1962. The spirits were tense, both those above, in the eleven US Navy destroyers and the aircraft carrier USS Randolph, and those lurking down in the waters below in the nuclear-armed Soviet submarine B-59. There was agitation and busy preparation everywhere due to the Cuban Missile Crisis, and a sense of gloom and doom was weighing heavily on everyone’s minds. Deep in the waters the heat rose past 45°C (113°F) as the submarine’s batteries were running out and the air-conditioning had stopped. On the verge of carbon dioxide poisoning, many crew members fainted. The crew was feeling lost and unsettled, as there had been no contact with Moscow for days and they didn’t know whether World War III had already begun. Then the Americans started dropping small depth charges at them. “We thought – that’s it – the end”, crew member V.P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.”

The world above was blissfully unaware that the submarine’s captain, Valentin Grigorievich Savitsky, had decided to launch the nuclear torpedo. “We will die, but we will sink them all – we will not disgrace our Navy!”, he exclaimed. In those brief moments it looked as though the time had come for the world as they knew it to end, creating more ghosts than Halloween had ever known.

Luckily for us, the decision to launch had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no. The chilling thought of how close we humans were to destroying everything we cherish makes this the scariest Halloween story. Like a really good Halloween story, this one has not a happy ending, but a suspenseful one in which we’ve only barely avoided the curse, and the danger remains with us. And like the very best Halloween stories, this one grew ever scarier over the years, as scientists came to realize that a dark smoky Halloween cloud might enshroud Earth for ten straight Halloweens, causing a decade-long nuclear winter producing not millions but billions of ghosts.

Right now, we humans have over 15,000 nuclear weapons, most of which are over a hundred times more powerful than those that obliterated Hiroshima and Nagasaki. Many of these weapons are kept on hair-trigger alert, ready to launch within minutes, increasing the risk of World War III starting by accident just as on that fateful Halloween 53 years ago. As more Halloweens pass, we accumulate more harrowing close calls, more near-encounters with the ghosts.

This Halloween you might want to get spooked by watching an explosion and reading about the blood-curdling nuclear close calls we’ve had in the past decades – and then, hopefully, do something to keep the curse away, so that one Halloween we’ll be able to say: nuclear war – nevermore.

This article can also be found on the Huffington Post and on MeiasMusings.

Grants Program Press Release

New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial

Elon-Musk-backed program signals growing interest in new branch of artificial intelligence research

July 1, 2015
Amid rapid industry investment in developing smarter artificial intelligence, a new branch of research has begun to take off aimed at ensuring that society can reap the benefits of AI while avoiding potential pitfalls.

The Boston-based Future of Life Institute (FLI) announced the selection of 37 research teams around the world to which it plans to award about $7 million from Elon Musk and the Open Philanthropy Project as part of a first-of-its-kind grant program dedicated to “keeping AI robust and beneficial”. The program launches as an increasing number of high-profile figures including Bill Gates, Elon Musk and Stephen Hawking voice concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

The 37 projects being funded include:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
  • A new Oxford-Cambridge research center for studying AI-relevant policy

As Skype founder Jaan Tallinn, one of FLI’s founders, has described this new research direction, “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

When the Future of Life Institute issued an open letter in January calling for research on how to keep AI both robust and beneficial, it was signed by a long list of AI researchers from academia, nonprofits and industry, including AI research leaders from Facebook, IBM, and Microsoft and the founders of Google’s DeepMind Technologies. It was seeing that widespread agreement that moved Elon Musk to seed the research program that has now begun.

“Here are all these leading AI researchers saying that AI safety is important”, said Musk at the time. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

“I am glad to have an opportunity to carry out this research focused on increasing the transparency of AI robotic systems,” said Manuela Veloso, past president of the Association for the Advancement of Artificial Intelligence (AAAI) and winner of one of the grants.

“This grant program was much needed: because of its emphasis on safe AI and multidisciplinarity, it fills a gap in the overall scenario of international funding programs,” added Prof. Francesca Rossi, president of the International Joint Conference on Artificial Intelligence (IJCAI), also a grant awardee.

Tom Dietterich, president of the AAAI, described how his grant — a project studying methods for AI learning systems to self-diagnose when failing to cope with a new situation — breaks the mold of traditional research:

“In its early days, AI research focused on the ‘known knowns’ by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the ‘known unknowns’ by using probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the ‘unknown unknowns’: How can an AI system behave carefully and conservatively in a world populated by unknown unknowns — aspects that the designers of the AI system have not anticipated at all?”

As Terminator Genisys debuts this week, organizers stressed the importance of separating fact from fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI”, said FLI president Max Tegmark. “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”

The full list of research grant winners can be found here. The plan is to fund these teams for up to three years, with most of the research projects starting by September 2015, and to focus the remaining $4M of the Musk-backed program on the areas that emerge as most promising.

FLI has a mission to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

Contacts at the Future of Life Institute:

  • Max Tegmark: max@futureoflife.org
  • Meia Chita-Tegmark: meia@futureoflife.org
  • Jaan Tallinn: jaan@futureoflife.org
  • Anthony Aguirre: anthony@futureoflife.org
  • Viktoriya Krakovna: vika@futureoflife.org
  • Jesse Galef: jesse@futureoflife.org

 

From Physics Today: China’s no-first-use nuclear policy

“China’s entire nuclear weapons posture, and its relatively small arsenal of about 250 warheads, is based on its pledge of no first use, according to Pan Zhenqiang, former director of strategic studies at China’s National Defense University.
Although that pledge is “extremely unlikely” to change, missile defense, space-based weapons, or other new technologies that threaten the credibility of China’s deterrent could lead to a policy shift and a buildup of its nuclear stockpile, said Pan, who is also a retired major general in the People’s Liberation Army.”

Read the full story here.

From Time: How We Can Overcome the Risks of AI

Apple’s recent acquisition of Vocal IQ, an artificial intelligence company that specializes in voice programs, should not on its face lead to much fanfare: It appears to be a smart business move to enhance Siri’s capabilities. But it is also another sign of the increased role of AI in our daily lives. While the warnings and promises of AI aren’t new, advances in technology make them more pressing.

Forbes reported this month: “The vision of talking to your computer like in Star Trek and it fully understanding and executing those commands are about to become reality in the next 5 years.” Antoine Blondeau, CEO at Sentient Technologies Holdings, recently told Wired that in five years he expects “massive gains” for human efficiency as a result of artificial intelligence, especially in the fields of health care, finance, logistics and retail.

Blondeau further envisions the rise of “evolutionary intelligence agents,” that is, computers which “evolve by themselves – trained to survive and thrive by writing their own code—spawning trillions of computer programs to solve incredibly complex problems.”

Read the full article.

From Global News Canada: Former Greenpeace president supports biotechnology

“Patrick Moore says biotechnology is one of the reasons farmers in Western Canada can feed more than a hundred people from a single farm. The former president of Greenpeace Canada says it’s one of the reasons he supports biotechnology, along with the use of pesticides and machinery in producing crops.

 

“Less than 100 years ago it took about 75 per cent of the population to grow the food for a country, and that’s still true in some African and Asian countries,” Moore told Global News.

“But here we’re growing enough food for the whole population and exporting a great deal at the same time with two to three per cent of the population. One Saskatchewan farmer is feeding 155 people today because of science and technology,” said Moore.”

Read the full article.

From NASA: Oceanic Phytoplankton Declines

“The world’s oceans have seen significant declines in certain types of microscopic plant-life at the base of the marine food chain, according to a new NASA study. The research, published Sept. 23 in Global Biogeochemical Cycles, a journal of the American Geophysical Union, is the first to look at global, long-term phytoplankton community trends based on a model driven by NASA satellite data.

Diatoms, the largest type of phytoplankton algae, have declined more than 1 percent per year from 1998 to 2012 globally, with significant losses occurring in the North Pacific, North Indian and Equatorial Indian oceans. The reduction in population may reduce the amount of carbon dioxide drawn out of the atmosphere and transferred to the deep ocean for long-term storage.”

Read the full story.

About Environment

After transforming our environment to allow farming and burgeoning populations, how can we minimize negative impact on climate and biodiversity? 

Media

FAQ

Research Papers

Organizations

Many of the organizations listed on this page and their descriptions are from a list compiled by the Global Catastrophic Risk Institute; we are most grateful for the efforts they have put into compiling it. The organizations listed here work primarily on environmental issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.

 

About Artificial Intelligence


Most benefits of civilization stem from intelligence, so how can we enhance these benefits with artificial intelligence without being replaced on the job market and perhaps altogether? Future computer technology can bring great benefits, and also new risks, as described in the resources below.

Videos

Media Articles

Articles by AI Researchers

Research Papers

Case Studies

Books

Organizations

Many of the organizations listed on this page and their descriptions are from a list compiled by the Global Catastrophic Risk Institute; we are most grateful for the efforts they have put into compiling it. The organizations listed here work primarily on computer technology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.

 

About Biotechnology

How can we live longer and healthier lives while avoiding risks such as engineered pandemics? Future biotechnology can bring great benefits, and also new risks, as described in the resources below. 

Videos

Research Papers

Books

Organizations

The organizations listed here work primarily on biotechnology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.

 

MIRI News: October 2015

MIRI’s October Newsletter collects recent news and links related to the long-term impact of artificial intelligence. Highlights:

— New introductory material on MIRI can be found on our information page.

— An Open Philanthropy Project update discusses investigations into global catastrophic risk and U.S. policy reform.

— “Research Suggests Human Brain Is 30 Times As Powerful As The Best Supercomputers.” Tech Times reports on new research by the AI Impacts project, which has “developed a preliminary method for comparing AI to a brain, which they call traversed edges per second, or TEPS. TEPS essentially determines how rapidly information is passed along a system.”
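
For readers unfamiliar with the metric: TEPS comes from the Graph500 family of supercomputer benchmarks, where you traverse a large graph and count how many edges are examined per second. The sketch below is only a toy illustration of that definition (graph sizes and function names are invented for the example); it is not the Graph500 benchmark itself, nor AI Impacts’ method for estimating the brain’s figure.

```python
# Toy illustration of a TEPS-style (traversed edges per second) measurement:
# build a random graph, run a breadth-first search, count the edges examined,
# and divide by the elapsed wall-clock time. Not the real Graph500 benchmark.
import random
import time
from collections import deque

def random_graph(n_nodes, n_edges, seed=0):
    """Return an undirected random graph as an adjacency list."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n_nodes)}
    for _ in range(n_edges):
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        adj[a].append(b)
        adj[b].append(a)
    return adj

def measure_teps(adj, source=0):
    """Breadth-first search from `source`; return edges traversed per second."""
    visited = {source}
    queue = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges_traversed += 1          # every edge examined counts
            if w not in visited:
                visited.add(w)
                queue.append(w)
    elapsed = time.perf_counter() - start
    return edges_traversed / elapsed

if __name__ == "__main__":
    g = random_graph(n_nodes=200_000, n_edges=2_000_000)
    print(f"~{measure_teps(g):.2e} traversed edges per second")
```

Roughly speaking, the AI Impacts comparison treats signals crossing synapses as the brain’s analogue of edge traversals, which is what allows a brain and a supercomputer to be placed on the same scale.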

— MIRI research associates develop a new approach to logical uncertainty in software agents. “The main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false. […] By giving the system a type of logical omniscience, you make it predictable, which allows you to prove things about it. However, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences, and let it run forever. We can then ask about whether or not the system eventually gives good probabilities.”

— Tom Dietterich and Eric Horvitz discuss the rise of concerns about AI. “[W]e believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk.” See also Luke Muehlhauser’s response.

From UW Today: Oceans Releasing Frozen Methane

Bubble plumes off Washington, Oregon suggest warmer ocean may be releasing frozen methane

“Warming ocean temperatures a third of a mile below the surface, in a dark ocean in areas with little marine life, might attract scant attention. But this is precisely the depth where frozen pockets of methane ‘ice’ transition from a dormant solid to a powerful greenhouse gas.

New University of Washington research suggests that subsurface warming could be causing more methane gas to bubble up off the Washington and Oregon coast.

The study, to appear in the journal Geochemistry, Geophysics, Geosystems, a journal of the American Geophysical Union, shows that of 168 bubble plumes observed within the past decade a disproportionate number were seen at a critical depth for the stability of methane hydrates.”

Read the full article here.

AI safety at the United Nations

Nick Bostrom and I were invited to speak at the United Nations about how to avoid AI risk. I’d never been there before, and it was quite the adventure! Here’s the video – I start talking at 1:54:40 and Nick Bostrom at 2:14:30.


$11M AI safety research program launched

Elon-Musk-backed program signals growing interest in new branch of artificial intelligence research.

A new international grants program jump-starts research to ensure AI remains beneficial.

 


AI safety conference in Puerto Rico

The Future of AI: Opportunities and Challenges 

This conference brought together the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls (see this open letter and this list of research priorities). To facilitate candid and constructive discussions, no media were present and the Chatham House Rule applied: nobody’s talks or statements would be shared without their permission.
Where? San Juan, Puerto Rico
When? Arrive by evening of Friday January 2, depart after lunch on Monday January 5 (see program below)

 


Scientific organizing committee:

  • Erik Brynjolfsson, MIT, Professor at the MIT Sloan School of Management, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
  • Demis Hassabis, Founder, DeepMind
  • Eric Horvitz, Microsoft, co-chair of the AAAI presidential panel on long-term AI futures
  • Shane Legg, Founder, DeepMind
  • Peter Norvig, Google, Director of Research, co-author of the standard textbook Artificial Intelligence: a Modern Approach.
  • Francesca Rossi, Univ. Padova, Professor of Computer Science, President of the International Joint Conference on Artificial Intelligence
  • Stuart Russell, UC Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.
  • Bart Selman, Cornell University, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures
  • Murray Shanahan, Imperial College, Professor of Cognitive Robotics
  • Mustafa Suleyman, Founder, DeepMind
  • Max Tegmark, MIT, Professor of physics, author of Our Mathematical Universe

Local Organizers:
Anthony Aguirre, Meia Chita-Tegmark, Viktoriya Krakovna, Janos Kramar, Richard Mallah, Max Tegmark, Susan Young

Support: Funding and organizational support for this conference is provided by Skype founder Jaan Tallinn, the Future of Life Institute, and the Centre for the Study of Existential Risk.

PROGRAM

Friday January 2:
1600-late: Registration open
1930-2130: Welcome reception (Las Olas Terrace)

Saturday January 3:
0800-0900: Breakfast
0900-1200: Overview (one review talk on each of the four conference themes)
• Welcome
• Ryan Calo (Univ. Washington): AI and the law
• Erik Brynjolfsson (MIT): AI and economics (pdf)
• Richard Sutton (Alberta): Creating human-level AI: how and when? (pdf)
• Stuart Russell (Berkeley): The long-term future of (artificial) intelligence (pdf)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Optimizing the economic impact of AI
(A typical 3-hour session consists of a few 20-minute talks followed by a discussion panel where the panelists who haven’t already given talks get to give brief introductory remarks before the general discussion ensues.)
What can we do now to maximize the chances of reaping the economic bounty from AI while minimizing unwanted side-effects on the labor market?
Speakers:
• Andrew McAfee, MIT (pdf)
• James Manyika, McKinsey (pdf)
• Michael Osborne, Oxford (pdf)
Panelists include Ajay Agrawal (Toronto), Erik Brynjolfsson (MIT), Robin Hanson (GMU), Scott Phoenix (Vicarious)
1900: dinner

Sunday January 4:
0800-0900: Breakfast
0900-1200: Creating human-level AI: how and when?
Short talks followed by panel discussion: will it happen, and if so, when? Via engineered solution, whole brain emulation, or other means? (We defer until the 4pm session questions regarding what will happen, about whether machines will have goals, about ethics, etc.)
Speakers:
• Demis Hassabis, Google/DeepMind
• Dileep George, Vicarious (pdf)
• Tom Mitchell, CMU (pdf)
Panelists include Joscha Bach (MIT), Francesca Rossi (Padova), Richard Mallah (Cambridge Semantics), Richard Sutton (Alberta)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Intelligence explosion: science or fiction?
If an intelligence explosion happens, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? Containment problem? Is “friendly AI” possible? Feasible? Likely to happen?
Speakers:
• Nick Bostrom, Oxford (pdf)
• Bart Selman, Cornell (pdf)
• Jaan Tallinn, Skype founder (pdf)
• Elon Musk, SpaceX, Tesla Motors
Panelists include Shane Legg (Google/DeepMind), Murray Shanahan (Imperial), Vernor Vinge (San Diego), Eliezer Yudkowsky (MIRI)
1930: banquet (outside by beach)

Monday January 5:
0800-0900: Breakfast
0900-1200: Law & ethics: Improving the legal framework for autonomous systems
How should legislation be improved to best protect the AI industry and consumers? If self-driving cars cut the 32,000 annual US traffic fatalities in half, the car makers won’t get 16,000 thank-you notes, but 16,000 lawsuits. How can we ensure that autonomous systems do what we want? And who should be held liable if things go wrong? How do we tackle criminal AI? What about AI ethics, and the ethical and legal frameworks for military and financial systems?
Speakers:
• Joshua Greene, Harvard (pdf)
• Heather Roff Perkins, Univ. Denver (pdf)
• David Vladeck, Georgetown
Panelists include Ryan Calo (Univ. Washington), Tom Dietterich (Oregon State, AAAI president), Kent Walker (General Counsel, Google)
1200: Lunch, depart

PARTICIPANTS
You’ll find a list of participants and their bios here.


Back row, from left to right: Tom Mitchell, Seán Ó hÉigeartaigh, Huw Price, Shamil Chandaria, Jaan Tallinn, Stuart Russell, Bill Hibbard, Blaise Agüera y Arcas, Anders Sandberg, Daniel Dewey, Stuart Armstrong, Luke Muehlhauser, Tom Dietterich, Michael Osborne, James Manyika, Ajay Agrawal, Richard Mallah, Nancy Chang, Matthew Putman
Other standing, left to right: Marilyn Thompson, Rich Sutton, Alex Wissner-Gross, Sam Teller, Toby Ord, Joscha Bach, Katja Grace, Adrian Weller, Heather Roff-Perkins, Dileep George, Shane Legg, Demis Hassabis, Wendell Wallach, Charina Choi, Ilya Sutskever, Kent Walker, Cecilia Tilli, Nick Bostrom, Erik Brynjolfsson, Steve Crossan, Mustafa Suleyman, Scott Phoenix, Neil Jacobstein, Murray Shanahan, Robin Hanson, Francesca Rossi, Nate Soares, Elon Musk, Andrew McAfee, Bart Selman, Michele Reilly, Aaron VanDevender, Max Tegmark, Margaret Boden, Joshua Greene, Paul Christiano, Eliezer Yudkowsky, David Parkes, Laurent Orseau, JB Straubel, James Moor, Sean Legassick, Mason Hartman, Howie Lempel, David Vladeck, Jacob Steinhardt, Michael Vassar, Ryan Calo, Susan Young, Owain Evans, Riva-Melissa Tez, János Kramár, Geoff Anders, Vernor Vinge, Anthony Aguirre
Seated: Sam Harris, Tomaso Poggio, Marin Soljačić, Viktoriya Krakovna, Meia Chita-Tegmark
Behind the camera: Anthony Aguirre (and also photoshopped in by the human-level intelligence on his left)

Elon Musk donates $10M to keep AI beneficial

Thursday January 15, 2015

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI researchers has signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk’s donation aims to support precisely this type of research: “Here are all these leading AI researchers saying that AI safety is important”, says Elon Musk. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

Musk’s announcement was welcomed by AI leaders in both academia and industry:

“It’s wonderful, because this will provide the impetus to jump-start research on AI safety”, said AAAI president Tom Dietterich. “This addresses several fundamental questions in AI research that deserve much more funding than even this donation will provide.”

“Dramatic advances in artificial intelligence are opening up a range of exciting new applications”, said Demis Hassabis, Shane Legg and Mustafa Suleyman, co-founders of DeepMind Technologies, which was recently acquired by Google. “With these newfound powers comes increased responsibility. Elon’s generous donation will support researchers as they investigate the safe and ethical use of artificial intelligence, laying foundations that will have far reaching societal impacts as these technologies continue to progress”.


Elon Musk and AAAI President Thomas Dietterich comment on the announcement

The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI researchers Stuart Russell and Francesca Rossi. “I love technology, because it’s what’s made 2015 better than the Stone Age”, says MIT professor and FLI president Max Tegmark. “Our organization studies how we can maximize the benefits of future technologies while avoiding potential pitfalls.”

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here). “Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere”, says FLI co-founder Viktoriya Krakovna.

“This donation will make a major impact”, said UCSC professor and FLI co-founder Anthony Aguirre: “While heavy industry and government investment has finally brought AI from niche academic research to early forms of a potentially world-transforming technology, to date relatively little funding has been available to help ensure that this change is actually a net positive one for humanity.”

“That AI systems should be beneficial in their effect on human society is a given”, said Stuart Russell, co-author of the standard AI textbook “Artificial Intelligence: a Modern Approach”. “The research that will be funded under this program will make sure that happens. It’s an intrinsic and essential part of doing AI research.”

Skype founder Jaan Tallinn, one of FLI’s founders, agrees: “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories.

“Hopefully this grant program will help shift our focus from building things just because we can, toward building things because they are good for us in the long term”, says FLI co-founder Meia Chita-Tegmark.

Contacts at Future of Life Institute:

  • Max Tegmark: max@futureoflife.org
  • Meia Chita-Tegmark: meia@futureoflife.org
  • Jaan Tallinn: jaan@futureoflife.org
  • Anthony Aguirre: anthony@futureoflife.org
  • Viktoriya Krakovna: vika@futureoflife.org

Contacts among AI researchers:

  • Prof. Tom Dietterich, President of the Association for the Advancement of Artificial Intelligence (AAAI), Director of Intelligent Systems: tgd@eecs.oregonstate.edu
  • Prof. Stuart Russell, Berkeley, Director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach: russell@cs.berkeley.edu
  • Prof. Bart Selman, co-chair of the AAAI presidential panel on long-term AI futures: selman@cs.cornell.edu
  • Prof. Francesca Rossi, Professor of Computer Science, University of Padova and Harvard University, president of the International Joint Conference on Artificial Intelligence (IJCAI): frossi@math.unipd.it
  • Prof. Murray Shanahan, Imperial College: m.shanahan@imperial.ac.uk


Max Tegmark interviews Elon Musk about his life, his interest in the future of humanity and the background to his donation

The Power to Remake a Species

 


Once started, a carefully implemented gene drive could eradicate the entire malaria-causing Anopheles species of mosquito.

In 2013, some 200 million humans suffered from malaria, and an estimated 584,000 of them died, 90 percent in Africa. The vast majority of those killed were children under age 5. Decades of research have fallen short of a vaccine for this scourge. A powerful new technique that allows scientists to selectively edit entire genomes could provide a solution, but it also poses risks—and ethical questions science is only beginning to address.

The technique relies on a tool called a gene drive, something scientists have discussed since 2003 but which has only recently become possible. A gene drive greatly increases the odds that a particular gene will be inherited by all future generations. Genes occasionally evolve the ability naturally, but if we could engineer it deliberately, small interventions could have enormous impact, giving scientists the power to eradicate diseases, remove invasive species, and wholly remake the natural landscape.

One proposed use of a gene drive would alter the genetic code of a few mosquitoes that carry the malaria parasite, ensuring that the ‘Y’ chromosome would always be passed on. The result is a male-only line that systematically topples the population’s gender balance. Once started, a carefully implemented gene drive could eradicate the entire malaria-causing Anopheles species.

“Its advantage over vaccines is that you don’t have to go out and inject every person at risk,” says George Church, a geneticist at the Wyss Institute at Harvard Medical School. “You simply have to introduce a small number of mosquitoes into the wild, and they do all the work. They become your foot soldiers, or your cadre of nurses.”


But because gene drives spread the adaptation throughout an entire population, some scientists are concerned that the technology is advancing before we have a conversation about the best ways to use it wisely – and safely.

“Of all the species that cause human suffering, the malarial mosquito is arguably number one,” says Kevin Esvelt, a researcher at the Wyss Institute. “If a gene drive would allow us to eradicate malaria the way we eradicated smallpox, that’s a possibility we at least need to consider. At the same time, this raises questions of, who gets to decide? Given the urgency of problems like malaria, we should probably be talking about it now.”

The Machinery of Gene Drives


George Church, Wyss Institute, Harvard Medical School

Interest in gene drives’ potential has intensified since 2012, when scientists developed the gene-editing technique known as CRISPR (for DNA sequences called clustered regularly interspaced short palindromic repeats). Derived from a bacterial defense strategy, CRISPR is a search, cut-and-paste system that works in any cell. It uses an enzyme to home in on a specific nucleotide sequence, slice it, and replace it with others of the scientists’ choosing. CRISPR is cheap and precise, making gene drives viable.
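
As a loose programming analogy (and nothing more), the “search, cut-and-paste” behaviour can be pictured as a guided find-and-replace over a DNA string: look for a site matching a short guide sequence that sits next to an “NGG” motif, as the commonly used Cas9 enzyme requires, cut there, and splice in new bases. The sketch below is a toy with made-up sequences and function names; real editing outcomes depend on cellular repair machinery that no string operation captures.

```python
# Toy string-matching analogy for CRISPR's "search, cut and paste" behaviour:
# find a site matching a short guide sequence that is followed by an "NGG"
# motif, remove it, and splice in replacement bases.
# Purely illustrative -- the sequences and names here are invented.

def find_target(dna: str, guide: str) -> int:
    """Index of `guide` in `dna` where an NGG motif follows, else -1."""
    i = dna.find(guide)
    while i != -1:
        motif = dna[i + len(guide): i + len(guide) + 3]
        if len(motif) == 3 and motif.endswith("GG"):   # "NGG": any base, then GG
            return i
        i = dna.find(guide, i + 1)
    return -1

def edit(dna: str, guide: str, replacement: str) -> str:
    """Cut out the guide-matched site and paste in `replacement`."""
    i = find_target(dna, guide)
    if i == -1:
        return dna                                     # no valid site: unchanged
    return dna[:i] + replacement + dna[i + len(guide):]

if __name__ == "__main__":
    genome = "ATGCCGTACGTTAGCAGGTTTACG"   # guide matches at index 5, "AGG" follows
    print(edit(genome, guide="GTACGTTAGC", replacement="GTACCATAGC"))
    # -> ATGCCGTACCATAGCAGGTTTACG
```

The gene-drive trick described next is to make the organism itself carry this find-and-replace machinery in its genome, so the edit is repeated in every generation.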

In normal sexual reproduction, offspring inherit a random half of genes from each parent. By encoding the CRISPR editing machinery in a genome along with whatever new trait you’d want to include, you would ensure that any offspring not only have the new mutation, but the tools to give that same trait to the next generation, and so on. The gene then drives through an entire population exponentially.

“It would be as if in your family, all of your daughters and sons insisted that all of their daughters and sons would have the same last name. Then your name would spread throughout the population,” Church says.

Mosquitoes reproduce quickly, making them an ideal target for CRISPR modification, Esvelt says. If the mutation reduced mosquitoes’ offspring or rendered males sterile, the population could be wiped out in a single season—along with the parasite that causes malaria.
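
To build intuition for both effects – inheritance biased far beyond the Mendelian 50 percent, and a male-only line that erodes the breeding population – here is a deliberately crude simulation. Every parameter is invented for illustration; it is not a model of real Anopheles genetics, mating behaviour, or ecology, and it ignores resistance, migration, and much else.

```python
# Cartoon of the driving-Y idea described above: males carrying the drive
# pass it to all of their offspring, and those offspring are all sons, so
# females grow scarcer each generation until the population collapses.
# All numbers are invented for illustration only.
import random

FECUNDITY = 5         # offspring per female per generation (made up)
CAPACITY = 10_000     # cap on total offspring per generation (made up)

def step(females, wild_males, drive_males, rng):
    """Advance one generation; return new (females, wild_males, drive_males)."""
    total_males = wild_males + drive_males
    if females == 0 or total_males == 0:
        return 0, 0, 0
    n_offspring = min(CAPACITY, females * FECUNDITY)
    new_f = new_wm = new_dm = 0
    for _ in range(n_offspring):
        father_has_drive = rng.random() < drive_males / total_males
        if father_has_drive:
            new_dm += 1               # driving Y: always a drive-carrying son
        elif rng.random() < 0.5:
            new_f += 1                # ordinary inheritance: half daughters...
        else:
            new_wm += 1               # ...half wild-type sons
    return new_f, new_wm, new_dm

if __name__ == "__main__":
    rng = random.Random(1)
    females, wild_males, drive_males = 5_000, 4_950, 50   # small initial release
    for gen in range(1, 41):
        females, wild_males, drive_males = step(females, wild_males, drive_males, rng)
        print(f"gen {gen:2d}: females={females:5d}  wild males={wild_males:5d}  "
              f"drive males={drive_males:5d}")
        if females == 0:
            print("no females left -- population crashes")
            break
```

In this toy run the drive’s share of the males roughly doubles each generation at first, which is the exponential spread described above; a real release would behave very differently, since mosquito ecology, mating competition, and resistance are all missing here.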

This would require more work to isolate the genes involved, however. The technique also needs to become more efficient, Church says. The enzyme that hunts down target DNA sequences sometimes misses its mark, which could introduce unintended—and harmful—changes in the genome that spread throughout the species.

Genetic Upgrades or Risks

These ethical and safety concerns came into stark relief in April, when Chinese scientists reported editing the genomes of human embryos. The embryos were already non-viable, meaning they would not have resulted in live birth. CRISPR-mediated changes to their chromosomes were unsuccessful, and resulted in several off-target mutations, according to the researchers, led by Junjiu Huang at Sun Yat-sen University in Guangzhou, China. The paper set off a firestorm of controversy and calls for a moratorium on such research, including from the National Academies and the White House Office of Science and Technology Policy.


As we develop the power to remake a species, the question becomes how to best use it without causing a cascade of unintended consequences.

“We should be very concerned about the prospect of using these gene editing techniques for altering traits that are passed on,” says Marcy Darnovsky, executive director of the Center for Genetics and Society, a California-based nonprofit. “If you think about how that could—or perhaps would—likely play out in the social, political, and commercial environment in which we all live, it’s easy to see how you could get into hot water pretty quickly. You could have wealthy people purchasing genetic upgrades for their children, for instance. It sounds like science fiction.”

Even if gene drives are only used in pests, ethical questions still loom large. Eliminating a whole mosquito species could make way for new pests, or disrupt predators who feast on the insects, Esvelt says. And there could be human consequences, too. David Gurwitz, a neuroscientist at Tel Aviv University in Israel, wrote in an August 2014 letter to the journal Science that gene drives could also be used for nefarious purposes.

“Just as gene drives can make mosquitoes unfit for hosting and spreading the malaria parasite, they could conceivably be designed with gene drives carrying cargo for delivering lethal bacteria toxins to humans. Other scary scenarios, such as targeted attacks on major crop plants, could also be envisaged,” he wrote. He called for elements of CRISPR editing techniques to stay out of the scientific literature. In an email, he said he is “amazed at the lack of public discussion” to date on gene drive use.

Offense vs. Defense


Kevin Esvelt, Wyss Institute, Harvard Medical School

In part because it is so inexpensive—the reagents and plasmid DNA used in CRISPR modification can be had for under $100, Church says—the method has spread through labs around the world like wildfire. “It’s hard to keep people from doing things that are simple and cheap,” he says. It has been shown to work in at least 30 organisms, according to Esvelt. As CRISPR use becomes more common, Esvelt, Church and their colleagues have aimed to develop ways to ensure its safety.

If one modified organism escapes from a lab and is able to breed with a wild relative, its altered gene would quickly spread through the entire population, making containment especially important. In April 2014, Esvelt, Church and a team of scientists published a commentary in Science suggesting methods for preventing accidental gene drive releases, such as conducting experiments on malarial mosquitoes in climates where no Anopheles relatives live. Esvelt also suggests CRISPR itself could be used to reverse an accidental release, by simply undoing the edit.

In April 2015, Church and colleagues and a separate team led by Alexis Rovner and Farren Isaacs of Yale University reported two new ways to generate modified organisms that could never survive outside a lab. Both approaches make the altered organism dependent on an unnatural amino acid it could never obtain in the wild.


Turning this concept inside out could yield engineered pests or weeds that succumb to natural substances that don’t harm anything else, Esvelt says. Instead of modifying crops to resist a broad-spectrum herbicide, for instance, gene drives could modify the weeds themselves: “You could create a vulnerability that did not previously exist to a compound that would not harm any other living thing.”

But any containment methods would have to follow the law, which remains murky, he says. Absent a national policy—which Darnovsky says should come from Congress—scientists should be talking about how, and when, CRISPR should be used.

“The question becomes ‘Should we?’ rather than ‘Can we?’” Esvelt says. “To what extent do scientists have the right to work on problems where, if they screw up, it could affect us all?”

Darnovsky, who notes that scientists have only just begun to understand the machinery of life as it evolved over millions of years, argues that scientists should not monopolize discussions about the use of CRISPR.

“We need to develop habits of mind, or habits of social interaction, that will allow for some very robust public participation on the use of these very powerful technologies,” she says. “It’s the future of life. It’s an issue that affects everybody.”

FHI: Putting Odds on Humanity’s Extinction

Putting Odds on Humanity’s Extinction
The Team Tasked With Predicting – and Preventing – Catastrophe
by Carinne Piekema
May 13, 2015


Not long ago, I drove off in my car to visit a friend in a rustic village in the English countryside. I didn’t exactly know where to go, but I figured it didn’t matter because I had my navigator at the ready. Unfortunately for me, as I got closer, the GPS signal became increasingly weak and eventually disappeared. I drove around aimlessly for a while without a paper map, cursing my dependence on modern technology.


It may seem gloomy to be faced with a graph that predicts the potential for extinction, but the FHI researchers believe it can stimulate people to start thinking—and take action.

But as technology advances over the coming years, the consequences of it failing could be far more troubling than getting lost. Those concerns keep the researchers at the Future of Humanity Institute (FHI) in Oxford occupied—and the stakes are high. In fact, visitors glancing at the white boards surrounding the FHI meeting area would be confronted by a graph estimating the likelihood that humanity dies out within the next 100 years. Members of the Institute have marked their personal predictions, ranging from optimistic to seriously pessimistic, with some estimating as high as a 40% chance of extinction. It’s not just the FHI members: at a conference held in Oxford some years back, a group of risk researchers from across the globe put the likelihood of such an event at 19%. “This is obviously disturbing, but it still means that there would be 81% chance of it not happening,” says Professor Nick Bostrom, the Institute’s director.

That hope—and challenge—drove Bostrom to establish the FHI in 2005. The Institute is devoted precisely to considering the unintended risks our technological progress could pose to our existence. The scenarios are complex and require forays into a range of subjects including physics, biology, engineering, and philosophy. “Trying to put all of that together with a detailed attempt to understand the capabilities of what a more mature technology would unleash—and performing ethical analysis on that—seemed like a very useful thing to do,” says Bostrom.


In that view, Bostrom found an ally in British-born technology consultant and author James Martin. In 2004, Martin had donated approximately $90 million—one of the biggest single donations ever made to the University of Oxford—to set up the Oxford Martin School. The school’s founding aim was to address the biggest questions of the 21st Century, and Bostrom’s vision certainly qualified. The FHI became part of the Oxford Martin School.

Before the FHI came into existence, not much had been done on an organised scale to consider where our rapid technological progress might lead us. Bostrom and his team had to cover a lot of ground. “Sometimes when you are in a field where there is as yet no scientific discipline, you are in a pre-paradigm phase: trying to work out what the right questions are and how you can break down big, confused problems into smaller sub-problems that you can then do actual research on,” says Bostrom.

Though the challenge might seem like a daunting task, researchers at the Institute have a host of strategies to choose from. “We have mathematicians, philosophers, and scientists working closely together,” says Bostrom. “Whereas a lot of scientists have kind of only one methodology they use, we find ourselves often forced to grasp around in the toolbox to see if there is some particular tool that is useful for the particular question we are interested in,” he adds. The diverse demands on their team enable the researchers to move beyond “armchair philosophising”—which they admit is still part of the process—and also incorporate mathematical modelling, statistics, history, and even engineering into their work.


Their multidisciplinary approach turns out to be incredibly powerful in the quest to identify the biggest threats to human civilisation. As Dr. Anders Sandberg, a computational neuroscientist and one of the senior researchers at the FHI explains: “If you are, for instance, trying to understand what the economic effects of machine intelligence might be, you can analyse this using standard economics, philosophical arguments, and historical arguments. When they all point roughly in the same direction, we have reason to think that that is robust enough.”

The end of humanity?

Using these multidisciplinary methods, FHI researchers are finding that the biggest threats to humanity do not, as many might expect, come from disasters such as super volcanoes, devastating meteor collisions or even climate change. It’s much more likely that the end of humanity will follow as an unintended consequence of our pursuit of ever more advanced technologies. The more powerful technology gets, the more devastating it becomes if we lose control of it, especially if the technology can be weaponized. One specific area Bostrom says deserves more attention is that of artificial intelligence. We don’t know what will happen as we develop machine intelligence that rivals—and eventually surpasses—our own, but the impact will almost certainly be enormous. “You can think about how the rise of our species has impacted other species that existed before—like the Neanderthals—and you realise that intelligence is a very powerful thing,” cautions Bostrom. “Creating something that is more powerful than the human species just seems like the kind of thing to be careful about.”


Nick Bostrom, Future of Humanity Institute Director

Far from being bystanders in the face of apocalypse, the FHI researchers are working hard to find solutions. “With machine intelligence, for instance, we can do some of the foundational work now in order to reduce the amount of work that remains to be done after the particular architecture for the first AI comes into view,” says Bostrom. He adds that we can indirectly improve our chances by creating collective wisdom and global access to information to allow societies to more rapidly identify potentially harmful new technological advances. And we can do more: “There might be ways to enhance biological cognition with genetic engineering that could make it such that if AI is invented by the end of this century, it might be done by a different, more competent brand of humanity,” speculates Bostrom.

Perhaps one of the most important goals of risk researchers for the moment is to raise awareness and stop humanity from walking headlong into potentially devastating situations. And they are succeeding. Policy makers and governments around the globe are finally starting to listen and actively seek advice from researchers like those at the FHI. In 2014, for instance, FHI researchers Toby Ord and Nick Beckstead wrote a chapter for the Chief Scientific Adviser’s annual report setting out how the government in the United Kingdom should evaluate and deal with existential risks posed by future technology. But the FHI’s reach is not limited to the United Kingdom. Sandberg also served on a World Economic Forum advisory board, giving guidance on the misuse of emerging technologies for the report, published this year, that concludes a decade of global risk research.

Despite the obvious importance of their work, the team are still largely dependent on private donations. Their multidisciplinary and necessarily speculative work does not easily fall into the traditional categories of priority funding areas drawn up by mainstream funding bodies. In presentations, Bostrom has been known to show a graph depicting academic interest in various topics, from dung beetles and Star Trek to zinc oxalate, all of which appear to receive far more scholarly attention than the FHI’s type of research into the continued existence of humanity. Bostrom laments this discrepancy between stakes and attention: “We can’t just muddle through and learn from experience and adapt. We have to anticipate and avoid existential risk. We only have one chance.”



It may seem gloomy to be faced every day with a graph that predicts the potential disasters that could befall us over the coming century, but the researchers at the FHI believe that such a simple visual aid can stimulate people to face up to the potentially negative consequences of technological advances.

Despite being concerned about potential pitfalls, the FHI researchers are quick to agree that technological progress has made our lives measurably better over the centuries, and neither Bostrom nor any of the other researchers suggest we should try to stop it. “We are getting a lot of good things here, and I don’t think I would be very happy living in the Middle Ages,” says Sandberg, who maintains an unflappable air of optimism. He’s confident that we can foresee and avoid catastrophe. “We’ve solved an awful lot of other hard problems in the past,” he says.

Technology is already embedded throughout our daily existence and its role will only increase in the coming years. But by helping us all face up to what this might mean, the FHI hopes to allow us not to be intimidated and instead take informed advantage of whatever advances come our way. How does Bostrom see the potential impact of their research? “If it becomes possible for humanity to be more reflective about where we are going and clear-sighted where there may be pitfalls,” he says, “then that could be the most cost-effective thing that has ever been done.”

CSER: Playing with Technological Dominoes

Playing with Technological Dominoes
Advancing Research in an Era When Mistakes Can Be Catastrophic
by Sophie Hebden
April 7, 2015


The new Centre for the Study of Existential Risk at Cambridge University isn’t really there, at least not as a physical place—not yet. For now, it’s a meeting of minds, a network of people from diverse backgrounds who are worried about the same thing: how new technologies could cause huge fatalities and even threaten our future as a species. But plans are coming together for a new phase for the centre to be in place by the summer: an on-the-ground research programme.


We learn valuable information by creating powerful viruses in the lab, but risk a pandemic if an accident releases it. How can we weigh the costs and benefits?

Ever since our ancestors discovered how to make sharp stones more than two and a half million years ago, our mastery of tools has driven our success as a species. But as our tools become more powerful, we could be putting ourselves at risk should they fall into the wrong hands—or if humanity loses control of them altogether. Concerned with bioengineered viruses, unchecked climate change, and runaway artificial intelligence? These are the challenges the Centre for the Study of Existential Risk (CSER) was founded to grapple with.

At its heart, CSER is about ethics and the value you put on the lives of future, unborn people. If we feel any responsibility to the billions of people in future generations, then a key concern is ensuring that there are future generations at all.

The idea for the CSER began as a conversation between a philosopher and a software engineer in a taxi. Huw Price, currently the Bertrand Russell Professor of Philosophy at Cambridge University, was on his way to a conference dinner in Copenhagen in 2011. He happened to share his ride with another conference attendee: Skype’s co-founder Jaan Tallinn.

“I thought, ’Oh that’s interesting, I’m in a taxi with one of the founders of Skype’ so I thought I’d better talk to him,” joked Price. “So I asked him what he does these days, and he explained that he spends a lot of his time trying to persuade people to pay more attention to the risk that artificial intelligence poses to humanity.”

“The overall goal of CSER is to write
a manual for managing and ameliorating
these sorts of risks in future.”
– Huw Price

In the past few months, numerous high-profile figures—including the founders of Google’s DeepMind machine-learning company and members of IBM’s Watson team—have been voicing concerns about the potential for high-level AI to cause unintended harms. But in 2011, it was startling for Price to find someone so embedded and successful in the computer industry taking AI risk seriously. He met privately with Tallinn shortly afterwards.

Plans came to fruition later at Cambridge when Price spoke to astronomer Martin Rees, the UK’s Astronomer Royal—a man well-known for his interest in threats to the future of humanity. The two made plans for Tallinn to come to the University to give a public lecture, enabling the three to meet. It was at that meeting that they agreed to establish CSER.

Price traces the start of CSER’s existence—at least online—to its website launch in June 2012. Under Rees’ influence, it quickly took on a broad range of topics, including the risks posed by synthetic biology, runaway climate change, and geoengineering.


Huw Price

“The overall goal of CSER,” says Price, painting the vision for the organisation with broad brush strokes, “is to write a manual, metaphorically speaking, for managing and ameliorating these sorts of risks in future.”

In fact, despite its rather pessimistic-sounding emphasis on risks, CSER is very much pro-technology: if anything, it wants to help developers and scientists make faster progress, declares Rees. “The buzzword is ’responsible innovation’,” he says. “We want more and better-directed technology.”

Its current strategy is to use all its reputational power—which is considerable, as a Cambridge University institute—to gather experts together to decide on what’s needed to understand and reduce the risks. Price is proud of CSER’s impressive set of board members, which includes the world-famous theoretical physicist Stephen Hawking, as well as world leaders in AI, synthetic biology and economic theory.

He is frank about the plan: “We deliberately built an advisory board with a strong emphasis on people who are extremely well-respected to counter any perception of flakiness that these risks can have.”

The plan is working, he says. “Since we began to talk about AI risk there’s been a very big change in attitude. It’s become much more of a mainstream topic than it was two years ago, and that’s partly thanks to CSER.”

Even on more well-known subjects, CSER calls attention to new angles and perspectives on problems. Just last month, it launched a monthly seminar series by hosting a debate on the benefits and risks of research into potential pandemic pathogens.

The seminar focused on a controversial series of experiments by researchers in the Netherlands and the US that aimed to make the bird flu virus H5N1 transmissible between humans. By adding mutations to the virus, they found that it could transmit through the air between ferrets—the standard animal model for human flu.

The answer isn’t “let’s shout at each
other about whether someone’s going
to destroy the world or not.” The right
answer is, “let’s work together to
develop this safely.”
– Sean O’hEigeartaigh, CSER Executive Director

Epidemiologist Marc Lipsitch of Harvard University presented his calculations of the ’unacceptable’ risk that such research poses, whilst biologist Derek Smith of Cambridge University, who was a co-author on the original H5N1 study, argued why such research is vitally important.

Lipsitch explained that although the chance of an accidental release of the virus is low, any subsequent pandemic could kill more than a billion people. When he combined that probability with those costs, he found that each laboratory doing a single year of such research carries an expected toll equivalent to at least 2,000 fatalities. He considers this risk unacceptable. Even if his estimate were off by a factor of 1,000, he later told me, the research would still be too dangerous.
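To make the shape of that arithmetic concrete, here is a minimal sketch of an expected-fatalities calculation. The inputs are hypothetical placeholders chosen only so the result lands near 2,000; only the billion-death pandemic scale comes from the article, and none of these figures are Lipsitch’s actual inputs.

```python
# Minimal sketch of an expected-fatalities estimate (all inputs are hypothetical
# placeholders; only the billion-death pandemic scale comes from the article).
p_release_per_lab_year = 2e-6    # assumed chance of an accidental release per lab-year
p_pandemic_given_release = 1.0   # worst case: assume any release seeds a pandemic
pandemic_deaths = 1e9            # "more than a billion people"

expected_fatalities = p_release_per_lab_year * p_pandemic_given_release * pandemic_deaths
print(f"Expected fatalities per lab-year: {expected_fatalities:,.0f}")  # -> 2,000
```

The point of such a calculation is not the particular numbers but the structure: a tiny probability multiplied by an enormous loss can still yield an expected toll large enough to rival other causes of death.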

Smith argued that we can’t afford not to do this research, that knowledge is power—in this case the power to understand the importance of the mutations and how effective our vaccines are at preventing further infections. Research, he said, is essential for understanding whether we need to start “spending millions on preparing for a pandemic that could easily arise naturally—for instance by stockpiling antiviral treatments or culling poultry in China.”

CSER’s seminar series brings the top minds to Cambridge to grapple with important questions like these. The ideas and relationships formed at such events grow into future workshops that then beget more ideas and relationships, and the network grows. Whilst its links across the Atlantic are strongest, CSER is also keen to pursue links with European researchers. “Our European links seem particularly interested in the bio-risk side,” says Price.


Sean O’hEigeartaigh

Germany’s scientific attaché approached CSER in October 2013, and in September 2014 CSER co-organised a meeting with the German government on existential risk. This led to two further workshops, one on managing risk in biotechnology and one on research into flu transmission—the latter hosted by Volkswagen in December 2014.

In addition to working with governments, CSER also plans to sponsor visits from researchers and leaders in industry, exchanging a few weeks of staff time for expert knowledge at the frontier of developments. It’s an interdisciplinary venture to draw together and share different innovators’ ideas about the extent and time-frames of risks. The larger the uncertainties, the bigger the role CSER can play in canvassing opinion and researching the risk.

“It’s fascinating to me when the really top experts disagree so much,” says Sean O’hEigeartaigh, CSER’s Executive Director. Some leading developers estimate that human-level AI will be achieved within 30-40 years, whilst others think it will take as long as 300 years. “When the stakes are so high, as they are for AI and synthetic biology, that makes it even more exciting,” he adds.

Despite its big vision and successes, CSER’s path won’t be easy. “There’s a misconception that if you set up a centre with famous people then the University just gives you money; that’s not what happens,” says O’hEigeartaigh.

Instead, they’ve had to work at it, and O’hEigeartaigh was brought on board in November 2012 to help grow the organisation. Through a combination of grants and individual donors, he has attracted enough funding to support three postdocs, who will be in place by the summer of 2015. Some major grants are in the works, and if all goes well, CSER will be a considerably larger team within the next year.

With a research team on the ground, Price envisions a network of subprojects working on different aspects: listening to experts’ concerns, predicting the timescales and risks more accurately through different techniques, and trying to reduce some of the uncertainties—even a small reduction will help.

Rees believes there’s still a lot of awareness-raising work to do ’front-of-house’: he wants to see the risks posed by AI and synthetic biology become as mainstream as climate change, but without so much of the negativity.

“The answer isn’t ’let’s shout at each other about whether someone’s going to destroy the world or not’,” says O’hEigeartaigh. “The right answer is, ’let’s work together to develop this safely’.” Remembering the animated conversations in the foyer that buzzed with excitement following CSER’s seminar, I feel optimistic: it’s good to know some people are taking our future seriously.

GCRI: Aftermath

Aftermath
Finding practical paths to recovery after a worldwide catastrophe.
by Steven Ashley
March 13, 2015



Tony Barrett
Global Catastrophic Risk Institute

OK, we survived the cataclysm. Now what?

In recent years, warnings by top scientists and industrialists have energized research into the sort of civilization-threatening calamities that are typically the stuff of sci-fi and thriller novels: asteroid impacts, supervolcanoes, nuclear war, pandemics, bioterrorism, even the rise of a super-smart, but malevolent artificial intelligence.

But what comes afterward? What happens to the survivors? In particular, what will they eat? How will they stay warm and find electricity? How will they rebuild and recover?

These “aftermath” issues comprise some of the largest points of uncertainty regarding humanity’s gravest threats, and as such they constitute some of the principal research focuses of the Global Catastrophic Risk Institute (GCRI), a nonprofit think tank that Seth Baum and Tony Barrett founded in late 2011. Baum, a New York City-based engineer and geographer, is GCRI’s executive director. Barrett, who serves as its director of research, is a senior risk analyst at ABS Consulting in Washington, DC, which performs probabilistic risk assessment and other services.

Black Swan Events

At first glance, it may sound like GCRI is making an awful lot of fuss about dramatic worst-case scenarios that are unlikely to pan out any time soon. “In any given year, there’s only a small chance that one of these disasters will occur,” Baum concedes. But the longer we wait, he notes, the greater the chance that we will experience one of these “Black Swan events” (so called because, until an explorer spotted a black swan in the seventeenth century, it was taken for granted that such birds did not exist). “We’re trying to instil a sense of urgency in governments and society in general that these risks need to be faced now to keep the world safe,” Baum says.
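To see why “only a small chance in any given year” still warrants urgency, here is a minimal sketch of how annual probabilities accumulate. The 0.1% yearly figure is an assumption chosen purely for illustration; it is not an estimate from Baum or GCRI.

```python
# A minimal illustration of why a small annual risk adds up over time.
# The 0.1% annual probability is an assumption for illustration only.
p_per_year = 0.001  # assumed chance of a global catastrophe in any single year

for years in (10, 50, 100):
    p_at_least_once = 1 - (1 - p_per_year) ** years
    print(f"Chance of at least one catastrophe within {years} years: {p_at_least_once:.1%}")
# -> roughly 1.0%, 4.9%, 9.5%
```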

GCRI’s general mission is to find ways to mobilize the world’s thinkers to identify the really big risks facing the planet, how they might cooperate for optimal effect, and the best approaches to addressing the threats. The institute has no physical base, but it serves as a virtual hub, assembling “the best empirical data and the best expert judgment,” and rolling them into risk models that can help guide our actions, Barrett says. Researchers, brought together through GCRI, often collaborate remotely. Judging the real risks posed by these low-odds, high-consequence events is no simple task, he says: “In most cases, we are dealing with extremely sparse data sets about occurrences that seldom, if ever, happened before.”


Feeding Everyone No Matter What
Following a cataclysm that blocks out the sun, what will survivors eat?
Credit: J M Gehrke

Beyond ascertaining which global catastrophes are most likely to occur, GCRI seeks to learn how multiple events might interact. For instance, could a nuclear disaster alter the climate enough to cut food supplies, while the accompanying loss of medical resources lets a pandemic take hold? “To best convey these all-too-real risks to various sectors of society, it’s not enough to merely characterize them,” Baum says. Tackling such multi-faceted scenarios requires an interdisciplinary approach that enables GCRI experts to recognize shared mitigation strategies that could enhance the chances of recovery, he adds.
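As a purely illustrative sketch of how such an interaction might be framed, consider a chain of conditional probabilities linking one catastrophe to the next. Every number below is a made-up placeholder, not a GCRI estimate, and a real risk model would be far more elaborate.

```python
# Toy sketch of how interacting catastrophes might be chained together.
# Every number here is a made-up placeholder, not a GCRI estimate.
p_nuclear_war = 0.01             # assumed annual probability of a large nuclear exchange
p_climate_shock_given_war = 0.5  # chance the exchange sharply disrupts the climate
p_famine_given_shock = 0.8       # chance the disruption cuts food supplies enough for famine
p_pandemic_given_famine = 0.3    # chance a weakened, under-resourced world also suffers a pandemic

p_full_cascade = (p_nuclear_war
                  * p_climate_shock_given_war
                  * p_famine_given_shock
                  * p_pandemic_given_famine)
print(f"Annual probability of the full cascade: {p_full_cascade:.2%}")  # -> 0.12%
```

Even a toy chain like this makes the interaction visible: the likelihood of each downstream disaster is conditioned on the ones that came before it.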

One of the more notable GCRI projects focuses on the aftermath of calamity. This analysis was conducted by research associate Dave Denkenberger, who is an energy efficiency engineer at Ecova, an energy and utility management firm in Durango, Colorado. Together with engineer Joshua M. Pearce, of Michigan Technological University in Houghton, he looked at a key issue: If one of these catastrophes does occur, how do we feed the survivors?

Worldwide, people currently eat about 1.5 billion tons of food a year. For their 2014 book, Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, the pair researched alternative food sources that could be ramped up within five years or fewer of a disaster involving a significant change in climate. In particular, the book looks at what could be done to feed the world should the climate suffer an abrupt, single-decade drop in temperature of about 10°C that wipes out crops regionally, reducing food supplies by 10 per cent. Such regional cooling has already occurred many times in the past.

Sun Block

Even more serious are scenarios that block the sun, which could cause a roughly 10°C temperature drop globally within a single year or so. Such a situation could arise should smoke or debris reach the stratosphere, whether from a nuclear exchange that burns big cities and triggers a nuclear winter, from an asteroid or comet impact, or from a supervolcano eruption such as the one that may someday occur at Yellowstone National Park.

These risks need to be faced
now to keep the world safe.
– Seth Baum

Other similar, though probably less likely, scenarios, Denkenberger says, might derive from the spread of some crop-killing organism—a highly invasive superweed, a superbacterium that displaces beneficial bacteria, a virulent pathogenic bacterium, or a super pest (an insect). Any of these might happen naturally, but they could be even more serious should they result from a coordinated terrorist attack.

“Our approach is to look across disciplines to consider every food source that’s not dependent on the sun,” Denkenberger explains. The book considers various ways of converting vegetation and fossil fuels to edible food. The simplest potential solution may be to grow mushrooms on the dead trees, “but you could do much the same by using enzymes or bacteria to partially digest the dead plant fiber and then feed it to animals,” he adds. Ruminants such as cows, sheep, and goats could do the honors, though faster-reproducing animals like rats, chickens, or beetles are more likely candidates.


Seth Baum
Global Catastrophic Risk Institute

A more exotic solution would be to use bacteria to digest natural gas into sugars, and then eat the bacteria. In fact, a Danish company called Unibio is making animal feed from commercially stranded methane now.

Meanwhile, the U.S. Department of Homeland Security is funding another GCRI project, one that assesses the risks posed by emerging technologies in synthetic biology and advanced robotics that might be co-opted by terrorists or criminals for use as weapons. “We’re trying to produce forecasts that estimate when these technologies might become available to potential bad actors,” Barrett says.

Focusing on such worst-case scenarios could easily dampen the spirits of GCRI’s researchers. But far from fretting, Baum says that he came to the world of existential risk (or ‘x-risk’) from his interest in the ethics of utilitarianism, which emphasizes actions aimed at maximizing total benefit to people and other sentient beings while minimizing suffering. As an engineering grad student, Baum even had a blog on utilitarianism. “Other people on the blog pointed out how the ethical views I was promoting implied a focus on the big risks,” he recalls. “This logic checked out and I have been involved with x-risks ever since.”

Barrett takes a somewhat more jaundiced view of his chosen career: “Oh yeah, we’re lots of fun at dinner parties…”