CRISPR, Gene Drive Technology, and Hope for the Future

The following article was written by John Min and George Church.

Imagine, for a moment, a world where we are able to perform genetic engineering on scales large enough to effectively engineer nature. In this world, parasites that cause only misery and suffering would not exist, agriculture would require only minimal pesticides and herbicides, and the environment would be better adapted to maximize positive interactions with all human activities while maintaining sustainability. While this may all sound like science fiction, the technology that might allow us to reach this utopia is very real, and if we develop it responsibly, this dream may well become reality.

‘Gene drive’ technology, or more specifically, CRISPR gene drives, has been heralded by the press as a potential solution for mosquito-borne diseases such as malaria, dengue, and most recently, Zika. In general, gene drive is a technology that allows scientists to bias the rate of inheritance of specific genes in wild populations of organisms. A gene is said to ‘drive’ when it increases the frequency of its own inheritance above the expected 50% probability. In doing so, gene drive systems exhibit an unprecedented ability to directly manipulate genes on a population-wide scale in nature.
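
To make “biasing inheritance” concrete, here is a toy calculation (ours, purely illustrative, not drawn from any of the studies cited below). It compares how an allele spreads under ordinary Mendelian inheritance with how it spreads when an idealized drive converts every heterozygote into a drive carrier:

```python
# Toy allele-frequency model of a gene drive (illustrative assumptions only).

def next_freq(q: float, homing_rate: float) -> float:
    """Drive-allele frequency after one generation of random mating.

    Under random mating, a fraction 2*q*(1-q) of offspring start out
    heterozygous; 'homing' converts them into drive homozygotes, which
    raises the allele frequency by q*(1-q)*homing_rate.
    """
    return q + q * (1 - q) * homing_rate

q_mendel, q_drive = 0.01, 0.01  # gene present in 1% of alleles initially
for gen in range(11):
    print(f"generation {gen:2d}: Mendelian {q_mendel:.3f}   drive {q_drive:.3f}")
    q_mendel = next_freq(q_mendel, homing_rate=0.0)  # ordinary 50/50 odds
    q_drive = next_freq(q_drive, homing_rate=1.0)    # perfect inheritance bias
```

With no drive the frequency simply stays put, while the idealized drive sweeps from 1% to more than 99% of alleles in roughly ten generations, which is what manipulating genes “on a population-wide scale” means in practice.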

The idea of using gene drive systems to propagate engineered genes in natural populations is not new. Indeed, a proposal to construct gene drives using naturally occurring homing nucleases, genes that can cut DNA at specific sites and insert extra copies of themselves, was published by Austin Burt in 2003 (Burt, 2003). In fact, the concept was discussed over half a century ago, even before the earliest studies on naturally driving genetic elements such as transposons, small sections of DNA that can insert extra copies of themselves (Serebrovskii, 1940; Vanderplank, 1944).

However, it is only with advances in modern genome-editing technology, such as CRISPR, that scientists are finally able to target gene drives to nearly any desired location in the genome. Ever since the first CRISPR gene drive design was described in a 2014 publication by Kevin Esvelt and George Church (Esvelt et al., 2014), man-made gene drive systems have been successfully tested in three separate species: yeast, fruit flies, and mosquitoes (DiCarlo et al., 2015; Gantz & Bier, 2015; Gantz et al., 2015).

The term ‘CRISPR’ stands for clustered regularly-interspaced short palindromic repeats and describes an adaptive immune system against viral infections originally discovered in bacteria. Nucleases in the CRISPR family, proteins that cut DNA, can generally cut nearly any DNA sequence specified by a short stretch of RNA, with high precision and accuracy.

The nuclease Cas9, in particular, has become a favorite among geneticists around the world since the publication of a series of high-impact journal articles in late 2012 and early 2013 (Jinek et al., 2012; Cong et al., 2013; Hwang et al., 2013). Using Cas9, scientists are able to create ‘double-stranded breaks,’ or cuts in DNA, at nearly any location specified by a 20-nucleotide piece of RNA sequence.

Once the DNA is cut, we can take advantage of natural DNA repair mechanisms to persuade cells to incorporate new genetic information into the break. This allows us to introduce new genes into an organism or even bar-code it at a genetic level. By using CRISPR technology, scientists are also able to insert synthesized gene drive systems into a host organism’s genome with the same high level of precision and reliability.
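
As a rough sketch of what “specified by a short stretch of RNA” means computationally, the following toy example searches a made-up DNA string for a 20-nucleotide guide sequence followed by the NGG “PAM” motif that the commonly used SpCas9 enzyme requires, with the cut landing about three bases before the PAM. All sequences here are invented, and real guide design involves far more than string matching:

```python
# Illustrative only: how a 20-nt guide specifies a Cas9 cut site.
import re

def find_cas9_sites(genome: str, guide: str):
    """Yield cut positions where the guide matches and is followed by an
    NGG PAM; SpCas9 cuts roughly 3 bp upstream of the PAM."""
    assert len(guide) == 20, "SpCas9 guides target a 20-nt sequence"
    for m in re.finditer(guide + "[ACGT]GG", genome):
        yield m.start() + 17  # double-stranded break ~3 bp before the PAM

genome = "TTACGGATCCGATTACAGGCTGCATGGCCGTAAGTCCA"  # made-up sequence
guide = "GGATCCGATTACAGGCTGCA"                    # hypothetical 20-nt guide
print(list(find_cas9_sites(genome, guide)))       # -> [21]
```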

Potential applications for CRISPR gene drives are broad and numerous, as the technology is expected to work in any organism that reproduces sexually.

While popular media attention is chiefly focused on the elimination of mosquito-borne diseases, applications also exist in the fight against the rise of Lyme disease in the U.S. Beyond public health, gene drives could be used to eliminate invasive species from non-native habitats, such as mosquitoes in Hawaii. Many native Hawaiian bird species, especially the honeycreepers, are being driven to extinction by mosquito-borne avian malaria. The removal of mosquitoes from Hawaii would both save the native bird populations and make Hawaii even more attractive as a tropical paradise for tourists.

With such rapid expansion of gene drive technology over the past year, it is only natural for there to be some concern and fear over attempting to genetically engineer nature at such a large scale. The only way to truly address these fears is to rigorously test the spreading properties of various gene drive designs within the safety of the laboratory — something that has also been in active development over the last year.

It is also important to remember that mankind has been actively engineering the world around us since the dawn of civilization, albeit with more primitive tools. Using a mixture of breeding and mechanical tools, we have transformed teosinte into modern corn, created countless breeds of dogs and cats, and converted vast stretches of everything from lush forests to deserts into modern farmland.

Yet these amazing feats have not come without consequence. Most products of our breeding techniques are unable to survive independently in nature, and countless species have gone extinct as a result of our agricultural expansion and eco-engineering.

It is imperative that we approach gene drives differently, with increased consideration for the consequences of our actions on both the natural world and ourselves. Proponents of gene drive technology would like to initiate a new research paradigm centered on collective decision making. As most members of the public will inevitably be affected by a gene drive release, it is only ethical to include the public throughout the research and decision-making process of gene drive development. Furthermore, by being transparent and inviting public criticism, researchers can crowd-source the “de-bugging” process and minimize the risk of a gene drive release going awry.

We must come to terms with the reality that thousands of acres of habitat continue to be destroyed annually by chemical sprays, urban and agricultural expansion, and the introduction of invasive species, to name just a few causes. To improve upon this, I would like to echo the hopes of my mentor, Kevin Esvelt, for “more science, and fewer bulldozers for environmental engineering,” in hopes of creating a more sustainable co-existence between man and nature. The recent advancements in CRISPR gene drive technology represent an important step toward this hopeful future.

 

About the author: John Min is a PhD candidate in the BBS program at Harvard Medical School, co-advised by Professor George Church and by Professor Kevin Esvelt at the MIT Media Lab. He is currently working on creating a laboratory model for gene drive research.

 

References

Burt, A. (2003). Site-specific selfish genes as tools for the control and genetic engineering of natural populations. Proceedings of the Royal Society B: Biological Sciences, 270, 921-928.

Cong, L., Ran, F. A., Cox, D., Lin, S., Barretto, R., Habib, N., . . . Zhang, F. (2013). Multiplex genome engineering using CRISPR/Cas systems. Science, 339, 819-823.

DiCarlo, J. E., Chavez, A., Dietz, S. L., Esvelt, K. M., & Church, G. M. (2015). RNA-guided gene drives can efficiently and reversibly bias inheritance in wild yeast. bioRxiv preprint, DOI:10.1101/013896.

Esvelt, K. M., Smidler, A. L., Catteruccia, F., & Church, G. M. (2014). Concerning RNA-guided gene drives for the alteration of wild populations. eLife, 3, e03401.

Gantz, V. M., & Bier, E. (2015). The mutagenic chain reaction: A method for converting heterozygous to homozygous mutations. Science, 348, 442-444.

Gantz, V. M., Jasinskiene, N., Tatarenkova, O., Fazekas, A., Macias, V. M., Bier, E., & James, A. A. (2015). Highly efficient Cas9-mediated gene drive for population modification of the malaria vector mosquito Anopheles stephensi. PNAS, 112(49).

Hwang, W. Y., Fu, Y., Reyon, D., Maeder, M. L., Tsai, S. Q., Sander, J. D., . . . Joung, J. K. (2013). Efficient genome editing in zebrafish using a CRISPR-Cas system. Nature Biotechnology, 31, 227-229.

Jinek, M., Chylinski, K., Fonfara, I., Hauer, M., Doudna, J. A., & Charpentier, E. (2012). A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science, 337, 816-821.

Serebrovskii, A. (1940). On the possibility of a new method for the control of insect pests. Zool. Zh.

Vanderplank, F. (1944). Experiments in crossbreeding tsetse flies, Glossina species. Nature, 144, 607-608.

 

 

X-risk News of the Week: Nuclear Winter and a Government Risk Report

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

The big news this week landed squarely in the x-risk end of the spectrum.

First up was a New York Times op-ed titled “Let’s End the Peril of a Nuclear Winter,” written by climate scientists Alan Robock and Owen Brian Toon. In it, they describe the horrors of nuclear winter — the frigid temperatures, the starvation, and the mass deaths — that could terrorize the entire world if even a small nuclear war broke out in one tiny corner of the globe.

Fear of nuclear winter was one of the driving forces that finally led leaders of Russia and the US to agree to reduce their nuclear arsenals, and concerns about nuclear war subsided once the Cold War ended. However, recently, leaders of both countries have sought to strengthen their arsenals, and the threat of a nuclear winter is growing again. While much of the world struggles to combat climate change, the biggest risk could actually be that of plummeting temperatures if a nuclear war were to break out.

In an email to FLI, Robock said:

“Nuclear weapons are the greatest threat that humans pose to humanity. The current nuclear arsenal can still produce nuclear winter, with temperatures in the summer plummeting below freezing and the entire world facing famine. Even a ‘small’ nuclear war, using less than 1% of the current arsenal, can produce starvation of a billion people. We have to solve this problem so that we have the luxury of addressing global warming.”

 

Also this week, Director of National Intelligence James Clapper presented the Worldwide Threat Assessment of the US Intelligence Community for 2016 to the Senate Armed Services Committee. The document is 33 pages of potential problems the government is most concerned about in the coming year, a few of which fall into the category of existential risks:

  1. The Internet of Things (IoT). Though this doesn’t technically pose an existential risk, it does have the potential to impact quality of life and some of the freedoms we typically take for granted. The report states: “In the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.”
  2. Artificial Intelligence. Clapper’s concerns are broad in this field. He argues: “Implications of broader AI deployment include increased vulnerability to cyberattack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment. […] The increased reliance on AI for autonomous decision making is creating new vulnerabilities to cyberattacks and influence operations. […] AI systems are susceptible to a range of disruptive and deceptive tactics that might be difficult to anticipate or quickly understand. Efforts to mislead or compromise automated systems might create or enable further opportunities to disrupt or damage critical infrastructure or national security networks.”
  3. Nuclear. Under the category of Weapons of Mass Destruction (WMD), Clapper dedicated the most space to concerns about North Korea’s nuclear weapons. However, he also highlighted concerns about China’s work to modernize its nuclear weapons, and he argues that Russia violated the INF Treaty when it developed a ground-launch cruise missile.
  4. Genome Editing. Interestingly, gene editing was also listed in the WMD category. As Clapper explains, “Research in genome editing conducted by countries with different regulatory or ethical standards than those of Western countries probably increases the risk of the creation of potentially harmful biological agents or products.” Though he doesn’t explicitly refer to the CRISPR-Cas9 system, he does worry that the low cost and ease-of-use for new technologies will enable “deliberate or unintentional misuse” that could “lead to far reaching economic and national security implications.”

The report, though long, is an easy read, and it’s always worthwhile to understand what issues are motivating the government’s actions.

 

Given our new series by Matt Scherer about the legal complications of some of the anticipated AI and autonomous weapons developments, the big news should have been the headlines this week claiming that the federal government now considers AI drivers to be real drivers. Scherer, however, argues this is bad journalism. He provides his interpretation of the NHTSA letter in his recent blog post, “No, the NHTSA did not declare that AIs are legal drivers.”

 

While the headlines of the last few days may have veered toward x-risk, this week also marks the start of the 30th annual Association for the Advancement of Artificial Intelligence (AAAI) Conference. For almost a week, AI researchers will convene in Phoenix to discuss their developments and breakthroughs, and on Saturday, FLI grantees will present some of their research at the AI Ethics and Society Workshop. This is expected to be an event full of hope and excitement about the future!

 

X-risk News of the Week: Human Embryo Gene Editing

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

If you keep up with science news at all, then you saw the headlines splashed all over news sources on Monday: The UK has given researchers at the Francis Crick Institute permission to edit the genes of early-stage human embryos.

This is huge news, not only for genetics and biology, but for science as a whole. No researcher has ever before been granted permission to perform gene editing on viable human embryos.

The usual fears of designer babies and slippery slopes popped up, but as most of the general news sources reported, those fears are relatively unwarranted for this research. In fact, this project, which is led by Dr. Kathy Niakan, could arguably be closer to the existential hope side of the spectrum.

Niakan’s objective is to understand the first seven days of embryo development, and she’ll do so by using CRISPR to systematically sweep through genes in embryos that were donated from in vitro fertilization (IVF) procedures. While research in mice and other animals has given researchers an idea of the roles different genes play at those early stages of development, there are many genes that are uniquely human and can’t be studied in other animals. Many causes of infertility and miscarriage are thought to involve those genes during those very early stages of development, but we can only determine that through this kind of research.

Niakan explained to the BBC, “We would really like to understand the genes needed for a human embryo to develop successfully into a healthy baby. The reason why it is so important is because miscarriages and infertility are extremely common, but they’re not very well understood.”

It may be hard to see how preventing miscarriages could be bad, but this is a controversial research technique under normal circumstances, and Niakan’s request for approval came on the heels of human embryo research that did upset the world.

Last year, outrage swept through the scientific community after scientists in China chose to skip proper approval processes to perform gene-editing research on nonviable human embryos. Many prominent scientists in the field, including FLI’s Scientific Advisory Board Member George Church, responded by calling for a temporary moratorium on using the CRISPR/Cas-9 gene-editing tool in human embryos that would be carried to term.

An important distinction to make here is that Dr. Niakan went through all of the proper approval channels to start her research. Though the UK’s approval process isn’t quite as stringent as that of the US – which prohibits all research on viable embryos – the Human Fertilisation and Embryology Authority, which is the approving body, is still quite strict, insisting, among other things, that the embryos be destroyed after 14 days to ensure they can’t ever be taken to term. The team will also only use embryos that were donated with full consent by the IVF patients.

Max Schubert, a doctoral candidate in Dr. George Church’s lab at Harvard, explained that one of the reasons for the temporary moratorium was to give researchers time to study the effects of CRISPR first to understand how effective and safe it truly is. “I think [Niakan’s research] represents the kind of work that you need to do to understand the risks that those scientists are concerned about,” said Schubert.

John Min, also a PhD candidate in Dr. Church’s lab, pointed out that the knowledge we could gain from this research will very likely lead to medications and drugs that can be used to help prevent miscarriages, and that the final treatment could very possibly not involve any type of gene editing at all. This would eliminate, or at least limit, concerns about genetically modified humans.

Said Min, “This is a case that illustrates really well the potential of CRISPR technology … CRISPR will give us the answers to [Niakan’s] questions much more cheaply and much faster than any other existing technology.”

An Explosion of CRISPR Developments in Just Two Months

 

A Battle Is Waged

A battle over CRISPR is raging through the halls of justice. Almost literally. Two of the key players in the development of the CRISPR technology, Jennifer Doudna and Feng Zhang, have turned to the court system to determine which of them should receive patents for the discovery of the technology. The fight went public in January and was amplified by the release of an article in Cell that many argued presented a one-sided version of the history of CRISPR research. Yet, among CRISPR’s most amazing feats is not its history, but how rapidly progress in the field is accelerating.


A CRISPR Explosion

CRISPR, which stands for clustered regularly-interspaced short palindromic repeats, is DNA used in the immune systems of prokaryotes. The system relies on the Cas9 enzyme* and guide RNAs to find specific, problematic segments of a gene and cut them out. Just three years ago, researchers discovered that this same technique could be applied to humans. As the accuracy, efficiency, and cost-effectiveness of the system became more and more apparent, researchers and pharmaceutical companies jumped on the technique, modifying it, improving it, and testing it on different genetic issues.

Then, in 2015, CRISPR really exploded onto the scene, earning recognition as the top scientific breakthrough of the year by Science Magazine. But not only is the technology not slowing down, it appears to be speeding up. In just two months — from mid-November, 2015 to mid-January, 2016 — ten major CRISPR developments (including the patent war) have grabbed headlines. More importantly, each of these developments could play a crucial role in steering the course of genetics research.

 

Malaria



CRISPR made big headlines in late November of 2015, when researchers announced they could possibly eliminate malaria by using the gene-editing technique to start a gene drive in mosquitoes. A gene drive occurs when a preferred version of a gene replaces the unwanted version in every case of reproduction, overriding Mendelian genetics, which says that each of the two copies of a gene has an equal chance of being passed on to the next generation. Gene drives had long been a theory, but there was no way to practically apply the theory. Then along came CRISPR. With this new technology, researchers at UC campuses in Irvine and San Diego were able to create an effective gene drive against malaria in mosquitoes in their labs. Because mosquitoes are known to transmit malaria, a gene drive in the wild could potentially eradicate the disease very quickly. More research is necessary, though, to ensure the effectiveness of the technique and to try to prevent any unanticipated negative effects that could occur if we permanently alter the genes of a species.

 

Muscular Dystrophy

A few weeks later, just as 2015 was coming to an end, the New York Times reported that three different groups of researchers announced they’d successfully used CRISPR in mice to treat Duchenne muscular dystrophy (DMD), which, though rare, is among the most common fatal genetic diseases. With DMD, boys have a gene mutation that prevents the creation of a specific protein necessary to keep muscles from deteriorating. Patients are typically in wheelchairs by the time they’re ten, and they rarely live past their twenties due to heart failure. Scientists have often hoped this disease was one that would be well suited for gene therapy, but locating and removing the problematic DNA has proven difficult. In a new effort, researchers loaded CRISPR onto a harmless virus and injected it either into mouse fetuses or into the diseased mice to remove the mutated section of the gene. While the DMD mice didn’t achieve the same levels of muscle mass seen in the control mice, they still showed significant improvement.

Writing for Gizmodo, George Dvorsky said, “For the first time ever, scientists have used the CRISPR gene-editing tool to successfully treat a genetic muscle disorder in a living adult mammal. It’s a promising medical breakthrough that could soon lead to human therapies.”

 

Blindness

Only a few days after the DMD story broke, researchers from the Cedars-Sinai Board of Governors Regenerative Medicine Institute announced progress they’d made treating retinitis pigmentosa, an inherited retinal degenerative disease that causes blindness. Using the CRISPR technology on affected rats, the researchers were able to clip the problematic gene, which, according to the abstract in Molecular Therapy, “prevented retinal degeneration and improved visual function.” As Shaomei Wang, one of the scientists involved in the project, explained in the press release, “Our data show that with further development, it may be possible to use this gene-editing technique to treat inherited retinitis pigmentosa in patients.” This is an important step toward using CRISPR in people, and it follows soon on the heels of news that came out in November from the biotech startup Editas Medicine, which hopes to use CRISPR in people by 2017 to treat another rare genetic condition that also causes blindness, Leber congenital amaurosis.

 

Gene Control

January saw another major development as scientists announced that they’d moved beyond using CRISPR to edit genes and were now using the technique to control genes. In this case, the Cas9 enzyme is essentially dead, such that, rather than clipping the gene, it acts as a transport for other molecules that can manipulate the gene in question. This progress was written up in The Atlantic, which explained: “Now, instead of a precise and versatile set of scissors, which can cut any gene you want, you have a precise and versatile delivery system, which can control any gene you want. You don’t just have an editor. You have a stimulant, a muzzle, a dimmer switch, a tracker.” There are countless benefits this could have, from boosting immunity to improving heart muscles after a heart attack. Or perhaps we could finally cure cancer. What better solution to a cell that’s reproducing uncontrollably than a system that can just turn it off?

 

CRISPR Control or Researcher Control

But just how much control do we really have over the CRISPR-Cas9 system once it’s been released into a body? Or, for that matter, how much control do we have over scientists who might want to wield this new power to create the ever-terrifying “designer baby”?


The short answer to the first question is: There will always be risks. But not only is CRISPR-Cas9 incredibly accurate, scientists didn’t accept that as good enough, and they’ve been making it even more accurate. In December, researchers at the Broad Institute published the results of their successful attempt to tweak the RNA guides: they had decreased the likelihood of a mismatch between the gene that the RNA was supposed to guide to and the gene that it actually did guide to. Then, a month later, Nature published research out of Duke University, where scientists had tweaked another section of the Cas9 enzyme, making its cuts even more precise. And this is just a start. Researchers recognize that to successfully use CRISPR-Cas9 in people, it will have to be practically perfect every time.
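
To see why mismatches matter, consider a simplified sketch (our own illustration, not the Broad or Duke method). Potential off-target sites are genome windows that differ from the guide at only a few positions, so even a naive scan by Hamming distance surfaces them; the engineering work described above amounts to making the real enzyme reject such near-misses. The sequences below are invented:

```python
# Toy off-target scan: find near-matches to a guide by Hamming distance.

def hamming(a: str, b: str) -> int:
    """Count positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def near_matches(genome: str, guide: str, max_mismatches: int = 3):
    """Yield (position, mismatches) for every window within tolerance."""
    k = len(guide)
    for i in range(len(genome) - k + 1):
        d = hamming(genome[i:i + k], guide)
        if d <= max_mismatches:
            yield i, d

genome = "ACGTGGATCCGTTTACAGGCTGCAACGGATCCGATTACAGGCTGCA"  # made up
guide = "GGATCCGATTACAGGCTGCA"                             # hypothetical
for pos, d in near_matches(genome, guide):
    label = "on-target" if d == 0 else f"{d}-mismatch off-target"
    print(f"position {pos}: {label}")
```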

But that raises the second question: Can we trust all scientists to do what’s right? Unfortunately, this question was asked in response to research out of China in April, in which scientists used CRISPR to attempt to genetically modify non-viable human embryos. While the results proved that we still have a long way to go before the technology will be ready for real human testing, the fact that the research was done at all raised red flags and hackles among genetics researchers and the press. These questions may have popped up back in March and April of 2015, but the official response came at the start of December, when geneticists, biologists and doctors from around the world convened in Washington, D.C. for the International Summit on Human Gene Editing. Ultimately, though, the results of the summit were vague, essentially encouraging scientists to proceed with caution, but without any outright bans. However, at this stage of research, the benefits of CRISPR likely outweigh the risks.

 

Big Pharma



“Proceed with caution” might be just the right advice for pharmaceutical companies that have jumped on the CRISPR bandwagon. With so many amazing possibilities to improve human health, it comes as no surprise that companies are betting, er, investing big money into CRISPR. Hundreds of millions of dollars flooded the biomedical start-up industry throughout 2015, with most going to two main players, Editas Medicine and Intellia Therapeutics. Then, in the middle of December, Bayer announced a joint venture with CRISPR Therapeutics to the tune of $300 million. That’s three major pharmaceutical players hoping to win big with a CRISPR gamble. But just how big of a gamble can such an impressive technology be? Well, every company is required to license the patent for a fee, but right now, because of the legal battles surrounding CRISPR, the original patents (which the companies have already licensed) have been put on hold while the courts try to figure out who is really entitled to them. If the patents change ownership, that could be a big game-changer for all of the biotech companies that have invested in CRISPR.

 

Upcoming Concerns?

On January 14, the UK’s Human Fertilisation and Embryology Authority began reviewing a request by the Francis Crick Institute to begin gene-editing research on human embryos. While Britain’s requirements on human embryo testing are more lax than those of the U.S. — which has a complete ban on genetically modifying any human embryos — the British are still strict, requiring that the embryo be destroyed after the 14th day. The institute requested a license to begin research on day-old, “spare” IVF embryos to develop a better understanding of why some embryos die at early stages in the womb, in an attempt to decrease the number of miscarriages women have. This germ-line editing research is, of course, now possible because of the recent CRISPR breakthroughs. If this research is successful, The Independent argues, “it could lead to pressure to change the existing law to allow so-called ‘germ-line’ editing of embryos and the birth of GM children.” However, Dr. Kathy Niakan, the lead researcher on the project, insists this will not create a slippery slope to “designer babies.” As she explained to the Independent: “Because in the UK there are very tight regulations in this area, it would be completely illegal to move in that direction. Our research is in line with what is allowed and in-keeping in the UK since 2009, which is purely for research purposes.”

Woolly Mammoths

Woolly Mammoths! What better way to end an article about how CRISPR can help humanity than with the news that it can also help bring back species that have gone extinct? OK, admittedly, the news that George Church wants to resurrect the woolly mammoth has been around since last spring. But the Huffington Post did a feature about his work in December, and it turns out his research has advanced enough that he predicts the woolly mammoth could return in as little as seven years. This won’t be a true woolly mammoth, though; it will actually be an Asian elephant boosted by woolly mammoth DNA. Among the goals of the project is to help prevent the extinction of the Asian elephant, and woolly mammoth DNA could help achieve that. The idea is that a hybrid elephant would be able to survive more successfully as the climate changes. If this works, the method could be applied to other plant and animal species to increase stability and decrease extinction rates. As Church tells the Huffington Post, “the fact is we’re not bringing back species — [we’re] strengthening existing species.”


And what more could we ask of genetics research than to strengthen a species?

*Cas9 is only one of the enzymes that can work with the CRISPR system, but researchers have found it to be the most accurate and efficient.

The Wisdom Race Is Heating Up

There’s a race going on that will determine the fate of humanity. Just as it’s easy to miss the forest for all the trees, however, it’s easy to miss this race for all the scientific news stories about breakthroughs and concerns. What do all these headlines from 2015 have in common?

“AI masters 49 Atari games without instructions”
“Self-driving car saves life in Seattle”
“Pentagon Seeks $12Bn for AI Weapons”
“Chinese Team Reports Gene-Editing Human Embryos”
“Russia building Dr. Strangelove’s Cobalt bomb”

They are all manifestations of the aforementioned race heating up: the race between the growing power of technology and the growing wisdom with which we manage it. The power is growing because our human minds have an amazing ability to understand the world and to convert this understanding into game-changing technology. Technological progress is accelerating for the simple reason that breakthroughs enable other breakthroughs: as technology gets twice as powerful, it can often be used to design and build technology that is twice as powerful in turn, triggering repeated capability doubling in the spirit of Moore’s law.

What about the wisdom ensuring that our technology is beneficial? We have technology to thank for all the ways in which today is better than the Stone Age, but this is thanks not only to the technology itself but also to the wisdom with which we use it. Our traditional strategy for developing such wisdom has been learning from mistakes: We invented fire, then realized the wisdom of having fire alarms and fire extinguishers. We invented the automobile, then realized the wisdom of having driving schools, seat belts and airbags.

In other words, it was OK for wisdom to sometimes lag behind in the race, because it would catch up when needed. With more powerful technologies such as nuclear weapons, synthetic biology and future strong artificial intelligence, however, learning from mistakes is not a desirable strategy: we want to develop our wisdom in advance so that we can get things right the first time, because that might be the only time we’ll have. In other words, we need to change our approach to tech risk from reactive to proactive. Wisdom needs to progress faster.

This year’s Edge Question “What is the most interesting recent news and what makes it important?” is cleverly ambiguous, and can be interpreted either as a call to pick a news item or as asking about the very definition of “interesting and important news.” If we define “interesting” in terms of clicks and Nielsen ratings, then top candidates must involve sudden change of some sort, whether it be a discovery or a disaster. If we instead define “interesting” in terms of importance for the future of humanity, then our top list should include even developments too slow to meet journalists’ definition of “news,” such as “Globe keeps warming.” In that case, I’ll put the fact that the wisdom race is heating up at the very top of my list. Why?

From my perspective as a cosmologist, something remarkable has just happened: after 13.8 billion years, our universe has finally awoken, with small parts of it becoming self-aware, marveling at the beauty around them, and beginning to decipher how their universe works. We, these self-aware life forms, are using our new-found knowledge to build technology and modify our universe on ever grander scales.

This is one of those stories where we get to pick our own ending, and there are two obvious ones for humanity to choose between: either win the wisdom race and enable life to flourish for billions of years, or lose the race and go extinct. To me, the most important scientific news is that after 13.8 billion years, we finally get to decide—probably within centuries or even decades.

Since the decision about whether to win the race sounds like such a no-brainer, why are we still struggling with it? Why is our wisdom for managing technology so limited that we didn’t do more about climate change earlier, and have come close to accidental nuclear war over a dozen times? As Skype-founder Jaan Tallinn likes to point out, it is because our incentives drove us to a bad Nash equilibrium. Many of humanity’s most stubborn problems, from destructive infighting to deforestation, overfishing and global warming, have this same root cause: when everybody follows the incentives they are given, it results in a worse situation than cooperation would have enabled.

Understanding this problem is the first step toward solving it. The wisdom we need to avoid lousy Nash equilibria must be developed at least in part by the social sciences, to help create a society where individual incentives are aligned with the welfare of humanity as a whole, encouraging collaboration for the greater good. Evolution endowed us with compassion and other traits to foster collaboration, and when more complex technology made these evolved traits inadequate, our forebears developed peer pressure, laws and economic systems to steer their societies toward good Nash equilibria. As technology gets ever more powerful, we need ever stronger incentives for those who develop, control and use it to make its beneficial use their top priority.

Although the social sciences can help, plenty of technical work is needed as well in order to win the race. Biologists are now studying how to best deploy (or not) tools such as CRISPR genome editing. 2015 will be remembered as the year when the beneficial AI movement went mainstream, engendering productive symposia and discussions at all the largest AI-conferences. Supported by many millions of dollars in philanthropic funding, large numbers of AI-researchers around the world have now started researching the fascinating technical challenges involved in keeping future AI-systems beneficial. In other words, the laggard in the all-important wisdom race gained significant momentum in 2015! Let’s do all we can to make future top news stories be about wisdom winning the race, because then we all win.

This article was originally posted on Edge.org in response to the question: “What do you consider the most interesting recent [scientific] news? What makes it important?”

2015: An Amazing Year in Review

Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…

 

In the beginning


Participants and attendees of the inaugural Puerto Rico conference.

2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.

At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.

 

Forging ahead

On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.

On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.


All five FLI founders flanked by other beneficial-AI enthusiasts. From left to right, top to bottom: Stuart Russell, Jaan Tallinn, Janos Kramar, Anthony Aguirre, Max Tegmark, Nick Bostrom, Murray Shanahan, Jesse Galef, Michael Vassar, Nate Soares, Viktoriya Krakovna, Meia Chita-Tegmark and Katja Grace.

Less than a month later, we published another open letter, this time advocating for a global ban on offensive autonomous weapons development. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah garnered more support and signatories by engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute), six past presidents of the AAAI, and over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.

This was followed by an open letter about the economic impacts of AI, which was spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.

Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.

 

Other major events

Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.

Max spoke again, at the start of March, this time for the Helen Caldicott Nuclear Weapons Conference, about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled, “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, MIT Effective Altruism, and the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.

In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director, Nate Soares, participated in a panel discussion about the risks and policy implications of AI (video here).


Concerns about autonomous weapons led to an open letter calling for a ban.

Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.

Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay Area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings, which aimed to bring researchers and FLI grant awardees together to help strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.

September saw FLI and CSER co-organize an event at the Policy Exchange in London, where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety with the scientifically minded in Westminster, including many British members of parliament.

Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member, Stephen Hawking released his answers to the Reddit “Ask Me Anything” (AMA) about artificial intelligence.

Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research that determines which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.

Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events.  The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.

 


Richard Mallah speaking at the third panel discussion of the NIPS symposium.

In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.

 

In the Press

We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:

The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

You can find more media coverage of Elon Musk’s donation at Fast Company, TechCrunch, WIRED, Mashable, SlashGear, and BostInno.

Max, along with our Scientific Advisory Board member Stuart Russell and Eric Horvitz from Microsoft, was interviewed on NPR’s Science Friday about AI safety.

Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.

Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.

Throughout the year, Victoria was interviewed by Popular Science, Engineering and Technology Magazine, Boston Magazine and Blog Talk Radio.

Meia Chita-Tegmark wrote five articles for the Huffington Post about artificial intelligence, including a Halloween story of nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two about artificial intelligence.

In addition we had a few extra special articles on our new website:

Nobel-prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.

On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.

 

A New Beginning

2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!

Happy New Year!


CRISPR to Be Used on People by 2017

We recently posted an article about what the CRISPR gene-editing technology is and why it’s been in the news lately, but there’s more big news to follow up with.

Highlights from the MIT Technology Review article:

While there has been much ado about how easy and effective CRISPR is in animals like mice, Katrine Bosley, CEO of the biotech startup Editas Medicine, has announced plans to use the gene-editing technology on people by 2017. The goal is to use CRISPR to help bring sight back to people who suffer from a very rare disease known as Leber congenital amaurosis. Only about 600 people in the US have the condition. Researchers know exactly which gene causes the disease, and because of its location in the eye, “doctors can inject treatment directly under the retina.”

Antonio Regalado, author of the article, writes: “Editas picked the disease in part because it is relatively easy to address with CRISPR, Bosley said. The exact gene error is known, and the eye is easy to reach with genetic treatments. ‘It feels fast, but we are going at the pace science allows,’ she said. There are still questions about how well gene-editing will work in the retina and whether side effects could be caused by unintentional changes to DNA.”

Editas will continue research in the lab and on animals before they attempt research on humans.

Read the full story here.

The Rise and Ethics of CRISPR

CRISPR.

The acronym is short for “clustered regularly interspaced short palindromic repeats,” which describes the structure of a specific type of gene sequence. CRISPR also represents a fast-growing gene-editing technology that could change the way we approach disease, farming and countless other fields related to genetics and biology. CRISPR researchers believe the process can be used to cure cancer, end malaria, eliminate harmful mutations, stem crop blights, and accomplish similar monumental feats.

The technology has been in and out of the news over the last few years, but in just the last week, it’s a topic that’s been covered by the New Yorker, the New York Times, Popular Science, Nature, the Washington Post, and even the Motley Fool.

Why is CRISPR grabbing the spotlight now?

The short answer is money. According to the Washington Post, venture capitalists have invested over $200 million into CRISPR technology in the last nine months alone. Meanwhile, the Motley Fool highlights the recent $105 million investment that Vertex Pharmaceuticals just put into CRISPR Therapeutics, which could ultimately be valued at $2.6 billion. Even a small biohacker crowdfunding project that sells CRISPR kits is showing signs of success.

But what is CRISPR and why is the pharmaceutical industry so interested?


CRISPR represents a cluster of DNA sequences that can identify and eliminate a specified gene out of another target DNA sequence and repair the targeted DNA as if nothing had been removed. Until CRISPR, gene editing was an arduous, time-consuming task that could take months or years. With the development of CRISPR technologies, these processes are much easier to perform and much faster — a matter of seconds, in some cases. Researchers have successfully used CRISPR to eliminate various diseases, such as sickle-cell anemia and muscular dystrophy, from animal genomes, which has naturally piqued interest in the pharmaceutical industry.

“Yet not since J. Robert Oppenheimer realized that the atomic bomb he built to protect the world might actually destroy it have the scientists responsible for a discovery been so leery of using it,” says the New Yorker journalist, Michael Specter.

The Ethics of CRISPR

As with any gene-editing technology, many researchers fear CRISPR almost as much as they admire it. Genetic engineering has had opponents for decades, as people worried about designer babies and cloned humans. Testing on human embryos is still a major concern, but as the possibility of forever eliminating genetic diseases grows, scientists must also consider ethical questions about which diseases are truly harmful (such as sickle-cell anemia) and which conditions merely represent the variety of humanity (such as Asperger’s or deafness).

Then there are the risks of irreversibly altering a gene sequence, only to learn later that the original was necessary. However, George Church, FLI Scientific Advisory Board member, told the New Yorker, “There are tons of technologies that are irreversible. But genetics is not one of them. In my lab, we make mutations all the time and then we change them back. Eleven generations from now, if we alter something and it doesn’t work properly we will simply fix it.”

Nonetheless, geneticists are taking action to ensure the technology remains safe. At the start of December, the U.S. National Academy of Sciences will host a three-day international summit with the Chinese Academy of Sciences and the U.K.’s Royal Society to discuss the ethical future of gene-editing technologies such as CRISPR. Experts from around the world will convene in Washington, D.C. to address these issues.

Nature has also posted an article recommending four actions that researchers can take to keep this type of gene-editing technology safe:

  1. “Establish a model regulatory framework that could be adopted internationally.”
  2. “Develop a road map for basic research.”
  3. “Engage people from all sectors of society in a debate about genome editing, including the use of human embryos in this research.”
  4. “Design tools and methods to enable inclusive and meaningful deliberation.”

The articles in the New Yorker, the New York Times and the Washington Post all provide excellent information for anyone interested in learning more about what CRISPR is, its risks and possibilities, and the researchers behind the science.

 

From Global News Canada: Former Greenpeace president supports biotechnology

Patrick Moore says biotechnology is one of the reasons farmers in Western Canada can feed more than a hundred people from a single farm. The former president of Greenpeace Canada says it’s one of the reasons he supports biotechnology, along with the use of pesticides and machinery in producing crops.

 

“Less than 100 years ago it took about 75 per cent of the population to grow the food for a country, and that’s still true in some African and Asian countries,” Moore told Global News.

“But here we’re growing enough food for the whole population and exporting a great deal at the same time with two to three per cent of the population. One Saskatchewan farmer is feeding 155 people today because of science and technology,” said Moore.

Read the full article.

The Power to Remake a Species

 


Once started, a carefully implemented gene drive could eradicate the entire malaria-causing Anopheles species of mosquito.

In 2013, some 200 million humans suffered from malaria, and an estimated 584,000 of them died, 90 percent in Africa. The vast majority of those killed were children under age 5. Decades of research have fallen short of a vaccine for this scourge. A powerful new technique that allows scientists to selectively edit entire genomes could provide a solution, but it also poses risks—and ethical questions science is only beginning to address.

The technique relies on a tool called a gene drive, something scientists have discussed since 2003 but which has only recently become possible. A gene drive greatly increases the odds that a particular gene will be inherited by all future generations. Genes occasionally evolve this ability naturally, but if we could engineer it deliberately, small interventions could have enormous impact, giving scientists the power to eradicate diseases, remove invasive species, and wholly remake the natural landscape.

One proposed use of a gene drive would alter the genetic code of a few mosquitoes that carry the malaria parasite, ensuring that the ‘Y’ chromosome would always be passed on. The result is a male-only line that systematically topples the population’s gender balance. Once started, a carefully implemented gene drive could eradicate the entire malaria-causing Anopheles species.

“Its advantage over vaccines is that you don’t have to go out and inject every person at risk,” says George Church, a geneticist at the Wyss Institute at Harvard Medical School. “You simply have to introduce a small number of mosquitoes into the wild, and they do all the work. They become your foot soldiers, or your cadre of nurses.”

“The question becomes ‘Should we?’ rather than ‘Can we?’
To what extent do scientists have the right to work on
problems where, if they screw up, it could affect us all?”
– Kevin Esvelt

But because gene drives spread the adaptation throughout an entire population, some scientists are concerned that the technology is advancing before we have a conversation about the best ways to use it wisely – and safely.

“Of all the species that cause human suffering, the malarial mosquito is arguably number one,” says Kevin Esvelt, a researcher at the Wyss Institute. “If a gene drive would allow us to eradicate malaria the way we eradicated smallpox, that’s a possibility we at least need to consider. At the same time, this raises questions of, who gets to decide? Given the urgency of problems like malaria, we should probably be talking about it now.”

The Machinery of Gene Drives


George Church, Wyss Institute
Harvard Medical School

Interest in gene drives’ potential has intensified since 2012, when scientists developed the gene-editing technique known as CRISPR (for DNA sequences called clustered regularly interspaced short palindromic repeats). Derived from a bacterial defense strategy, CRISPR is a search, cut-and-paste system that works in any cell. It uses an enzyme to home in on a specific nucleotide sequence, slice it, and replace it with others of the scientists’ choosing. CRISPR is cheap and precise, making gene drives viable.
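To make the “search” step concrete, here is a toy sketch in Python (my own illustration, not code from any real CRISPR design tool). It scans a DNA string for the 20-nucleotide target specified by a guide RNA, followed by the “NGG” PAM motif that cas9 requires before it cuts; all sequences are invented for the example.

    # Toy model of cas9's "search" step: find a 20-nt guide match followed
    # by an NGG PAM (any base, then two Gs). Sequences are invented.
    def find_cas9_sites(genome, guide):
        assert len(guide) == 20, "cas9 guides specify a 20-nucleotide target"
        sites = []
        for i in range(len(genome) - len(guide) - 2):
            has_pam = genome[i + 21:i + 23] == "GG"   # the "GG" of "NGG"
            if genome[i:i + 20] == guide and has_pam:
                sites.append(i)   # cas9 cuts ~3 bp upstream of the PAM
        return sites

    genome = "ATGCGTACGTTAGCATCGGATTATGGAGCTGGAGGTCA"
    guide = "CGTACGTTAGCATCGGATTA"   # hypothetical 20-nt guide
    print(find_cas9_sites(genome, guide))   # -> [3]

Real genomes run to billions of bases, and cas9 tolerates some mismatches between guide and target, which is exactly why the off-target cuts discussed below are a concern.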

In normal sexual reproduction, offspring inherit one of each parent’s two gene copies at random, so a new variant has only a 50% chance of being passed on. By encoding the CRISPR editing machinery in a genome along with whatever new trait you’d want to include, you would ensure that any offspring not only have the new mutation, but also the tools to give that same trait to the next generation, and so on. The gene then drives through an entire population exponentially.
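The arithmetic behind that exponential spread fits in a few lines of Python. The sketch below is a deliberately simplified model of my own: it assumes random mating, no fitness cost, and perfect “homing,” meaning every carrier converts its second gene copy and transmits the drive to all offspring; real drives are leakier.

    # Compare ordinary Mendelian inheritance with a gene drive. With homing
    # efficiency e, a heterozygous carrier transmits the drive with
    # probability (1 + e) / 2 instead of the usual 1/2.
    def next_freq(p, e):
        # Random mating: drive/drive parents (p * p) always transmit it;
        # drive/wild parents (2 * p * (1 - p)) transmit with prob (1 + e) / 2.
        return p * p + 2 * p * (1 - p) * (1 + e) / 2

    p_mendel = p_drive = 0.01   # the drive starts in 1% of alleles
    for gen in range(1, 11):
        p_mendel = next_freq(p_mendel, e=0.0)   # no drive: stays at 1%
        p_drive = next_freq(p_drive, e=1.0)     # perfect drive
        print(f"generation {gen:2}: mendelian {p_mendel:.3f}, drive {p_drive:.3f}")

Under these assumptions the drive allele roughly doubles in frequency each generation while it is rare and exceeds 99% by the ninth generation, while the ordinary allele never moves from 1%.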

“It would be as if in your family, all of your daughters and sons insisted that all of their daughters and sons would have the same last name. Then your name would spread throughout the population,” Church says.

Mosquitoes reproduce quickly, making them an ideal target for CRISPR modification, Esvelt says. If the mutation reduced mosquitoes’ offspring or rendered males sterile, the population could be wiped out in a single season—along with the parasite that causes malaria.

This would require more work to isolate the genes involved, however. The technique also needs to become more efficient, Church says. The enzyme that hunts down target DNA sequences sometimes misses its mark, which could introduce unintended—and harmful—changes in the genome that spread throughout the species.

Genetic Upgrades or Risks

These ethical and safety concerns came into stark relief in April, when Chinese scientists reported editing the genomes of human embryos. The embryos were already non-viable, meaning they would not have resulted in live birth. CRISPR-mediated changes to their chromosomes were unsuccessful, and resulted in several off-target mutations, according to the researchers, led by Junjiu Huang at Sun Yat-sen University in Guangzhou, China. The paper set off a firestorm of controversy and calls for a moratorium on such research, including from the National Academies and the White House Office of Science and Technology Policy.


As we develop the power to remake a species, the question
becomes how to best use it without causing a cascade
of unintended consequences.

“We should be very concerned about the prospect of using these gene editing techniques for altering traits that are passed on,” says Marcy Darnovsky, executive director of the Center for Genetics and Society, a California-based nonprofit. “If you think about how that could—or perhaps would—likely play out in the social, political, and commercial environment in which we all live, it’s easy to see how you could get into hot water pretty quickly. You could have wealthy people purchasing genetic upgrades for their children, for instance. It sounds like science fiction.”

Even if gene drives are only used in pests, ethical questions still loom large. Eliminating a whole mosquito species could make way for new pests, or disrupt the predators that feast on the insects, Esvelt says. And there could be human consequences, too. David Gurwitz, a neuroscientist at Tel Aviv University in Israel, wrote in an August 2014 letter to the journal Science that gene drives could also be used for nefarious purposes.

“Just as gene drives can make mosquitoes unfit for hosting and spreading the malaria parasite, they could conceivably be designed with gene drives carrying cargo for delivering lethal bacteria toxins to humans. Other scary scenarios, such as targeted attacks on major crop plants, could also be envisaged,” he wrote. He called for elements of CRISPR editing techniques to stay out of the scientific literature. In an email, he said he is “amazed at the lack of public discussion” to date on gene drive use.

Offense vs. Defense


Kevin Esvelt,
Wyss Institute
Harvard Medical School

In part because it is so inexpensive—the reagents and plasmid DNA used in CRISPR modification can be had for under $100, Church says—the method has spread through labs around the world like wildfire. “It’s hard to keep people from doing things that are simple and cheap,” he says. It has been shown to work in at least 30 organisms, according to Esvelt. As CRISPR use becomes more common, Esvelt, Church and their colleagues have aimed to develop ways to ensure its safety.

If one modified organism escapes from a lab and is able to breed with a wild relative, its altered gene would quickly spread through the entire population, making containment especially important. In April 2014, Esvelt, Church and a team of scientists published a commentary in Science suggesting methods for preventing accidental gene drive releases, such as conducting experiments on malarial mosquitoes in climates where no Anopheles relatives live. Esvelt also suggests CRISPR itself could be used to reverse an accidental release, by simply undoing the edit.

In April 2015, Church and colleagues and a separate team led by Alexis Rovner and Farren Isaacs of Yale University reported two new ways to generate modified organisms that could never survive outside a lab. Both approaches make the altered organism dependent on an unnatural amino acid that it could never obtain in the wild.

If one modified organism escapes from a lab
and is able to breed with a wild relative, its
altered gene would quickly spread through the
entire population, making containment
especially important.

Turning this concept inside out could yield engineered pests or weeds that succumb to natural substances that don’t harm anything else, Esvelt says. Instead of modifying crops to resist a broad-spectrum herbicide, for instance, gene drives could modify the weeds themselves: “You could create a vulnerability that did not previously exist to a compound that would not harm any other living thing.”

But any containment methods would have to follow the law, which remains murky, he says. Absent a national policy—which Darnovsky says should come from Congress—scientists should be talking about how, and when, CRISPR should be used.

“The question becomes ‘Should we?’ rather than ‘Can we?’” Esvelt says. “To what extent do scientists have the right to work on problems where, if they screw up, it could affect us all?”

Darnovsky, who notes that scientists have only just begun to understand the machinery of life as it evolved over millions of years, argues that scientists should not monopolize discussions about the use of CRISPR.

“We need to develop habits of mind, or habits of social interaction, that will allow for some very robust public participation on the use of these very powerful technologies,” she says. “It’s the future of life. It’s an issue that affects everybody.”

FHI: Putting Odds on Humanity’s Extinction

Putting Odds on Humanity’s Extinction
The Team Tasked With Predicting—and Preventing—Catastrophe
by Carinne Piekema
May 13, 2015


Not long ago, I drove off in my car to visit a friend in a rustic village in the English countryside. I didn’t exactly know where to go, but I figured it didn’t matter because I had my navigator at the ready. Unfortunately for me, as I got closer, the GPS signal became increasingly weak and eventually disappeared. I drove around aimlessly for a while without a paper map, cursing my dependence on modern technology.


It may seem gloomy to be faced with a graph that predicts the
potential for extinction, but the FHI researchers believe it can
stimulate people to start thinking—and take action.

But as technology advances over the coming years, the consequences of it failing could be far more troubling than getting lost. Those concerns keep the researchers at the Future of Humanity Institute (FHI) in Oxford occupied—and the stakes are high. In fact, visitors glancing at the white boards surrounding the FHI meeting area would be confronted by a graph estimating the likelihood that humanity dies out within the next 100 years. Members of the Institute have marked their personal predictions, ranging from optimistic to seriously pessimistic, with some estimating as high as a 40% chance of extinction. It’s not just the FHI members: at a conference held in Oxford some years back, a group of risk researchers from across the globe put the likelihood of such an event at 19%. “This is obviously disturbing, but it still means that there would be 81% chance of it not happening,” says Professor Nick Bostrom, the Institute’s director.

That hope—and challenge—drove Bostrom to establish the FHI in 2005. The Institute is devoted precisely to considering the unintended risks our technological progress could pose to our existence. The scenarios are complex and require forays into a range of subjects including physics, biology, engineering, and philosophy. “Trying to put all of that together with a detailed attempt to understand the capabilities of what a more mature technology would unleash—and performing ethical analysis on that—seemed like a very useful thing to do,” says Bostrom.

Far from being bystanders in the face
of apocalypse, the FHI researchers are
working hard to find solutions.

In that view, Bostrom found an ally in British-born technology consultant and author James Martin. In 2004, Martin had donated approximately US$90 million—one of the biggest single donations ever made to the University of Oxford—to set up the Oxford Martin School. The school’s founding aim was to address the biggest questions of the 21st century, and Bostrom’s vision certainly qualified. The FHI became part of the Oxford Martin School.

Before the FHI came into existence, not much had been done on an organised scale to consider where our rapid technological progress might lead us. Bostrom and his team had to cover a lot of ground. “Sometimes when you are in a field where there is as yet no scientific discipline, you are in a pre-paradigm phase: trying to work out what the right questions are and how you can break down big, confused problems into smaller sub-problems that you can then do actual research on,” says Bostrom.

Though the challenge might seem like a daunting task, researchers at the Institute have a host of strategies to choose from. “We have mathematicians, philosophers, and scientists working closely together,” says Bostrom. “Whereas a lot of scientists have kind of only one methodology they use, we find ourselves often forced to grasp around in the toolbox to see if there is some particular tool that is useful for the particular question we are interested in,” he adds. The diverse demands on their team enable the researchers to move beyond “armchair philosophising”—which they admit is still part of the process—and also incorporate mathematical modelling, statistics, history, and even engineering into their work.

“We can’t just muddle through and learn
from experience and adapt. We have to
anticipate and avoid existential risk.
We only have one chance.”
– Nick Bostrom

Their multidisciplinary approach turns out to be incredibly powerful in the quest to identify the biggest threats to human civilisation. As Dr. Anders Sandberg, a computational neuroscientist and one of the senior researchers at the FHI explains: “If you are, for instance, trying to understand what the economic effects of machine intelligence might be, you can analyse this using standard economics, philosophical arguments, and historical arguments. When they all point roughly in the same direction, we have reason to think that that is robust enough.”

The end of humanity?

Using these multidisciplinary methods, FHI researchers are finding that the biggest threats to humanity do not, as many might expect, come from disasters such as super volcanoes, devastating meteor collisions or even climate change. It’s much more likely that the end of humanity will follow as an unintended consequence of our pursuit of ever more advanced technologies. The more powerful technology gets, the more devastating it becomes if we lose control of it, especially if the technology can be weaponized. One specific area Bostrom says deserves more attention is that of artificial intelligence. We don’t know what will happen as we develop machine intelligence that rivals—and eventually surpasses—our own, but the impact will almost certainly be enormous. “You can think about how the rise of our species has impacted other species that existed before—like the Neanderthals—and you realise that intelligence is a very powerful thing,” cautions Bostrom. “Creating something that is more powerful than the human species just seems like the kind of thing to be careful about.”


Nick Bostrom, Future of Humanity Institute Director

Far from being bystanders in the face of apocalypse, the FHI researchers are working hard to find solutions. “With machine intelligence, for instance, we can do some of the foundational work now in order to reduce the amount of work that remains to be done after the particular architecture for the first AI comes into view,” says Bostrom. He adds that we can indirectly improve our chances by creating collective wisdom and global access to information to allow societies to more rapidly identify potentially harmful new technological advances. And we can do more: “There might be ways to enhance biological cognition with genetic engineering that could make it such that if AI is invented by the end of this century, it might be a different, more competent brand of humanity,” speculates Bostrom.

Perhaps one of the most important goals of risk researchers for the moment is to raise awareness and stop humanity from walking headlong into potentially devastating situations. And they are succeeding. Policy makers and governments around the globe are finally starting to listen and actively seek advice from researchers like those at the FHI. In 2014, for instance, FHI researchers Toby Ord and Nick Beckstead wrote a chapter for the Chief Scientific Adviser’s annual report setting out how the government in the United Kingdom should evaluate and deal with existential risks posed by future technology. But the FHI’s reach is not limited to the United Kingdom. Sandberg served on a World Economic Forum advisory board, giving guidance on the misuse of emerging technologies for a report, published this year, that concludes a decade of global risk research.

Despite the obvious importance of their work, the team is still largely dependent on private donations. Their multidisciplinary and necessarily speculative work does not easily fall into the traditional categories of priority funding areas drawn up by mainstream funding bodies. In presentations, Bostrom has been known to show a graph depicting academic interest in various topics, from dung beetles and Star Trek to zinc oxalate, all of which appear to receive far more attention than the FHI’s type of research into the continued existence of humanity. Bostrom laments this discrepancy between stakes and attention: “We can’t just muddle through and learn from experience and adapt. We have to anticipate and avoid existential risk. We only have one chance.”


“Creating something that is more powerful than the human
species just seems like the kind of thing to be careful about.”

It may seem gloomy to be faced every day with a graph that predicts the potential disasters that could befall us over the coming century, but instead, the researchers at the FHI believe that such a simple visual aid can stimulate people to face up to the potentially negative consequences of technological advances.

Despite being concerned about potential pitfalls, the FHI researchers are quick to agree that technological progress has made our lives measurably better over the centuries, and neither Bostrom nor any of the other researchers suggest we should try to stop it. “We are getting a lot of good things here, and I don’t think I would be very happy living in the Middle Ages,” says Sandberg, who maintains an unflappable air of optimism. He’s confident that we can foresee and avoid catastrophe. “We’ve solved an awful lot of other hard problems in the past,” he says.

Technology is already embedded throughout our daily existence and its role will only increase in the coming years. But by helping us all face up to what this might mean, the FHI hopes to allow us not to be intimidated and instead take informed advantage of whatever advances come our way. How does Bostrom see the potential impact of their research? “If it becomes possible for humanity to be more reflective about where we are going and clear-sighted where there may be pitfalls,” he says, “then that could be the most cost-effective thing that has ever been done.”

CSER: Playing with Technological Dominoes

Playing with Technological Dominoes
Advancing Research in an Era When Mistakes Can Be Catastrophic
by Sophie Hebden
April 7, 2015


The new Centre for the Study of Existential Risk at Cambridge University isn’t really there, at least not as a physical place—not yet. For now, it’s a meeting of minds, a network of people from diverse backgrounds who are worried about the same thing: how new technologies could cause mass casualties and even threaten our future as a species. But plans are coming together for a new phase, to be in place by the summer: an on-the-ground research programme.


We learn valuable information by creating powerful
viruses in the lab, but risk a pandemic if an accident
releases it. How can we weigh the costs and benefits?

Ever since our ancestors discovered how to make sharp stones more than two and a half million years ago, our mastery of tools has driven our success as a species. But as our tools become more powerful, we could be putting ourselves at risk should they fall into the wrong hands—or should humanity lose control of them altogether. Bioengineered viruses, unchecked climate change, runaway artificial intelligence: these are the challenges the Centre for the Study of Existential Risk (CSER) was founded to grapple with.

At its heart, CSER is about ethics and the value you put on the lives of future, unborn people. If we feel any responsibility to the billions of people in future generations, then a key concern is ensuring that there are future generations at all.

The idea for the CSER began as a conversation between a philosopher and a software engineer in a taxi. Huw Price, currently the Bertrand Russell Professor of Philosophy at Cambridge University, was on his way to a conference dinner in Copenhagen in 2011. He happened to share his ride with another conference attendee: Skype’s co-founder Jaan Tallinn.

“I thought, ’Oh that’s interesting, I’m in a taxi with one of the founders of Skype’ so I thought I’d better talk to him,” joked Price. “So I asked him what he does these days, and he explained that he spends a lot of his time trying to persuade people to pay more attention to the risk that artificial intelligence poses to humanity.”

“The overall goal of CSER is to write
a manual for managing and ameliorating
these sorts of risks in future.”
– Huw Price

In the past few months, numerous high-profile figures—including the founders of Google’s DeepMind machine-learning program and IBM’s Watson team—have been voicing concerns about the potential for high-level AI to cause unintended harms. But in 2011, it was startling for Price to find someone so embedded and successful in the computer industry taking AI risk seriously. He met privately with Tallinn shortly afterwards.

Plans came to fruition later at Cambridge when Price spoke to astronomer Martin Rees, the UK’s Astronomer Royal—a man well-known for his interest in threats to the future of humanity. The two made plans for Tallinn to come to the University to give a public lecture, enabling the three to meet. It was at that meeting that they agreed to establish CSER.

Price traces the start of CSER’s existence—at least online—to its website launch in June 2012. Under Rees’ influence, it quickly took on a broad range of topics, including the risks posed by synthetic biology, runaway climate change, and geoengineering.


Huw Price

“The overall goal of CSER,” says Price, painting the vision for the organisation with broad brush strokes, “is to write a manual, metaphorically speaking, for managing and ameliorating these sorts of risks in future.”

In fact, despite its rather pessimistic-sounding emphasis on risks, CSER is very much pro-technology: if anything, it wants to help developers and scientists make faster progress, declares Rees. “The buzzword is ’responsible innovation’,” he says. “We want more and better-directed technology.”

Its current strategy is to use all its reputational power—which is considerable, as a Cambridge University institute—to gather experts together to decide on what’s needed to understand and reduce the risks. Price is proud of CSER’s impressive set of board members, which includes the world-famous theoretical physicist Stephen Hawking, as well as world leaders in AI, synthetic biology and economic theory.

He is frank about the plan: “We deliberately built an advisory board with a strong emphasis on people who are extremely well-respected to counter any perception of flakiness that these risks can have.”

The plan is working, he says. “Since we began to talk about AI risk there’s been a very big change in attitude. It’s become much more of a mainstream topic than it was two years ago, and that’s partly thanks to CSER.”

Even on more well-known subjects, CSER calls attention to new angles and perspectives on problems. Just last month, it launched a monthly seminar series by hosting a debate on the benefits and risks of research into potential pandemic pathogens.

The seminar focused on a controversial series of experiments by researchers in the Netherlands and the US to try to make the bird flu virus H5N1 transmissible between humans. By adding mutations to the virus, they found it could transmit through the air between ferrets—the closest animal model to humans for studying flu.

The answer isn’t “let’s shout at each
other about whether someone’s going
to destroy the world or not.” The right
answer is, “let’s work together to
develop this safely.”
– Sean O’hEigeartaigh, CSER Executive Director

Epidemiologist Marc Lipsitch of Harvard University presented his calculations of the ’unacceptable’ risk that such research poses, whilst biologist Derek Smith of Cambridge University, who was a co-author on the original H5N1 study, argued why such research is vitally important.

Lipsitch explained that although the chance of an accidental release of the virus is low, any subsequent pandemic could kill more than a billion people. When he combined the risks with the costs, he found that each laboratory doing a single year of research is the equivalent of causing at least 2,000 fatalities. He considers this risk unacceptable. Even if he’s only right within a factor of 1,000, he later told me, then the research is too dangerous.
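The logic behind those numbers is a simple expected-value calculation. The sketch below is my reconstruction from the two figures quoted in this article, not Lipsitch’s published model; the implied probability is inferred from them, not a value he reported.

    # Expected-value reading of Lipsitch's argument, using only the figures
    # quoted above; the per-lab-year probability is inferred, not reported.
    pandemic_deaths = 1e9                 # "could kill more than a billion people"
    expected_deaths_per_lab_year = 2000   # "at least 2,000 fatalities" per lab-year

    implied_risk = expected_deaths_per_lab_year / pandemic_deaths
    print(f"implied chance per lab-year: {implied_risk:.0e}")   # -> 2e-06

Even if that probability were a thousand times smaller, the same arithmetic still yields two expected deaths per laboratory per year, which is the force of his “factor of 1,000” remark.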

Smith argued that we can’t afford not to do this research, that knowledge is power—in this case the power to understand the importance of the mutations and how effective our vaccines are at preventing further infections. Research, he said, is essential for understanding whether we need to start “spending millions on preparing for a pandemic that could easily arise naturally—for instance by stockpiling antiviral treatments or culling poultry in China.”

CSER’s seminar series brings the top minds to Cambridge to grapple with important questions like these. The ideas and relationships formed at such events grow into future workshops that then beget more ideas and relationships, and the network grows. Whilst its links across the Atlantic are strongest, CSER is also keen to pursue links with European researchers. “Our European links seem particularly interested in the bio-risk side,” says Price.


Sean O’hEigeartaigh

The scientific attaché to Germany’s government approached CSER in October 2013, and in September 2014 CSER co-organised a meeting with Germany on existential risk. This led to two other workshops on managing risk in biotechnology and research into flu transmission—the latter hosted by Volkswagen in December 2014.

In addition to working with governments, CSER also plans to sponsor visits from researchers and leaders in industry, exchanging a few weeks of staff time for expert knowledge at the frontier of developments. It’s an interdisciplinary venture to draw together and share different innovators’ ideas about the extent and time-frames of risks. The larger the uncertainties, the bigger the role CSER can play in canvassing opinion and researching the risk.

“It’s fascinating to me when the really top experts disagree so much,” says Sean O’hEigeartaigh, CSER’s Executive Director. Some leading developers estimate that human-level AI will be achieved within 30-40 years, whilst others think it will take as long as 300 years. “When the stakes are so high, as they are for AI and synthetic biology, that makes it even more exciting,” he adds.

Despite its big vision and successes, CSER’s path won’t be easy. “There’s a misconception that if you set up a centre with famous people then the University just gives you money; that’s not what happens,” says O’hEigeartaigh.

Instead, they’ve had to work at it, and O’hEigeartaigh was brought on board in November 2012 to help grow the organization. Through a combination of grants and individual donors, he has attracted enough funding to support three postdocs, who will be in place by the summer of 2015. Some major grants are in the works, and if all goes well, CSER will be a considerably larger team within the next year.

With a research team on the ground, Price envisions a network of subprojects working on different aspects: listening to experts’ concerns, predicting the timescales and risks more accurately through different techniques, and trying to reduce some of the uncertainties—even a small reduction will help.

Rees believes there’s still a lot of awareness-raising work to do ’front-of-house’: he wants to see the risks posed by AI and synthetic biology become as mainstream as climate change, but without so much of the negativity.

“The answer isn’t ’let’s shout at each other about whether someone’s going to destroy the world or not’,” says O’hEigeartaigh. “The right answer is, ’let’s work together to develop this safely’.” Remembering the animated conversations in the foyer that buzzed with excitement following CSER’s seminar, I feel optimistic: it’s good to know some people are taking our future seriously.

GCRI: Aftermath

Aftermath
Finding practical paths to recovery after a worldwide catastrophe.
by Steven Ashley
March 13, 2015



Tony Barrett
Global Catastrophic Risk Institute

OK, we survived the cataclysm. Now what?

In recent years, warnings by top scientists and industrialists have energized research into the sort of civilization-threatening calamities that are typically the stuff of sci-fi and thriller novels: asteroid impacts, supervolcanoes, nuclear war, pandemics, bioterrorism, even the rise of a super-smart, but malevolent artificial intelligence.

But what comes afterward? What happens to the survivors? In particular, what will they eat? How will they stay warm and find electricity? How will they rebuild and recover?

These “aftermath” issues comprise some of the largest points of uncertainty regarding humanity’s gravest threats, and as such they constitute some of the principal research focuses of the Global Catastrophic Risk Institute (GCRI), a nonprofit think tank that Seth Baum and Tony Barrett founded in late 2011. Baum, a New York City-based engineer and geographer, is GCRI’s executive director. Barrett, who serves as its director of research, is a senior risk analyst at ABS Consulting in Washington, DC, which performs probabilistic risk assessment and other services.

Black Swan Events

At first glance, it may sound like GCRI is making an awful lot of fuss about dramatic worst-case scenarios that are unlikely to pan out any time soon. “In any given year, there’s only a small chance that one of these disasters will occur,” Baum concedes. But the longer we wait, he notes, the greater the chance that we will experience one of these “Black Swan events” (so called because before a black swan was spotted by an explorer in the seventeenth century, it was taken for granted that these birds did not exist). “We’re trying to instil a sense of urgency in governments and society in general that these risks need to be faced now to keep the world safe,” Baum says.
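Baum’s point about waiting can be made precise: a risk that is small in any one year compounds over many. In the sketch below, the 0.1% annual figure is an illustrative assumption of mine, not a GCRI estimate.

    # Cumulative chance of at least one catastrophe, assuming a constant,
    # independent 0.1% annual probability (an illustrative number only).
    annual_p = 0.001
    for years in (10, 50, 100, 500):
        cumulative = 1 - (1 - annual_p) ** years
        print(f"over {years:3} years: {cumulative:.1%}")
    # over 100 years the chance is already ~9.5%; over 500 years, ~39%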

GCRI’s general mission is to find ways to mobilize the world’s thinkers to identify the really big risks facing the planet, how they might cooperate for optimal effect, and the best approaches to addressing the threats. The institute has no physical base, but it serves as a virtual hub, assembling “the best empirical data and the best expert judgment,” and rolling them into risk models that can help guide our actions, Barrett says. Researchers, brought together through GCRI, often collaborate remotely. Judging the real risks posed by these low-odds, high-consequence events is no simple task, he says: “In most cases, we are dealing with extremely sparse data sets about occurrences that seldom, if ever, happened before.”


Feeding Everyone No Matter What
Following a cataclysm that blocks out the sun, what will survivors eat?
Credit: J M Gehrke

Beyond ascertaining which global catastrophes are most likely to occur, GCRI seeks to learn how multiple events might interact. For instance, could a nuclear disaster lead to a change in climate that cuts food supplies while encouraging a pandemic caused by the loss of medical resources? “To best convey these all-too-real risks to various sectors of society, it’s not enough to merely characterize them,” Baum says. Tackling such multi-faceted scenarios requires an interdisciplinary approach that would enable GCRI experts to recognize potential shared mitigation strategies that could enhance the chances of recovery, he adds.

One of the more notable GCRI projects focuses on the aftermath of calamity. This analysis was conducted by research associate Dave Denkenberger, who is an energy efficiency engineer at Ecova, an energy and utility management firm in Durango, Colorado. Together with engineer Joshua M. Pearce, of Michigan Technological University in Houghton, he looked at a key issue: If one of these catastrophes does occur, how do we feed the survivors?

Worldwide, people currently eat about 1.5 billion tons of food a year. For a book published in 2014, Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, the pair researched alternative food sources that could be ramped up within five or fewer years following a disaster that involves a significant change in climate. In particular, the discussion looks at what could be done to feed the world should the climate suffer from an abrupt, single-decade drop in temperature of about 10°C that wipes out crops regionally, reducing food supplies by 10 per cent. This phenomenon has already occurred many times in the past.
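As a sanity check on those figures, the short calculation below converts the quoted annual total into a per-person daily ration and a five-year requirement. The population figure and the rounding are my own rough assumptions.

    # Order-of-magnitude check on "1.5 billion tons of food a year".
    population = 7e9               # rough mid-2010s world population (assumed)
    food_per_year_tons = 1.5e9     # figure quoted above

    kg_per_person_per_day = food_per_year_tons * 1000 / population / 365
    print(f"{kg_per_person_per_day:.2f} kg per person per day")   # ~0.59 kg: plausible

    five_year_need = 5 * food_per_year_tons
    print(f"{five_year_need:.1e} tons over five years")           # 7.5e+09 tons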

Sun Block

Even more serious are scenarios that block the sun, which could cause a 10°C temperature drop globally within a year or so. Such a situation could arise if smoke entered the stratosphere: the result of a nuclear exchange that burns big cities and triggers a nuclear winter, an asteroid or comet impact, or a supervolcano eruption such as may one day occur at Yellowstone National Park.

These risks need to be faced
now to keep the world safe.
– Seth Baum

Other similar, though probably less likely, scenarios, Denkenberger says, might derive from the spread of some crop-killing organism—a highly invasive superweed, a superbacterium that displaces beneficial bacteria, a virulent pathogenic bacterium, or a super pest (an insect). Any of these might happen naturally, but they could be even more serious should they result from a coordinated terrorist attack.

“Our approach is to look across disciplines to consider every food source that’s not dependent on the sun,” Denkenberger explains. The book considers various ways of converting vegetation and fossil fuels to edible food. The simplest potential solution may be to grow mushrooms on the dead trees, “but you could do much the same by using enzymes or bacteria to partially digest the dead plant fiber and then feed it to animals,” he adds. Ruminants such as cows, sheep, and goats, or, more likely, faster-reproducing animals like rats, chickens, or beetles could do the honors.


Seth Baum
Global Catastrophic Risk Institute

A more exotic solution would be to use bacteria to digest natural gas into sugars, and then eat the bacteria. In fact, a Danish company called Unibio is making animal feed from commercially stranded methane now.

Meanwhile, the U.S. Department of Homeland Security is funding another GCRI project that assesses the risks posed by the arrival of new technologies in synthetic biology or advanced robotics which might be co-opted by terrorists or criminals for use as weapons. “We’re trying to produce forecasts that estimate when these technologies might become available to potential bad actors,” Barrett says.

Focusing on such worst-case scenarios could easily dampen the spirits of GCRI’s researchers. But far from fretting, Baum says that he came to the world of existential risk (or ‘x-risk’) from his interest in the ethics of utilitarianism, which emphasizes actions aimed at maximizing total benefit to people and other sentient beings while minimizing suffering. As an engineering grad student, Baum even had a blog on utilitarianism. “Other people on the blog pointed out how the ethical views I was promoting implied a focus on the big risks,” he recalls. “This logic checked out and I have been involved with x-risks ever since.”

Barrett takes a somewhat more jaundiced view of his chosen career: “Oh yeah, we’re lots of fun at dinner parties…”

GCRI News Summaries

Here are the July and August global catastrophic risk news summaries, written by Robert de Neufville of the Global Catastrophic Risk Institute. The July summary covers the Iran deal, Russia’s new missile early warning system, dangers of AI, new Ebola cases, and more. The August summary covers the latest confrontation between North and South Korea, the world’s first low-enriched uranium storage bank, the “Islamic Declaration on Global Climate Change”, global food system vulnerabilities, and more.

Future of Life Institute Summer 2015 Newsletter

TOP DEVELOPMENTS

* $7M in AI research grants announced: We were delighted to announce the selection of 37 AI safety research teams to which we plan to award a total of $7 million in funding. The grant program is funded by Elon Musk and the Open Philanthropy Project.

Max Tegmark, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, was interviewed on NPR’s On Point Radio for a lively discussion about our new AI safety research program.

* Open letter about autonomous weapons: FLI recently published an open letter advocating a global ban on offensive autonomous weapons development. Thousands of prominent scientists and concerned individuals are signatories, including Stephen Hawking, Elon Musk, the team at DeepMind, Yann LeCun (Director of AI Research, Facebook), Eric Horvitz (Managing Director, Microsoft Research), Noam Chomsky and Steve Wozniak.

Stuart Russell was interviewed about the letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video).

* Open letter about economic impacts of AI: Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders have launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

 

EVENTS

* ITIF AI policy panel: Stuart Russell and MIRI Executive Director Nate Soares participated in a panel discussion about the risks and policy implications of AI (video here). The panel was hosted by the Information Technology & Innovation Foundation (ITIF), a Washington-based think tank focusing on the intersection of public policy & emerging technology.

* IJCAI 15: Stuart Russell presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

* EA Global conferences: FLI co-founders Viktoriya Krakovna and Anthony Aguirre spoke at the Effective Altruism Global (EA Global) conference at Google headquarters in Mountain View, California. FLI co-founder Jaan Tallinn spoke at the EA Global Oxford conference on August 28-30.

* Stephen Hawking AMA: Professor Hawking is hosting an “Ask Me Anything” (AMA) conversation on Reddit. Users recently submitted questions here; his answers will follow in the near future.

 

OTHER UPDATES

* FLI anniversary video: FLI co-founder Meia Tegmark created an anniversary video highlighting our accomplishments from our first year.

* Future of AI FAQ: We’ve created a FAQ about the future of AI, which elaborates on the position expressed in our first open letter about AI development from January.

Chinese Scientists Report Unsuccessful Attempt to Selectively Edit Disease Gene in Human Embryos

Researchers from Sun Yat-sen University in Guangzhou failed to selectively modify a single gene in unicellular human embryos using CRISPR/Cas9 technology, noting many off-target mutations. The study received a great deal of media and public attention (NYT, Nature, TIME), primarily because of previously expressed ethical concerns about human genetic modification.

The authors seem to have ignored the opinion of many scientists in the US, including the original developers of the CRISPR/Cas9 technology, who called for a pause in all human germline gene-editing studies until the risks and benefits can be assessed by the public and the research community. This shows that the international research community currently lacks the power to discourage potentially dangerous or ethically questionable research if national governments choose to support it. However, it is notable that the paper was rejected by Nature and Science (and possibly other journals), in part due to ethical considerations, and had to be published in the much less prestigious Chinese journal Protein & Cell. This is a reason for optimism: if Science, Nature and other high-impact journals can coordinate on this, they might be able to cooperate in other cases of research of concern, such as gain-of-function studies.

While some think the study shows that CRISPR gene editing has a long way to go before it is ready for use in humans, this seems unlikely to me. Previous studies in mice and, more importantly, in monkeys were very successful (in the case of monkeys, “no off-target mutagenesis was detected”). It seems more likely that the failure of the Chinese study was caused by the defective embryos – in an attempt to mitigate ethical concerns, the researchers used tripronuclear zygotes, which cannot develop normally. It may turn out that normal human embryos are much easier to modify, and given that, according to Nature, at least 4 other Chinese groups are working on similar problems, we may find out sooner than we might want.

April 2015 Newsletter

In the News

* The MIT Technology Review recently published a compelling overview of the possibilities surrounding AI, featuring Nick Bostrom’s Superintelligence and our open letter on AI research priorities.

+ For more news on our open letter, check out a thoughtful piece in Slate written by a colleague at the Future of Humanity Institute.

* FLI co-founder Meia Chita-Tegmark wrote a piece in the Huffington Post on public perceptions of AI and what it means for AI risk and research.

* Both Microsoft founder Bill Gates and Apple co-founder Steve Wozniak have recently joined the ranks of many AI experts and expressed concern about outcomes of superintelligent AI.

——————

Projects and Events

* We received nearly 300 applications for our global research program funded by Elon Musk! Thanks to hard work by a team of expert reviewers, we have now invited roughly a quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

* Looking for the latest in x-risk news? Check out our just-launched news site, featuring blog posts and articles written by x-risk researchers, journalists and FLI volunteers!

* On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorse the CWG statement on the Creation of Potential Pandemic Pathogens.

Feeding Everyone No Matter What

Dr David Denkenberger is a research associate at the Global Catastrophic Risk Institute, and is the co-author of Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, published this year by Academic Press. In a guest post for the FLI blog, he summarizes the motivation for, and results behind, his work.

Mass human starvation is currently likely if global agricultural production is dramatically reduced for several years following a global catastrophe: e.g. super volcanic eruption, asteroid or comet impact, nuclear winter, abrupt climate change, super weed, super crop pathogen, super bacterium, or super crop pest. Even worse, such a catastrophe may cause the collapse of civilization, and recovery is not guaranteed. Therefore, this could affect many future generations.

The primary historic solution developed over the last several decades is increased food storage. However, storing up enough food to feed everyone would take a significant amount of time and would increase the price of food, killing additional people due to inadequate global access to affordable food. Humanity is far from doomed, however, in these situations – there are solutions.

In our new book Feeding Everyone No Matter What, we present a scientific approach to the practicalities of planning for a long-term interruption to food production. The book provides an order-of-magnitude technical analysis comparing the food requirements of all humans for five years with the conversion of existing vegetation and fossil fuels to edible food. It presents mechanisms for global-scale conversion including: natural gas-digesting bacteria, extracting food from leaves, and conversion of fiber by enzymes, mushroom or bacterial growth, or a two-step process involving partial decomposition of fiber by fungi and/or bacteria and feeding them to animals such as beetles, ruminants (cows, deer, etc.), rats and chickens. It includes an analysis of the ramp rates for each option, and the results show that careful planning and global cooperation could feed everyone and preserve the bulk of biodiversity even in the most extreme circumstances.

The book also discusses options that may work at the household level. It encourages scientists and laypeople to perform alternate food growing and eating experiments, and to share the results at http://www.appropedia.org/Feeding_Everyone_No_Matter_What so that everyone can learn from them.


November 2014 Newsletter

In the News

* The winners of the essay contest we ran in partnership with the Foundational Questions Institute have been announced! Check out the awesome winning essays on the FQXi website.

* Financial Times ran a great article about artificial intelligence and the work of organizations like FLI, with thoughts from Elon Musk and Nick Bostrom.

* Stuart Russell offered a response in a featured conversation on Edge about “The Myth of AI”. Read the conversation here.

* Check out the piece in Computerworld on Elon Musk and his comments on artificial intelligence.

* The New York Times featured a fantastic article about broadening perspectives on AI, featuring Nick Bostrom, Stephen Hawking, Elon Musk, and more.

* Our colleagues at the Future of Humanity Institute attended the “Biosecurity 2030” meeting in London and had this to report:

+ About 12 projects have been stopped in the U.S. following the White House moratorium on gain-of-function research.

+ One of the major H5N1 (bird flu) research groups still has not vaccinated its researchers against H5N1, even though this seems like an obvious safety protocol.

+ The bioweapons convention has no enforcement mechanism at all, and nothing comprehensive on dual-use issues.

—————

Projects and Events

* FLI advisory board member Martin Rees gave a great talk at the Harvard Kennedy School about existential risk. Check out the profile of the event in The Harvard Crimson newspaper.

—————

Other Updates

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.

FLI launch event @ MIT

The Future of Technology: Benefits and Risks

FLI was officially launched Saturday May 24, 2014 at 7pm in MIT auditorium 10-250 – see video, transcript and photos below.

The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about what we can do now to improve the chances of reaping the benefits and avoiding the risks, moderated by Alan Alda and featuring George Church (synthetic biology), Ting Wu (personal genetics), Andrew McAfee (second machine age, economic bounty and disparity), Frank Wilczek (near-term AI and autonomous weapons) and Jaan Tallinn (long-term AI and singularity scenarios).

  • Alan Alda is an Oscar-nominated actor, writer, director, and science communicator, whose contributions range from M*A*S*H to Scientific American Frontiers.
  • George Church is a professor of genetics at Harvard Medical School, initiated the Personal Genome Project, and invented DNA array synthesizers.
  • Andrew McAfee is Associate Director of the MIT Center for Digital Business and author of the New York Times bestseller The Second Machine Age.
  • Jaan Tallinn is a founding engineer of Skype and philanthropically supports numerous research organizations aimed at reducing existential risk.
  • Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.
  • Ting Wu is a professor of Genetics at Harvard Medical School and Director of the Personal Genetics Education project.

 

Photos from the talk