An Explosion of CRISPR Developments in Just Two Months

 

A Battle Is Waged

A battle over CRISPR is raging through the halls of justice. Almost literally. Two of the key players in the development of the CRISPR technology, Jennifer Doudna and Feng Zhang, have turned to the court system to determine which of them should receive patents for the discovery of the technology. The fight went public in January and was amplified by the release of an article in Cell that many argued presented a one-sided version of the history of CRISPR research. Yet CRISPR’s most remarkable feature is not its history, but how rapidly progress in the field is accelerating.


A CRISPR Explosion

CRISPR, which stands for clustered regularly interspaced short palindromic repeats, refers to segments of DNA that function in the immune systems of prokaryotes. The system relies on the Cas9 enzyme* and guide RNAs to find specific, problematic segments of a gene and cut them out. Just three years ago, researchers discovered that this same technique could be applied to humans. As the accuracy, efficiency, and cost-effectiveness of the system became more and more apparent, researchers and pharmaceutical companies jumped on the technique, modifying it, improving it, and testing it on different genetic issues.

Then, in 2015, CRISPR really exploded onto the scene, earning recognition as the top scientific breakthrough of the year by Science Magazine. But the technology is not slowing down; it appears to be speeding up. In just two months — from mid-November 2015 to mid-January 2016 — ten major CRISPR developments (including the patent war) have grabbed headlines. More importantly, each of these developments could play a crucial role in steering the course of genetics research.

 

Malaria



CRISPR made big headlines in late November of 2015, when researchers announced they could possibly eliminate malaria by using the gene-editing technique to start a gene drive in mosquitoes. A gene drive occurs when a preferred version of a gene replaces the unwanted version in nearly every case of reproduction, overriding Mendelian inheritance, under which each of the two copies of a gene has an equal chance of being passed on to the next generation. Gene drives had long been theorized, but there was no practical way to create one. Then along came CRISPR. With this new technology, researchers at UC campuses in Irvine and San Diego were able to create an effective anti-malaria gene drive in mosquitoes in their labs. Because mosquitoes transmit malaria, a gene drive in the wild could potentially eradicate the disease very quickly. More research is necessary, though, to ensure the effectiveness of the technique and to try to prevent any unanticipated negative effects that could occur if we permanently alter the genes of a species.
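To see why a gene drive spreads so much faster than ordinary inheritance, here is a rough, back-of-the-envelope simulation. It is a toy model with invented numbers (population size, starting frequency, generation count), not a model of the UC Irvine and San Diego experiments; it simply contrasts the 50/50 Mendelian coin flip with a drive allele that is passed on whenever a parent carries it.

import random

def inherit(parent, drive):
    # Mendelian inheritance: each of the parent's two gene copies is passed on
    # with equal probability. With a gene drive, a parent carrying the
    # engineered allele 'D' passes it on essentially every time.
    if drive and 'D' in parent:
        return 'D'
    return random.choice(parent)

def simulate(drive, generations=12, pop_size=1000, start_carriers=0.01):
    # Start with a small fraction of heterozygous ('D', 'd') carriers.
    pop = [('D', 'd') if random.random() < start_carriers else ('d', 'd')
           for _ in range(pop_size)]
    print(f"--- gene drive: {drive} ---")
    for gen in range(1, generations + 1):
        pop = [(inherit(random.choice(pop), drive),
                inherit(random.choice(pop), drive))
               for _ in range(pop_size)]
        freq = sum(a == 'D' for ind in pop for a in ind) / (2 * pop_size)
        print(f"generation {gen:2d}: engineered-allele frequency {freq:.2f}")

simulate(drive=False)  # the allele's frequency drifts around its tiny starting value
simulate(drive=True)   # the allele sweeps toward 100% within about a dozen generations

Even starting at one percent of the population, the driven allele approaches fixation within roughly ten generations in this toy model, while the ordinary Mendelian allele barely moves.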

 

Muscular Dystrophy

A few weeks later, just as 2015 was coming to an end, the New York Times reported that three different groups of researchers had successfully used CRISPR in mice to treat Duchenne muscular dystrophy (DMD), which, though rare, is among the most common fatal genetic diseases. With DMD, boys have a gene mutation that prevents the creation of a specific protein necessary to keep muscles from deteriorating. Patients are typically in wheelchairs by the time they’re ten, and they rarely live past their twenties due to heart failure. Scientists have long hoped this disease would be well suited to gene therapy, but locating and removing the problematic DNA has proven difficult. In a new effort, researchers loaded CRISPR onto a harmless virus and injected it either into mouse fetuses or into diseased adult mice to remove the mutated section of the gene. While the treated mice didn’t achieve the same levels of muscle mass seen in the control mice, they still showed significant improvement.

Writing for Gizmodo, George Dvorsky said, “For the first time ever, scientists have used the CRISPR gene-editing tool to successfully treat a genetic muscle disorder in a living adult mammal. It’s a promising medical breakthrough that could soon lead to human therapies.”

 

Blindness

Only a few days after the DMD story broke, researchers from the Cedars-Sinai Board of Governors Regenerative Medicine Institute announced progress they’d made in treating retinitis pigmentosa, an inherited retinal degenerative disease that causes blindness. Using the CRISPR technology on affected rats, the researchers were able to clip the problematic gene, which, according to the abstract in Molecular Therapy, “prevented retinal degeneration and improved visual function.” As Shaomei Wang, one of the scientists involved in the project, explained in the press release, “Our data show that with further development, it may be possible to use this gene-editing technique to treat inherited retinitis pigmentosa in patients.” This is an important step toward using CRISPR in people, and it follows soon on the heels of news that came out in November from the biotech startup Editas Medicine, which hopes to use CRISPR in people by 2017 to treat another rare genetic condition, Leber congenital amaurosis, which also causes blindness.

 

Gene Control

January saw another major development as scientists announced that they’d moved beyond using CRISPR to edit genes and were now using the technique to control genes. In this case, the Cas9 enzyme is essentially dead, such that, rather than clipping the gene, it acts as a transport for other molecules that can manipulate the gene in question. This progress was written up in The Atlantic, which explained: “Now, instead of a precise and versatile set of scissors, which can cut any gene you want, you have a precise and versatile delivery system, which can control any gene you want. You don’t just have an editor. You have a stimulant, a muzzle, a dimmer switch, a tracker.” There are countless benefits this could have, from boosting immunity to improving heart muscles after a heart attack. Or perhaps we could finally cure cancer. What better solution to a cell that’s reproducing uncontrollably than a system that can just turn it off?

 

CRISPR Control or Researcher Control

But just how much control do we really have over the CRISPR-Cas9 system once it’s been released into a body? Or, for that matter, how much control do we have over scientists who might want to wield this new power to create the ever-terrifying “designer baby”?


The short answer to the first question is: there will always be risks. But even though CRISPR-Cas9 is already remarkably accurate, scientists haven’t accepted that as good enough, and they’ve been making it even more accurate. In December, researchers at the Broad Institute published the results of their successful attempt to tweak the guide RNAs, decreasing the likelihood of a mismatch between the intended target gene and the gene the system actually cuts. Then, a month later, Nature published research out of Duke University, where scientists had tweaked another section of the Cas9 enzyme, making its cuts even more precise. And this is just a start. Researchers recognize that to successfully use CRISPR-Cas9 in people, it will have to be practically perfect every time.

But that raises the second question: Can we trust all scientists to do what’s right? Unfortunately, this question had to be asked in response to research out of China last April, in which scientists used CRISPR to attempt to genetically modify non-viable human embryos. While the results proved that we still have a long way to go before the technology will be ready for human trials, the fact that the research was done at all raised red flags and hackles among genetics researchers and the press. These questions may have first popped up back in March and April of 2015, but the official response came at the start of December, when geneticists, biologists, and doctors from around the world convened in Washington, D.C. for the International Summit on Human Gene Editing. Ultimately, though, the results of the summit were vague, essentially encouraging scientists to proceed with caution but issuing no outright bans. However, at this stage of research, the benefits of CRISPR likely outweigh the risks.

 

Big Pharma



“Proceed with caution” might be just the right advice for pharmaceutical companies that have jumped on the CRISPR bandwagon. With so many amazing possibilities to improve human health, it comes as no surprise that companies are betting, er, investing big money in CRISPR. Hundreds of millions of dollars flooded the biomedical start-up industry throughout 2015, with most going to two main players, Editas Medicine and Intellia Therapeutics. Then, in the middle of December, Bayer announced a joint venture with CRISPR Therapeutics to the tune of $300 million. That makes three major players hoping to win big with a CRISPR gamble. But just how big of a gamble can such an impressive technology be? Well, every company is required to license the patents for a fee, but right now, because of the legal battles surrounding CRISPR, the original patents (which the companies have already licensed) have been put on hold while the courts try to figure out who is really entitled to them. If the patents change ownership, that could be a big game-changer for all of the biotech companies that have invested in CRISPR.

 

Upcoming Concerns?

On January 14, British regulators began reviewing a request by the Francis Crick Institute (FCI) to begin gene-editing research on human embryos. While Britain’s rules on human embryo research are more permissive than those of the U.S. — which has a complete ban on genetically modifying human embryos — they are still strict, requiring that embryos be destroyed by the 14th day. The FCI requested a license to begin research on day-old, “spare” IVF embryos to develop a better understanding of why some embryos die at early stages in the womb, in an attempt to decrease the number of miscarriages women have. This germ-line editing research is, of course, now possible because of the recent CRISPR breakthroughs. If this research is successful, The Independent argues, “it could lead to pressure to change the existing law to allow so-called ‘germ-line’ editing of embryos and the birth of GM children.” However, Dr. Kathy Niakan, the lead researcher on the project, insists this will not create a slippery slope to “designer babies.” As she explained to the Independent, “Because in the UK there are very tight regulations in this area, it would be completely illegal to move in that direction. Our research is in line with what is allowed and in keeping in the UK since 2009, which is purely for research purposes.”

Woolly Mammoths

Woolly Mammoths! What better way to end an article about how CRISPR can help humanity than with the news that it can also help bring back species that have gone extinct? OK, admittedly, the news that George Church wants to resurrect the woolly mammoth has been around since last spring. But the Huffington Post ran a feature about his work in December, and it turns out his research has advanced enough that he predicts the woolly mammoth could return in as little as seven years. This won’t be a true woolly mammoth, though; it will actually be an Asian elephant boosted by woolly mammoth DNA. Among the goals of the project is to help prevent the extinction of the Asian elephant, and woolly mammoth DNA could help achieve that: the idea is that a hybrid elephant would be able to survive more successfully as the climate changes. If this works, the method could be applied to other plant and animal species to increase stability and decrease extinction rates. As Church tells the Huffington Post, “the fact is we’re not bringing back species — [we’re] strengthening existing species.”


And what more could we ask of genetics research than to strengthen a species?

*Cas9 is only one of the enzymes that can work with the CRISPR system, but researchers have found it to be the most accurate and efficient.

Are Humans Dethroned in Go? AI Experts Weigh In

Today DeepMind announced a major AI breakthrough: they’ve developed software that can defeat a professional human player at the game of Go. This is a feat that has long eluded computers.

Francesca Rossi, a top AI scientist with IBM, told FLI, “AI researchers were waiting for computers to master Go, but we did not expect this to happen so soon. Compared to the chess-playing program DeepBlue, this result addresses what was believed to be a harder problem since in Go there are many more moves.”

Victoria Krakovna, a co-founder of FLI and AI researcher, agreed. “Go is a far more challenging game for computers than chess, with a combinatorial explosion of possible board positions, and many experts were not expecting AI to crack Go in the next decade,” she said.

Go is indeed a complex game, and the number of possible moves is astronomical — while chess has approximately 35^80 possible sequences of moves, Go has around 250^150. To put that in perspective, 35^80 is a number too big to be calculated by a standard, non-graphing calculator, and it exceeds the number of atoms in our observable universe. So it’s no wonder most AI researchers expected that close to a decade could pass before an AI system would beat some of the best Go players in the world.
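As a quick sanity check on those figures (treating 35^80 and 250^150 as the rough estimates quoted above rather than exact game-tree counts), a few lines of Python compare the exponents with the roughly 10^80 atoms in the observable universe:

import math

chess_sequences = 80 * math.log10(35)    # log10 of 35^80, roughly 123.5
go_sequences = 150 * math.log10(250)     # log10 of 250^150, roughly 359.7
atoms_in_universe = 80                   # the observable universe holds roughly 10^80 atoms

print(f"chess: ~10^{chess_sequences:.0f} possible move sequences")
print(f"go:    ~10^{go_sequences:.0f} possible move sequences")
print(f"atoms: ~10^{atoms_in_universe}")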

Krakovna explained that DeepMind’s program, AlphaGo, tackled the problem with a combination of supervised learning and reinforcement learning: the program was first trained on a large database of expert human games, and then continued to improve through trial and error as it played against itself.
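The sketch below is emphatically not AlphaGo, which pairs deep neural networks with Monte Carlo tree search on the full game of Go; it is only a minimal illustration of the same two-stage recipe Krakovna describes, supervised pretraining followed by reinforcement through self-play, applied to a trivial stone-taking game with a tabular policy. The game, the update rules, and all numbers here are invented for illustration.

import math
import random
from collections import defaultdict

TAKE = (1, 2, 3)              # legal moves: remove 1-3 stones; taking the last stone wins
START = 21                    # size of the pile at the start of each game
prefs = defaultdict(float)    # tabular "policy": a preference score per (stones_left, move)

def policy(stones):
    # Softmax over the legal moves for the current pile size.
    legal = [m for m in TAKE if m <= stones]
    weights = [math.exp(prefs[(stones, m)]) for m in legal]
    total = sum(weights)
    return legal, [w / total for w in weights]

def sample_move(stones):
    legal, probs = policy(stones)
    return random.choices(legal, probs)[0]

# Stage 1: supervised learning. Nudge the policy toward the moves an "expert"
# (here, the known optimal strategy of leaving a multiple of 4) would play.
def expert_move(stones):
    target = stones % 4
    return target if target in TAKE else random.choice([m for m in TAKE if m <= stones])

for _ in range(3000):
    stones = random.randint(1, START)
    best = expert_move(stones)
    for m in TAKE:
        if m <= stones:
            prefs[(stones, m)] += 0.05 if m == best else -0.02

# Stage 2: reinforcement learning through self-play. The same policy plays both
# sides; every move made by the eventual winner is reinforced, and every move
# made by the loser is discouraged.
for _ in range(20000):
    stones, player, history = START, 0, []
    while stones > 0:
        move = sample_move(stones)
        history.append((player, stones, move))
        stones -= move
        player ^= 1
    winner = history[-1][0]   # whoever took the last stone
    for who, state, move in history:
        prefs[(state, move)] += 0.02 if who == winner else -0.02

print(sample_move(START))     # after training, this is usually the optimal opening (take 1)

In this toy version, the supervised stage simply gives the self-play stage something better than random play to start from, which mirrors the motivation for pretraining on human games before self-play.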

Berkeley AI professor Stuart Russell, co-author of the standard AI textbook, told us, “The result shows that the combination of deep reinforcement learning and so-called “value networks” that help the program decide which possibilities are worth considering leads to a very powerful system for Go.”

But just how big of a deal is this? For the results published in Nature, AlphaGo beat the European Go champion, Fan Hui, five games to zero; however, it’s not clear yet how the software would fare against the world champion. Rossi and Russell both weighed in.

Said Rossi, “The innovative techniques developed by DeepMind to achieve this result, that combine new machine learning approaches with search, seem to be general enough to be applicable also to other scenarios, not just Go or playing a game. This makes the result even more important and promising.”

However, as impressed as Russell is by these results, he wasn’t quite sure what to make of the program beating the European champion but not the world champion, given that elite Go is strongly dominated by Asian players such as Lee Se-dol. He explained, “The game had been considered one of the hardest to crack and this is a very impressive result. It’s hard to say yet whether this event is as big as the defeat of Kasparov, who was the human world champion [in chess, when he lost to Deep Blue in 1997]. Fan Hui is an excellent player but the current world champion is considerably stronger. On the other hand, Fan Hui didn’t win a single game, so I cannot predict with confidence that human supremacy will last much longer.”

It turns out that a match between world champion Lee Se-dol and AlphaGo will take place this coming March. An AI event to look forward to!

We’re excited to add an edit to this article: Bart Selman, another top AI researcher, followed up with us, sending us his thoughts on this achievement.

Like Russell and Rossi, Selman was impressed by the program’s ability to tackle a game so much more complicated than chess, but he also added, “AlphaGo is such exciting advance because it combines the strength of deep learning to discover subtle patterns in a large collection of board sequences with the latest clever game-space exploration techniques. So, it represents the first clear hybrid of deep learning with an algorithmic search method. Such merging of AI techniques has tremendous potential.

“In terms of novel AI and machine learning, this is a more significant advance than even IBM’s DeepBlue represented. On the other hand, in terms of absolute performance, DeepBlue still rules because it bested the best human player in the world. However, with DeepMind’s new learning based approach, it now seems quite likely that superhuman Go play is within reach. It will be exciting to follow AlphaGo’s upcoming matches.”

A survey of research questions for robust and beneficial AI

A collection of example projects and research questions within each area can be found here.

Research priorities for robust and beneficial AI

A summary of the research areas covered by our grants program can be found here.

If you are interested in promoting the safe and beneficial development of technology, we warmly invite you to join the Future of Life Institute’s volunteer team. Here, you will have the chance to work alongside other volunteers to catalyze research and proposals that benefit humanity’s future through writing, translation, outreach activities, and exchanges with experts and scholars. Depending on your interests, volunteers can learn about technological risk and safety and gain experience in writing, research, and outreach. Volunteers who show leadership ability may also have the opportunity to become team leads in the future.

We welcome you to join us; if interested, please contact lina@futureoflife.org

 

AI FAQs

Open Letter Autonomous Weapons

This open letter was announced at the opening of the IJCAI 2015 conference on July 28.
Journalists who wish to see the press release may contact Toby Walsh.
Hosting, signature verification and list management are supported by FLI; for administrative questions about this letter, please contact Max Tegmark.

AN OPEN LETTER FROM AI & ROBOTICS RESEARCHERS

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

List of signatories


Digital Economy Open Letter

An open letter by a team of economists about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact. (Jun 4, 2015)

AI Open Letter

(If you have questions about this letter, please contact Max Tegmark)

An Open Letter

Research Priorities for Robust and Beneficial Artificial Intelligence

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

List of signatories





Grants Timeline

Grants F.A.Q.

Grants RFP Overview

Predicting the Future (of Life)

It’s often said that the future is unpredictable. Of course, that’s not really true. With extremely high confidence, we can predict that the sun will rise in Santa Cruz, California at 7:12 am local time on Jan 30, 2016. We know the next total solar eclipse over the U.S. will be August 21, 2017, and we also know there will be one June 25, 2522. Read more

Is the Legal World Ready for AI?

Our smartphones are increasingly giving us advice and directions based on their best Internet searches. Driverless cars are rolling down the roads in many states. Increasingly complicated automation is popping up in nearly every industry. As exciting and beneficial as most of these advancements are, problems will still naturally occur. Is the legal system keeping up with the changing technology?

Matt Scherer, a lawyer and legal scholar based in Portland, Oregon, is concerned that the legal system is not currently designed to absorb the unique problems that will likely arise with the rapid growth of artificial intelligence. In many cases, questions that seem simple (e.g., who is responsible when something goes wrong?) turn out to be incredibly complex when artificial intelligence systems are involved.

Last year, Scherer wrote an article, soon to be published in the Harvard Journal of Law and Technology, that highlights the importance of attempting to regulate this growing field, while also outlining the many challenges. Concerns about overreach and calls for regulation are common with new technologies, but “what is striking about AI […] is that many of the concerns are being voiced by leaders of the tech industry.”

In fact, it was many of these publicized concerns — by the likes of Elon Musk, Bill Gates, and Steve Wozniak — that led Scherer to begin researching AI from a legal perspective, and he quickly realized just how daunting AI law might be. As he says in the article, “The traditional methods of regulation—such as product licensing, research and development systems, and tort liability—seem particularly unsuited to managing the risks associated with intelligent and autonomous machines.” He goes on to explain that because so many people in so many geographical locations — and possibly even from different organizations — might be involved in the creation of a given AI system, it’s difficult to predict what regulations would be necessary and most useful for that system. Meanwhile, the complexity behind the creation of the AI, when paired with the automation and machine learning of the system, could make it difficult to determine who is at fault if something catastrophic goes wrong.

Regulating something, such as AI, that doesn’t have a clear and concise definition poses its own unique problems.

Artificial intelligence is typically compared to human intelligence, which also isn’t well defined. As the article explains: “Definitions of intelligence thus vary widely and focus on a myriad of interconnected human characteristics that are themselves difficult to define, including consciousness, self-awareness, language use, the ability to learn, the ability to abstract, the ability to adapt, and the ability to reason.” This question of definition is further exacerbated by trying to understand what the intent of the machine (or system) was: “Whether and when a machine can have intent is more a metaphysical question than a legal or scientific one, and it is difficult to define “goal” in a manner that avoids requirements pertaining to intent and self-awareness without creating an over-inclusive definition.”

Scherer’s article goes into much more depth about the challenges of regulating AI. It’s a fascinating topic that we’ll also be covering in further detail in coming weeks. In the meantime, the article is a highly recommended read.


 

 

The Nuclear Weapons Threat and the Political Campaigns

After the December 15 Republican debate, Donald Trump caught some flak for not seeming to know what the nuclear triad is. But how familiar are most people with what the nuclear triad involves? Did you know that we still have nuclear weapons on hair-trigger alert — a relic of Cold War-era policy — even though that probably increases the risk of nuclear war? And let’s not even get started on self-assured destruction (that’s right, self-assured, not mutually assured).

Or rather, let’s do get started on it. One of the biggest reasons nuclear weapons remain such a threat today is that no one is talking about them. They merited only a few minutes’ discussion at the December debate, and few of the political candidates on either side of the aisle have taken much of a stance on them. Admittedly, North Korea and Iran have been in the news for their nuclear arsenals — or lack thereof — but how many Americans know much about their own country’s arsenal? Before we can truly move to a safer society, we need to discuss and understand where we are now.

Dr. David Wright, Co-Director of the Global Security Program for the Union of Concerned Scientists, sat down with me to discuss these issues and many more associated with the global nuclear threat. You can listen to the full podcast by clicking on the link at the top of this article, or by visiting us at SoundCloud.

However, whether you listen or not (you should listen…), there are some important points to take away:

As Wright mentions early in the interview:

“Some people in the military today say that the risk of an accidental nuclear war starting because of a mistaken launch is greater than any other type of start to a nuclear war.”

But nuclear weapons have plenty of safeguards to prevent an accidental launch of a nuclear weapon, right? Not necessarily. The purpose of a hair-trigger alert is to quickly — in a matter of minutes — launch nuclear weapons if we detect an incoming strike from another country. In such an event, the safeguards would be overridden. If we get a false reading, which has happened many times throughout our nuclear history, those safety features may not help. Wright mentions one particular instance from the early 1980s, but the Union of Concerned Scientists has also recorded dozens of these events.

But hair-trigger alert is necessary to prevent another country from launching a nuclear strike against us, right? Not really. The whole point of our nuclear triad is that we’re fully prepared to strike back. Even if our nuclear planes and silos were taken out in an attack, our nuclear submarines — which are difficult to detect in the ocean — would still be able to retaliate, which should ensure deterrence.

Wright argues that “nuclear weapons should only be used to deter the use of nuclear weapons by another country, and if necessary, to respond to the use of nuclear weapons against you.” This is known as “sole purpose,” and it runs contrary to the Obama administration’s stance, which left open some options for using nuclear weapons in response to other threats, such as chemical or biological weapons. When a country reserves nuclear weapons for purposes beyond deterrence, specialized weapons become necessary for each potential threat. The result is a huge nuclear arsenal that really shouldn’t be necessary and would hopefully never be used, yet costs taxpayers tremendous amounts of money. A nuclear arsenal designed only to deter another nuclear attack would look much different and be significantly smaller.

In fact, as Wright points out, the Joint Chiefs of Staff reported that we could cut our own nuclear arsenal from about 1500 to 1000 weapons, and we would still be just as safe. Instead, we’re about to spend $1 trillion over the course of the next couple of decades to enhance our current nuclear weapons systems.

As we get closer to the U.S. primary elections, it would be good for all of the candidates to go back to square one and have a real discussion about the goals of a nuclear arsenal and what the country’s real concerns are. Said Wright, “If we’re worried about terrorism, nuclear weapons are not going to help with terrorism, so let’s take that off the table.” He argued that the candidates really need to discuss “what is it we actually need or want nuclear weapons to do and what does that lead to.” Most likely, that discussion would lead to a very different and significantly smaller nuclear weapons arsenal than the one we have now.

Another important nuclear risk to consider is that of self-assured destruction, also known as the nuclear winter theory. Initially, the idea behind deterrence was that one country would avoid attacking another out of fear of retaliation. However, as nuclear winter became better understood, scientists and world leaders realized that even if a country launched an attack and faced no retaliation, its own citizens would still be at risk. That’s because so much ash, soot, smoke, and particulate matter would block out the sun, sending global temperatures plummeting for many years. This would cause severe food shortages and mass starvation. Even a small nuclear war between India and Pakistan could lead to 1 billion deaths worldwide as a result of nuclear winter.

And speaking of extreme climate change, just how do the risks of our current state of climate change compare to the risks of nuclear war? Wright explains that it’s not the best comparison because nuclear war is lower risk but higher consequence (i.e. less likely to happen, but if it does, a lot more people will die). However, he also said:

“Nuclear weapons are harder to deal with because they’re somewhat invisible. A lot of people don’t realize that since the end of the Cold War that there are still about 15,000 nuclear weapons in the world, that those weapons are typically much larger — much, much larger — than the weapons used in Hiroshima and Nagasaki. And so it’s hard to get people to pay attention to this. It’s hard to get political will to even start to grapple with this problem.”

So why are nuclear weapons such a difficult problem to deal with, and why does the issue seem to be escalating again? Perhaps one part of the answer is that too often, people view nuclear weapons as the “ultimate safety net.” Yet, as Wright says:

“Today, nuclear weapons are a liability. They don’t address the key problems that we’re facing, like terrorism and things like that, and by having large numbers of them around … that you could have a very rapid cataclysm that people are, you know, reeling from forever.”

These are only a few of the highlights of the podcast; please listen to the full version here. And let’s start talking about this!

The Future of AI: Quotes and highlights from Monday’s NYU symposium

A veritable who’s who in artificial intelligence spent today discussing the future of their field and how to ensure it will be a good one. This exciting conference was organized by Yann LeCun, head of Facebook’s AI Research, together with a team of his colleagues at New York University. We plan to post a more detailed report once the conference is over, but in the meantime, here are some highlights from today.

One recurrent theme has been optimism, both about the pace at which AI is progressing and about its ultimate potential for making the world a better place. IBM’s Senior VP John Kelly said, “Nothing I have ever seen matches the potential of AI and cognitive computing to change the world,” while Bernard Schölkopf, Director of the Max Planck Institute for Intelligent Systems, argued that we are now in the cybernetic revolution. Eric Horvitz, Director of Microsoft Research, recounted how 25 years ago, he’d been inspired to join the company by Bill Gates saying, “I want to build computers that can see, hear and understand,” and he described how we are now making great progress toward getting there. NVIDIA founder Jen-Hsun Huang said, “AI is the final frontier […] I’ve watched it hyped so many times, and yet, this time, it looks very, very different to me.”

In contrast, there was much less agreement about if or when we’d get human-level AI, which Demis Hassabis from DeepMind defined as “general AI – one system or one set of systems that can do all these different things humans can do, better.” Whereas Demis hoped for major progress within decades, AAAI President Tom Dietterich spoke extensively about the many remaining obstacles, and Eric Horvitz cautioned that this may be quite far off, saying, “we know so little about the magic of the human mind.” On the other hand, Bart Selman, AI Professor at Cornell, said, “within the AI community […] there are a good number of AI researchers that can see systems that cover let’s say 90% of human intelligence within a few decades.”

Murray Shanahan, AI professor at Imperial College, appeared to capture the consensus about what we know and don’t know about the timeline, arguing that there are two common mistakes made “particularly by the media and the public.” The first, he explained, “is that human level AI, general AI, is just around the corner, that it’s just […] a couple of years away,” while the second mistake is “to think that it will never happen, or that it will happen on a timescale that is so far away that we don’t need to think about it very much.”

Amidst all the enthusiasm about the benefits of AI technology, many speakers also spoke about the importance of planning ahead to ensure that AI becomes a force for good. Eric Schmidt, former CEO of Google and now Chairman of its parent company Alphabet, urged the AI community to rally around three goals, which were also echoed by Demis Hassabis from DeepMind:

   1. AI should benefit the many, not the few (a point also argued by Emma Brunskill, AI professor at Carnegie Mellon).

   2. AI R&D should be open, responsible and socially engaged.  

   3. Developers of AI should establish best practices to minimize risks and maximize the beneficial impact.

The Wisdom Race Is Heating Up

There’s a race going on that will determine the fate of humanity. Just as it’s easy to miss the forest for all the trees, however, it’s easy to miss this race for all the scientific news stories about breakthroughs and concerns. What do all these headlines from 2015 have in common?

“AI masters 49 Atari games without instructions”
“Self-driving car saves life in Seattle”
“Pentagon Seeks $12Bn for AI Weapons”
“Chinese Team Reports Gene-Editing Human Embryos”
“Russia building Dr. Strangelove’s Cobalt bomb”

They are all manifestations of the aforementioned race heating up: the race between the growing power of technology and the growing wisdom with which we manage it. The power is growing because our human minds have an amazing ability to understand the world and to convert this understanding into game-changing technology. Technological progress is accelerating for the simple reason that breakthroughs enable other breakthroughs: as technology gets twice as powerful, it can often be used to design and build technology that is twice as powerful in turn, triggering repeated capability doubling in the spirit of Moore’s law.

What about the wisdom ensuring that our technology is beneficial? We have technology to thank for all the ways in which today is better than the Stone Age, but this is not only thanks to the technology itself but also to the wisdom with which we use it. Our traditional strategy for developing such wisdom has been learning from mistakes: We invented fire, then realized the wisdom of having fire alarms and fire extinguishers. We invented the automobile, then realized the wisdom of having driving schools, seat belts and airbags.

In other words, it was OK for wisdom to sometimes lag behind in the race, because it would catch up when needed. With more powerful technologies such as nuclear weapons, synthetic biology and future strong artificial intelligence, however, learning from mistakes is not a desirable strategy: we want to develop our wisdom in advance so that we can get things right the first time, because that might be the only time we’ll have. In other words, we need to change our approach to tech risk from reactive to proactive. Wisdom needs to progress faster.

This year’s Edge Question “What is the most interesting recent news and what makes it important?” is cleverly ambiguous, and can be interpreted either as a call to pick a news item or as asking about the very definition of “interesting and important news.” If we define “interesting” in terms of clicks and Nielsen ratings, then top candidates must involve sudden change of some sort, whether it be a discovery or a disaster. If we instead define “interesting” in terms of importance for the future of humanity, then our top list should include even developments too slow to meet journalists’ definition of “news,” such as “Globe keeps warming.” In that case, I’ll put the fact that the wisdom race is heating up at the very top of my list. Why?

From my perspective as a cosmologist, something remarkable has just happened: after 13.8 billion years, our universe has finally awoken, with small parts of it becoming self-aware, marveling at the beauty around them, and beginning to decipher how their universe works. We, these self-aware life forms, are using our new-found knowledge to build technology and modify our universe on ever grander scales.

This is one of those stories where we get to pick our own ending, and there are two obvious ones for humanity to choose between: either win the wisdom race and enable life to flourish for billions of years, or lose the race and go extinct. To me, the most important scientific news is that after 13.8 billion years, we finally get to decide—probably within centuries or even decades.

Since the decision about whether to win the race sounds like such a no-brainer, why are we still struggling with it? Why is our wisdom for managing technology so limited that we didn’t do more about climate change earlier, and have come close to accidental nuclear war over a dozen times? As Skype-founder Jaan Tallinn likes to point out, it is because our incentives drove us to a bad Nash equilibrium. Many of humanity’s most stubborn problems, from destructive infighting to deforestation, overfishing and global warming, have this same root cause: when everybody follows the incentives they are given, it results in a worse situation than cooperation would have enabled.

Understanding this problem is the first step toward solving it. The wisdom we need to avoid lousy Nash equilibria must be developed at least in part by the social sciences, to help create a society where individual incentives are aligned with the welfare of humanity as a whole, encouraging collaboration for the greater good. Evolution endowed us with compassion and other traits to foster collaboration, and when more complex technology made these evolved traits inadequate, our forebears developed peer pressure, laws and economic systems to steer their societies toward good Nash equilibria. As technology gets ever more powerful, we need ever stronger incentives for those who develop, control and use it to make its beneficial use their top priority.

Although the social sciences can help, plenty of technical work is needed as well in order to win the race. Biologists are now studying how best to deploy (or not) tools such as CRISPR genome editing. 2015 will be remembered as the year when the beneficial AI movement went mainstream, engendering productive symposia and discussions at all the largest AI conferences. Supported by many millions of dollars in philanthropic funding, large numbers of AI researchers around the world have now started researching the fascinating technical challenges involved in keeping future AI systems beneficial. In other words, the laggard in the all-important wisdom race gained significant momentum in 2015! Let’s do all we can to make future top news stories be about wisdom winning the race, because then we all win.

This article was originally posted on Edge.org in response to the question: “What do you consider the most interesting recent [scientific] news? What makes it important?”

North Korea’s Nuclear Test

North Korea claims that, on January 6, it successfully tested its first hydrogen bomb. Seismic analysis indicates that the country did, in fact, test what was likely a nuclear bomb, but experts — and now the White House — dispute whether it was a real hydrogen bomb.

David Wright, Co-director of the Global Security Program for the Union of Concerned Scientists, said, “They [N. Korea] are claiming that it was a hydrogen test, and as far as I can tell, nobody believes it was a real, two-stage hydrogen bomb, which is the staple of the US and Russian and Chinese arsenals.”

This is the fourth nuclear test North Korea has conducted since 2006. The first three are suspected to have been atomic bombs, more similar to those used on Japan during World War II. The power of an atomic bomb comes from uranium or plutonium atoms splitting apart in a process known as fission, and the result for a first-generation bomb is a yield on the order of about 10-20 kilotons. This is consistent with the estimated yields of those first three tests.


When a hydrogen bomb explodes, hydrogen nuclei within it fuse together, and the resulting yield can be significantly more powerful than that of a fission bomb — in fact, a fission bomb is used to ignite a hydrogen bomb. If the explosion on January 6 had come from a true hydrogen bomb, the resulting yield would have been about 1,000 times larger than the North’s earlier three tests.

“It appears from the numbers I’ve seen that the yield is very similar to what their recent yields were, maybe 5-10 kilotons, which, if that number holds up, is probably too small to be a true hydrogen bomb,” Wright explained.
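The arithmetic behind that skepticism is simple. Using the illustrative figures from the text above (10-20 kilotons for a first-generation fission bomb, a factor of roughly 1,000 for a true two-stage design, and Wright’s 5-10 kiloton estimate for the January 6 test):

fission_yield_kt = (10, 20)   # typical first-generation fission bomb, per the text above
h_bomb_factor = 1000          # rough multiplier for a true two-stage hydrogen bomb
observed_kt = (5, 10)         # Wright's estimate for the January 6 test

expected_kt = tuple(y * h_bomb_factor for y in fission_yield_kt)
print(f"true hydrogen bomb: ~{expected_kt[0]:,}-{expected_kt[1]:,} kt (10-20 megatons)")
print(f"January 6 test:     ~{observed_kt[0]}-{observed_kt[1]} kt")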

But Wright also pointed out that the North might simply be using the term “hydrogen bomb” somewhat differently than we do in the West. If tritium, which is a radioactive isotope of hydrogen, is placed in the core of a standard uranium or plutonium atomic bomb, then when the bomb goes off, it will compress and ignite the tritium. In this case, the tritium fusion will emit a pulse of neutrons that each trigger a fission reaction in the surrounding material, ensuring a more efficient use of the fissile explosives.

“It’s not what people typically mean by a hydrogen bomb,” Wright said, “but it does use some amount of fusion as a way of making the fission more effective. So that may have been what they did.”

North Korea has also been developing their long-range missile system, and a smaller weapon like this could be more easily placed in a long-range missile than a full-sized, traditional hydrogen bomb.

However, as experts work to determine just what type of bomb the North tested, Wright argues there’s a more important fact to consider: It’s still a nuclear bomb. Whether it’s a fission bomb or a fusion bomb, he says, “It doesn’t really matter that much, to the extent that you’re still talking about nuclear weapons. If you develop a way to deliver them to a city, you’re talking catastrophe.”

To learn more about North Korea’s nuclear test, we recommend the following articles:
“North Korea nuclear: State claims first hydrogen bomb test,” by the BBC.

“Timeline: How North Korea went nuclear,” by CNN.

“Why is North Korea’s ‘hydrogen bomb’ test such a big deal?,” by the Washington Post.

Why MIRI Matters, and Other MIRI News

The Machine Intelligence Research Institute (MIRI) just completed its most recent round of fundraising, and with that Jed McCaleb wrote a brief post explaining why MIRI and their AI research is so important. You can find a copy of that message below, followed by MIRI’s January newsletter, which was put together by Rob Bensinger.

Jed McCaleb on Why MIRI Matters

A few months ago, several leaders in the scientific community signed an open letter pushing for oversight into the research and development of artificial intelligence, in order to mitigate the risks and ensure the societal benefit of the advanced technology. Researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks in this century.

Similarly, I believe we’ll see the promise of human-level AI come to fruition much sooner than we’ve fathomed. Its effects will likely be transformational — for the better if it is used to help improve the human condition, or for the worse if it is used incorrectly.

As AI agents become more capable, it becomes more important to analyze and verify their decisions and goals. MIRI’s research focuses on how to create highly reliable agents that can learn human values, and on the better decision-making processes needed to power these new technologies.

The past few years have seen a vibrant and growing AI research community. As the space continues to flourish, the need for collaboration will continue to grow as well. Organizations like MIRI that are dedicated to security and safety engineering help fill this need. And because MIRI is a nonprofit, its research is free from profit obligations. This independence in research is important because it will lead to safer and more neutral results.

By supporting organizations like MIRI, we’re putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good. For humanity’s benefit, we need to guarantee that AI systems can reliably pursue goals that are aligned with human values. If organizations like MIRI are able to help engineer this level of technological advancement and awareness in AI systems, imagine the endless possibilities for improving our world. It’s critical that we put the infrastructure in place to ensure that AI will be used to make people’s lives better. This is why I’ve donated to MIRI, and why I believe it’s a worthy cause that you should consider as well.

January 2016 Newsletter

Research updates

General updates

News and links