Davos 2016 – The State of Artificial Intelligence

An interesting discussion at Davos 2016 on the current state of artificial intelligence, featuring Stuart Russell, Matthew Grob, Andrew Moore, and Ya-Qin Zhang.

Dr. David Wright on North Korea’s Satellite

Earlier this month, Dr. David Wright, co-director of the Union of Concerned Scientists Global Security Program, wrote two posts about North Korea’s satellite launch. While North Korea isn’t currently thought to pose an existential risk with its weapons, any time nuclear weapons are involved, the situation has the potential to quickly escalate to something that could be catastrophic to the future of humanity. We’re grateful to Wright and the UCS for allowing us to share his posts here.

North Korea is Launching a Rocket Soon: What Do We Know About It?

North Korea has announced that it will launch a rocket sometime in the next two weeks to put a satellite in orbit for the second time. What do we know about it, and how worried should we be?


Fig.1. The Unha-3 ready to launch in April 2012. (Source: Sungwon Baik / VOA)

What We Know

North Korea has been developing rockets—both satellite launchers and ballistic missiles—for more than 25 years. Developing rockets requires flight testing them in the atmosphere, and the United States has satellite-based sensors and ground-based radars that allow it to detect flight testing essentially worldwide. So despite North Korea being highly secretive, it can’t hide such tests, and we know what rockets it has flight tested.

North Korea’s military has short-range missiles that can reach most of South Korea, and a longer-range missile—called Nodong in the West—that can reach parts of Japan. But it has yet to flight test any military missiles that can reach targets at distances greater than about 1,500 kilometers.

(It has two other ballistic missile designs—called the Musudan and KN-08 in the West—that it has exhibited in military parades on several occasions over the past few years, but has never flight tested. So we don’t know what their state of development is, but they can’t be considered operational without flight testing.)

North Korea’s Satellite Launcher

North Korea has attempted 5 satellite launches, starting in 1998, with only one success—in December 2012. While that launch put a small satellite into space, the satellite was apparently tumbling and North Korea was never able to communicate with it.

The rocket that launched the satellite in 2012 is called the Unha-3 (Galaxy-3) (Fig. 1). North Korea has announced locations of the splashdown zones for its upcoming launch, where the rocket stages will fall into the sea; since these are very similar to the locations of the zones for its 2012 launch, that suggests the launcher will also be very similar (Fig. 2).


Fig. 2. The planned trajectory of the upcoming launch. (Source: D Wright in Google Earth)

We know a lot about the Unha-3 from analyzing previous launches, especially after South Korea fished parts of the rocket out of the sea after the 2012 launch. It is about 30 m tall, has a launch mass of about 90 tons, and consists of 3 stages that use liquid fuel. A key point is that the two large lower stages rely on 1960s-era Scud-type engines and fuel, rather than the more advanced engines and fuel that countries such as Russia and China use. This is an important limitation on the capability of the rocket and suggests North Korea does not have access to, or has not mastered, more advanced technology.

(Some believe North Korea may have purchased a number of these more advanced engines from the Soviet Union. But it has never flight tested that technology, even in shorter range missiles.)

Because large rockets are highly complex technical systems, they are prone to failure. Just because North Korea was able to get everything to work in 2012, allowing it to orbit a satellite, that says very little about the reliability of the launcher, so it is unclear what the probability of a second successful launch is.

The Satellite

The satellite North Korea launched in 2012—the Kwangmyongsong-3, or “Bright Star 3”—is likely similar in size and capability (with a mass of about 100 kg) to the current satellite (also called Kwangmyongsong). The satellite is not designed to do much, since the goal of early satellite launches is learning to communicate with the satellite. It may send back photos from a small camera on board, but these would be too low resolution (probably hundreds of meters) to be useful for spying.

In 2012, North Korea launched its satellite into a “sun-synchronous orbit” (with an inclination of 97.4 degrees), which is an orbit commonly used for satellites that monitor the earth, such as for environmental monitoring. Its orbital altitude was about 550 km, somewhat higher than the International Space Station (which orbits at roughly 400 km) but lower than most satellites, which sit in higher orbits since atmospheric drag at low altitudes will slow a satellite and cause it to fall from orbit sooner. For North Korea, the altitude was limited by the capability of its launcher. We expect a similar orbit this time, although if the launcher has been modified to carry somewhat more fuel it might be able to carry the satellite to a higher altitude.
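For readers curious how that altitude relates to the 97.4-degree inclination, a rough sketch using the standard textbook formula for orbit-plane precession (assuming a circular orbit and the usual published values for Earth's oblateness term J2) recovers an inclination within a fraction of a degree of the figure quoted above:

```python
import math

# Back-of-the-envelope check (illustrative only): for a circular orbit at a
# given altitude, find the inclination whose J2-driven nodal precession
# matches the Sun's apparent motion (~360 degrees per 365.25 days),
# which is the defining condition of a sun-synchronous orbit.
MU = 398600.4418      # Earth's gravitational parameter, km^3/s^2
R_E = 6378.137        # Earth's equatorial radius, km
J2 = 1.08263e-3       # Earth's oblateness coefficient

def sun_sync_inclination(altitude_km):
    a = R_E + altitude_km                                    # semi-major axis, km
    n = math.sqrt(MU / a**3)                                 # mean motion, rad/s
    required_rate = math.radians(360.0 / 365.25) / 86400.0   # rad/s, eastward
    cos_i = -required_rate / (1.5 * J2 * (R_E / a)**2 * n)
    return math.degrees(math.acos(cos_i))

print(round(sun_sync_inclination(550), 1))   # ~97.6 degrees, close to the 97.4 reported
```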

The Launch Site and Flight Path

The launch will take place from the Sohae site near the western coast of North Korea (Fig. 2). It would be most efficient to launch due east so that the rocket gains speed from the rotation of the earth. North Korea launched its early flights in that direction but now launches south to avoid overflying Japan—threading the needle between South Korea, China, and the Philippines.

North Korea has modified the Sohae launch site since the 2012 launch. It has increased the height of the gantry that holds the rocket before launch, so that it can accommodate taller rockets, but I expect that extra height will not be needed for this rocket. It has also constructed a building on the launch pad that houses the rocket while it is being prepared for launch (which is a standard feature of modern launch sites around the world). This means we will not be able to watch the detailed launch preparations, which gave indications of the timing of the launch in 2012.

Satellite Launch or Ballistic Missile?

So, is this really an attempt to launch a satellite, or could it be a ballistic missile launch in disguise? Can you tell the difference?


Fig. 3. Trajectories for a long-range ballistic missile (red) and Unha-3 satellite launch (blue).

The U.S. will likely have lots of sensors—on satellites, in the air, and on the ground and sea—watching the launch, and it will be able to quickly tell whether or not it is really a satellite launch because the trajectories of a satellite launch and a ballistic missile are very different.

Figure 3 shows the early part of the trajectory of a typical liquid-fueled intercontinental ballistic missile (ICBM) with a range of 12,000 km (red) and the Unha-3 launch trajectory from 2012 (blue). They differ in shape and in the length of time the rocket engines burn. In this example, the ICBM engines burn for 300 seconds and the Unha-3 engines burn for nearly twice that long. The ICBM gets up to high speed much faster and then goes much higher.

Interestingly, the Unha-3’s longer burn time indicates that its upper stages were designed for use in a satellite launcher rather than a ballistic missile. So this rocket looks more like a satellite launcher than a ballistic missile.

Long-Range Missile Capability?

Of course, North Korea can still learn a lot from satellite launches about the technology it can use to build a ballistic missile, since the two types of rockets use the same basic technology. That is the source of the concern about these launches.

The range of a missile depends on the technology used and other factors, including the mass of its payload. Whether the Unha-3 could carry a nuclear warhead depends in part on how heavy a North Korean nuclear weapon is, which is a topic of ongoing debate. If the Unha were modified to carry a 1,000 kg warhead rather than a light satellite, the missile could have enough range to reach Alaska and possibly Hawaii, but might not be able to reach the continental U.S. (Fig. 4). If instead North Korea could reduce the warhead mass to around 500 kg, the missile would likely be able to reach large parts of the continental U.S.
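To see why payload mass matters so much for range, consider the ideal rocket equation: every extra kilogram of warhead reduces the total velocity the rocket can reach. The sketch below uses entirely hypothetical stage masses and exhaust velocities (chosen only to resemble a generic three-stage, roughly 90-ton liquid-fueled rocket, not actual Unha-3 data) to show how delta-v shrinks as the payload grows from a light satellite to a heavy warhead.

```python
import math

# Illustrative only: Tsiolkovsky rocket equation applied to a made-up
# three-stage rocket to show how a heavier payload eats into total delta-v
# (and hence range). None of these numbers describe the actual Unha-3.
STAGES = [
    # hypothetical (propellant mass kg, dry mass kg, effective exhaust velocity m/s)
    (60000, 7000, 2400),
    (14000, 2000, 2500),
    (3000,   500, 2700),
]

def total_delta_v(payload_kg):
    dv = 0.0
    # Work from the top stage down so the mass sitting on each stage
    # (payload plus all stages above it) is easy to accumulate.
    mass_above = payload_kg
    for prop, dry, ve in reversed(STAGES):
        m_full = mass_above + dry + prop
        m_empty = mass_above + dry
        dv += ve * math.log(m_full / m_empty)
        mass_above = m_full
    return dv

for payload in (100, 500, 1000):
    print(f"{payload:5d} kg payload -> ~{total_delta_v(payload) / 1000:.1f} km/s")
# prints roughly 10.8, 9.6, and 8.6 km/s for these made-up numbers:
# the heavier the payload, the less velocity (and range) is available.
```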

North Korea has not flight tested a ballistic missile version of the Unha or a reentry heat shield that would be needed to protect the warhead as it reentered the atmosphere. Because of its large size, such a missile is unlikely to be mobile, and assembling and fueling it at the launch site would be difficult to hide. Its accuracy would likely be on the order of many kilometers.


Fig. 4: Distances from North Korea. (Source: D Wright in Google Earth)

The bottom line is that North Korea is developing the technology it could use to build a ballistic missile with intercontinental range. Today it is not clear that it has a system capable of doing so or a nuclear weapon that is small enough to be delivered on it. It has shown, however, the capability to continue to make progress on both fronts.

The U.S. approach to dealing with North Korea in recent years through continued sanctions has not been effective in stopping this progress. It’s time for the U.S. to try a different approach, including direct U.S.-North Korea talks.

North Korea Successfully Puts Its Second Satellite in Orbit

North Korea launched earlier than expected, and successfully placed its second satellite into orbit.

The launch took place at 7:29 pm EST Saturday, Feb. 6, U.S. time, which was 8:59 am local time on Sunday in North Korea. North Korea had originally said its launch window would not start until Feb. 8. Apparently the rocket was ready and the weather was good for a launch.

The U.S. office that tracks objects in space, the Joint Space Operations Center (JSPOC), announced a couple hours later that it was tracking two objects in orbit—the satellite and the third stage of the launcher. The satellite was in a nearly circular orbit (466 x 501 km). The final stage maneuvered to put it in a nearly polar, sun-synchronous orbit, with an inclination of 97.5 degrees.
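Those tracking numbers are enough to work out the orbital period with the standard two-body relation; the quick sketch below is a generic sanity check, with nothing specific to this launch beyond the reported perigee and apogee altitudes.

```python
import math

# Convert the reported perigee/apogee altitudes into a semi-major axis and
# orbital period using the standard two-body relation T = 2*pi*sqrt(a^3/mu).
MU = 398600.4418          # Earth's gravitational parameter, km^3/s^2
R_E = 6378.137            # Earth's equatorial radius, km

perigee_alt, apogee_alt = 466.0, 501.0            # km, as reported
a = R_E + (perigee_alt + apogee_alt) / 2.0        # semi-major axis, km
period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60.0

print(f"semi-major axis ~{a:.0f} km, period ~{period_min:.1f} minutes")  # roughly 94 minutes
```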

Because the satellite orbit and other details of the launch were similar to those of North Korea’s last launch, in December 2012, this implies that the launch vehicle was also very similar.

This post from December 2012 allows you to see the launch trajectory in 3D using Google Earth.

South Korea is reporting that after the first stage burned out and was dropped from the rocket, it exploded before reaching the sea. This may have been intended to prevent it from being recovered and studied, as was the first stage of its December 2012 launch.

The satellite, called the Kwangmyongsong-4, is likely very similar to the satellite launched three years ago. It will likely not be known for several days whether, unlike the 2012 satellite, it can stop tumbling in orbit and communicate with the ground. It is apparently intended to stay in orbit for about 4 years.

If it can communicate with the Kwangmyongsong-4, North Korea will learn about operating a satellite in space. Even if not, it gained experience with launching and learned more about the reliability of its rocket systems.

For more information about the launch, see my earlier post.

Note added: Feb. 7, 1:00 am

The two orbiting objects, the satellite and the third-stage rocket body, show up in the NORAD catalog of space objects as numbers 41332 for the satellite and 41333 for the rocket body. (Thanks to Jonathan McDowell for supplying these.)

X-risk News of the Week: AAAI, Beneficial AI Research, a $5M Contest, and Nuclear Risks

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

The highlights of this week’s news are all about research. And as is so often the case, research brings hope. Research can help us cure disease, solve global crises, find cost-effective solutions to any number of problems, and so on. The research news this week gives hope that we can continue to keep AI beneficial.

First up this week was the AAAI conference. As was mentioned in an earlier post, FLI participated in the AAAI workshop, AI, Ethics, and Safety. Eleven of our grant winners presented their research to date, for an afternoon of talks and discussion that focused on building ethics into AI systems, ensuring safety constraints are in place, understanding how and when things could go wrong, ensuring value alignment between humans and AI, and much more. There was also a lively panel discussion about new ideas for future AI research that could help ensure AI remains safe and beneficial.

The next day, AAAI President Tom Dietterich (also an FLI grant recipient) delivered his presidential address with a focus on enabling more research into robust AI. He began with a Marvin Minsky quote, in which Minsky explained that when a computer encounters an error, it fails, whereas when the human brain encounters an error, it tries another approach. And with that example, Dietterich launched into his speech about the importance of robust AI and ensuring that an AI can address the various known and unknown problems it may encounter. While discussing areas in which AI development is controversial, he also made a point to mention his opposition to autonomous weapons, saying, “I share the concerns of many people that I think the development of autonomous offensive weapons, without a human in the loop, is a step that we should not take.”

AAAI also hosted a panel this week on the economic impact of AI, which included FLI Scientific Advisory Board members Nick Bostrom and Erik Brynjolfsson, as well as an unexpected appearance by FLI President Max Tegmark. As is typical of such discussions, there was a lot of concern about the future of jobs and how average workers will continue to make a living. However, TechRepublic noted that both Bostrom and Tegmark are hopeful that, if we plan appropriately, increased automation could greatly improve our standard of living. As TechRepublic reported:

“’Perhaps,’ Bostrom said, ‘we should strive for things outside the economic systems.’ Tegmark agreed. ‘Maybe we need to let go of the obsession that we all need jobs.’”

Also this week, IBM and the X Prize Foundation announced a $5 million collaboration, in which IBM is encouraging developers and researchers to use Watson as the base for creating “jaw-dropping, awe-inspiring” new technologies that will be presented during TED2020. There will be interim prizes for projects leading up to that event, while the final award will be presented after the TED2020 talks. As they explain on the X Prize page:

“IBM believes this competition can accelerate the creation of landmark breakthroughs that deliver new, positive impacts to peoples’ lives, and the transformation of industries and professions.

We believe that cognitive technologies like Watson represent an entirely new era of computing, and that we are forging a new partnership between humans and technology that will enable us to address many of humanity’s most significant challenges — from climate change, to education, to healthcare.”

Of course, not all news can be good news, and so the week’s highlights end with a reminder about the increasing threat of nuclear weapons. Last week, the Union of Concerned Scientists published a worrisome report about the growing concern that a nuclear war is becoming more likely. Among other things, the report considers the deteriorating relationship between Russia and the U.S., as well as the possibility that China may soon implement a hair-trigger-alert policy for their own nuclear missiles.

David Wright, co-director of the UCS Global Security Program, recently wrote a blog post about the report. Referring to first the U.S.-Russia concern and then the Chinese nuclear policy, he wrote:

“A state of heightened tension changes the context of a false alarm, should one occur, and tends to increase the chance that the warning will be seen as real. Should China’s political leaders agree with this change, it would be a dangerous shift that would increase the chance of an accidental or mistaken launch at the United States.”

Update: Another FLI grant winner, Dr. Wendell Wallach, made news this week for his talk at the American Association for the Advancement of Science annual meeting, in which he put forth a compromise for addressing the issue of autonomous weapons. According to Defense One, Wallach laid out three ideas:

“1) An executive order from the president proclaiming that lethal autonomous weapons constitute a violation of existing international humanitarian law.”

“2) Create an oversight and governance coordinating committee for AI.”

“3) Direct 10 percent of the funding in artificial intelligence to studying, shaping, managing and helping people adapt to the ‘societal impacts of intelligent machines.’”

AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research

The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.


Phoenix Convention Center where AAAI 2016 is taking place.

The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was the talk by Toby Walsh, titled “Why the Technological Singularity May Never Happen.”

Walsh explained that, though general knowledge has increased, human capacity for learning has remained relatively consistent for a very long time. “Learning a new language is still just as hard as it’s always been,” he offered as an example. If we can’t teach ourselves how to learn faster, he doesn’t see any reason to believe that machines will be any more successful at the task.

He also argued that even if we assume we can improve intelligence, there’s no reason to assume it will increase exponentially, leading to an intelligence explosion. He believes it is just as possible that each generation of machines will improve by only half as much as the one before, in which case intelligence would keep increasing but would approach a limit rather than explode.

Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk with concerns about both weapons and job loss.

Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”

Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”

As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can occur without an intelligence explosion.

The afternoon talks were all dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics ranged from ensuring value alignment between humans and AI to safety constraints and security evaluation, and much more.

While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research related to image recognition software that could potentially be used more immediately.

Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a human-imperceptible way that causes the software to misidentify it. Current methods rely on machines accessing huge quantities of reference images to learn what any given image is. However, even the smallest perturbation of the input data can lead to large errors. Li’s own research looks at new ways for machines to recognize an image that limit such errors.
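The general phenomenon Li described can be reproduced on a toy model. The sketch below applies a generic fast-gradient-style perturbation to a tiny logistic-regression “classifier” (an illustration of adversarial perturbations in general, not a reconstruction of Li’s method): a change of a few percent to every pixel, far too small to notice in a real photograph, is enough to push the model across its decision boundary.

```python
import numpy as np

# Toy illustration of an adversarial perturbation (a generic fast-gradient-style
# attack on a tiny linear classifier, not Fuxin Li's research): a small,
# hard-to-see change to every pixel flips the model's decision.
rng = np.random.default_rng(0)
w = rng.normal(size=16)            # fixed "trained" weights for a flattened 4x4 image
x = rng.uniform(0, 1, size=16)     # an arbitrary "image" with pixel values in [0, 1]
b = 0.3 - w @ x                    # bias chosen so the clean image sits just inside class 1

def prob_class_1(img):
    return 1.0 / (1.0 + np.exp(-(w @ img + b)))

# For logistic regression the gradient of the loss w.r.t. the input is
# (p - y_true) * w, so stepping along its sign raises the loss the fastest.
y_true = 1.0
epsilon = 0.05                     # per-pixel change of 5% of the pixel range
grad = (prob_class_1(x) - y_true) * w
x_adv = np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

print(f"clean image:     P(class 1) = {prob_class_1(x):.2f}")
print(f"perturbed image: P(class 1) = {prob_class_1(x_adv):.2f}")  # drops below 0.5
```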

Rubinstein’s focus is geared more toward security. The research he presented at the workshop involves facial recognition, but goes a step further, examining how small changes made to an image of one face can lead systems to confuse it with the image of someone else.

Fuxin Li


Ben Rubinstein

Future of beneficial AI research panel: Francesca Rossi, Nate Soares, Tom Dietterich, Roman Yampolskiy, Stefano Ermon, Vincent Conitzer, and Benjamin Rubinstein.

The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI President Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who also spoke earlier in the day.

Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?

Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.


Francesca Rossi and Nate Soares


Tom Dietterich and Roman Yampolskiy

After an hour of discussion, many suggestions for new research ideas had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.

We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”


Congratulations to Peter Norvig and Stuart Russell!

Who’s to Blame (Part 3): Could Autonomous Weapon Systems Navigate the Law of Armed Conflict?

“Robots won’t commit war crimes.  We just have to program them to follow the laws of war.”  This is a rather common response to the concerns surrounding autonomous weapons, and it has even been advanced as a reason that robot soldiers might be less prone to war crimes than human soldiers.  But designing such autonomous weapon systems (AWSs) is far easier said than done.  True, if we could design and program AWSs that always obeyed the international law of armed conflict (LOAC), then the issues raised in the previous segment of this series — which suggested the need for human direction, monitoring, and control of AWSs — would be completely unfounded. But even if such programming prowess is possible, it seems unlikely to be achieved anytime soon. Instead, we need to be prepared for powerful AWSs that may not recognize where the lines blur between what is legal and reasonable during combat and what is not.

While the basic LOAC principles seem straightforward at first glance, their application in any given military situation depends heavily on the specific circumstances in which combat takes place. And the difference between legal and illegal acts can be blurry and subjective.  It therefore would be difficult to reduce the laws and principles of armed conflict to a definite and programmable form that could be encoded into an AWS and from which the AWS could consistently make battlefield decisions that comply with the laws of war.

Four core principles guide LOAC: distinction, military necessity, unnecessary suffering, and proportionality.  Distinction means that participants in an armed conflict must distinguish between military and civilian personnel (and between military and civilian objects) and limit their attacks to military targets.  It follows that an attack must be justified by military necessity–i.e., the attack, if successful, must give the attacker some military advantage.  The next principle, as explained by the International Committee of the Red Cross, is that combatants must not “employ weapons, projectiles material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.”  Unlike the other core principles, the principle of unnecessary suffering generally protects combatants to the same extent as civilians.  Finally, proportionality dictates that the harm done to civilians and civilian property must not be excessive in light of the military advantage expected to be gained by an attack.

For a number of reasons, it would be exceedingly difficult to ensure that an AWS would consistently comply with these requirements if it were permitted to select and engage targets without human input.  One reason is that it would be difficult for an AWS to gather all objective information relevant to making determinations of the core LOAC principles.  For example, intuition and experience might allow a human soldier to infer from observing minute details of his surroundings–such as seeing a well-maintained children’s bicycle or detecting the scent of recently cooked food–that civilians may be nearby.  It might be difficult to program an AWS to pick up on such subtle clues, even though those clues might be critical to assessing whether a targeted structure contains civilians (relevant to distinction and necessity) or whether engaging nearby combatants might result in civilian casualties (relevant to proportionality).

But there is an even more fundamental and vexing challenge in ensuring that AWSs comply with LOAC: even if an AWS were somehow able to obtain all objective information relevant to the LOAC implications of a potential military engagement, all of the core LOAC principles are subjective to some degree.  For example, the operations manual of the US Air Force Judge Advocate General’s Office states that “[p]roportionality in attack is an inherently subjective determination that will be resolved on a case-by-case basis.”  This suggests that proportionality is not something that can simply be reduced to a formula or otherwise neatly encoded so that an AWS would never launch disproportionate attacks.  It would be even more difficult to formalize the concept of “military necessity,” which is fiendishly difficult to articulate without getting tautological and/or somehow incorporating the other LOAC principles.

The principle of distinction might seem fairly objective–soldiers are fair game, civilians are not.  But it can even be difficult–sometimes exceptionally so–to determine whether a particular individual is a combatant or a civilian.  The Geneva Conventions state that civilians are protected from attack “unless and for such time as they take a direct part in hostilities.”  But how “direct” must participation in hostilities be before a civilian loses his or her LOAC protection?  A civilian in an urban combat area who picks up a gun and aims it at an enemy soldier clearly has forfeited his civilian status.  But what about a civilian in the same combat zone who is acting as a spotter?  Who is transporting ammunition from a depot to the combatants’ posts?  Who is repairing an enemy Jeep?  Do these answers change if the combat zone is in a desert instead of a city?  Given that humans frequently disagree on where the boundary between civilians and combatants should lie, it would be difficult to agree on an objective framework that would allow an AWS to accurately distinguish between civilians and combatants in the myriad scenarios it might face on the battlefield.

Of course, humans can also have great difficulty in making such determinations–and humans have been known to intentionally violate LOAC’s core principles, a rather significant drawback to which AWSs might be more resistant.  But when a human commits a LOAC violation, that human being can be brought to justice and punished. Who would be held responsible if an AWS attack violates those same laws?  As of now, that is far from clear.  That accountability problem will be the subject of the next entry in this series.

X-risk News of the Week: Nuclear Winter and a Government Risk Report

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

The big news this week landed squarely in the x-risk end of the spectrum.

First up was a New York Times op-ed titled “Let’s End the Peril of a Nuclear Winter,” written by climate scientists Drs. Alan Robock and Owen Brian Toon. In it, they describe the horrors of nuclear winter — the frigid temperatures, the starvation, and the mass deaths — that could terrorize the entire world if even a small nuclear war broke out in one tiny corner of the globe.

Fear of nuclear winter was one of the driving forces that finally led leaders of Russia and the US to agree to reduce their nuclear arsenals, and concerns about nuclear war subsided once the Cold War ended. However, recently, leaders of both countries have sought to strengthen their arsenals, and the threat of a nuclear winter is growing again. While much of the world struggles to combat climate change, the biggest risk could actually be that of plummeting temperatures if a nuclear war were to break out.

In an email to FLI, Robock said:

“Nuclear weapons are the greatest threat that humans pose to humanity.  The current nuclear arsenal can still produce nuclear winter, with temperatures in the summer plummeting below freezing and the entire world facing famine.  Even a ‘small’ nuclear war, using less than 1% of the current arsenal, can produce starvation of a billion people.  We have to solve this problem so that we have the luxury of addressing global warming.”

Also this week, Director of National Intelligence James Clapper presented the Worldwide Threat Assessment of the US Intelligence Community for 2016 to the Senate Armed Services Committee. The document is 33 pages of potential problems the government is most concerned about in the coming year, a few of which fall into the category of existential risks:

  1. The Internet of Things (IoT). Though this doesn’t technically pose an existential risk, it does have the potential to impact quality of life and some of the freedoms we typically take for granted. The report states: “In the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.”
  2. Artificial Intelligence. Clapper’s concerns are broad in this field. He argues: “Implications of broader AI deployment include increased vulnerability to cyberattack, difficulty in ascertaining attribution, facilitation of advances in foreign weapon and intelligence systems, the risk of accidents and related liability issues, and unemployment. The increased reliance on AI for autonomous decision making is creating new vulnerabilities to cyberattacks and influence operations. AI systems are susceptible to a range of disruptive and deceptive tactics that might be difficult to anticipate or quickly understand. Efforts to mislead or compromise automated systems might create or enable further opportunities to disrupt or damage critical infrastructure or national security networks.”
  3. Nuclear. Under the category of Weapons of Mass Destruction (WMD), Clapper dedicated the most space to concerns about North Korea’s nuclear weapons. However, he also highlighted concerns about China’s work to modernize its nuclear weapons, and he argues that Russia violated the INF Treaty when it developed a ground-launched cruise missile.
  4. Genome Editing. Interestingly, gene editing was also listed in the WMD category. As Clapper explains, “Research in genome editing conducted by countries with different regulatory or ethical standards than those of Western countries probably increases the risk of the creation of potentially harmful biological agents or products.” Though he doesn’t explicitly refer to the CRISPR-Cas9 system, he does worry that the low cost and ease-of-use for new technologies will enable “deliberate or unintentional misuse” that could “lead to far reaching economic and national security implications.”

The report, though long, is an easy read, and it’s always worthwhile to understand what issues are motivating the government’s actions.

 

Given our new series by Matt Scherer on the legal complications of anticipated AI and autonomous weapons developments, the big news this week should have been the headlines claiming that the federal government now considers AI drivers to be real drivers. Scherer, however, argues this is bad journalism. He provides his interpretation of the NHTSA letter in his recent blog post, “No, the NHTSA did not declare that AIs are legal drivers.”

While the headlines of the last few days may have veered toward x-risk, this week also marks the start of the 30th annual Association for the Advancement of Artificial Intelligence (AAAI) Conference. For almost a week, AI researchers will convene in Phoenix to discuss their developments and breakthroughs, and on Saturday, FLI grantees will present some of their research at the AI Ethics and Society Workshop. This is expected to be an event full of hope and excitement about the future!

 

Who’s to Blame (Part 2): What is an “autonomous” weapon?

The following is the second in a series about the limited legal oversight of autonomous weapons. The first segment can be found here.


Source: Peanuts by Charles Schulz, January 31, 2016 Via @GoComics

Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous” means in the weapons context.  It is, after all, the presence of “autonomy” that will distinguish AWSs from earlier weapon technologies.

Most dictionary definitions of “autonomy” focus on the presence of free will or freedom of action.  These are affirmative definitions, stating what autonomy is.  Some dictionary definitions approach autonomy from a different angle, defining it not by the presence of freedom of action, but rather by the absence of external constraints on that freedom (e.g., “the state of existing or acting separately from others”).  This latter approach is more useful in the context of weapon systems, since the existing literature on AWSs seems to use the term “autonomous” as referring to a weapon system’s ability to operate free from human influence and involvement.

Existing AWS commentaries seem to focus on three general methods by which humans can govern an AWS’s actions.  This essay will refer to those methods as direction, monitoring, and control.  A weapon system’s “autonomy” therefore refers to the degree to which the weapon system operates free from human direction, monitoring, and/or control.

Human direction, in this context, refers to the extent to which humans specify the parameters of a weapon system’s operation, from the initial design and programming of the system all the way to battlefield orders regarding the selection of targets and the timing and method of attack.  Monitoring refers to the degree to which humans actively observe and collect information on a weapon system’s operations, whether through a live source such as a video feed or through regular reviews of data regarding a weapon system’s operations.  And control is the degree to which humans can intervene in real time to change what a weapon system is currently doing, such as by actively controlling the system’s physical movement and combat functions or by shutting it down completely if the system malfunctions.  Existing commentaries on “autonomy” in weapon systems all seem to invoke at least one of these three concepts, though they may use different words to refer to those concepts.

The operation of modern military drones such as the MQ-1 Predator and MQ-9 Reaper illustrates how these concepts work in practice.  A Predator or Reaper will not take off, select a target, or launch a missile without direct human input.  Such drones thus are completely dependent on human direction.  While a drone, like a commercial airliner on auto-pilot, may steer itself during non-mission-critical phases of flight, human operators closely monitor the drone throughout each mission both through live video feeds from cameras mounted on the drone and through flight data transmitted by the drone in real time.  And, of course, humans directly (though remotely) control the drone during all mission-critical phases.  Indeed, if the communications link that allows the human operator to control the drone fails, “the drone is programmed to fly autonomously in circles, or return to base, until the link can be reconnected.”  The dominating presence of human direction, monitoring, and control mean that a drone is, in effect, “little more than a super-fancy remote-controlled plane.”  The human-dependent nature of drones makes the task of piloting a drone highly stressful and labor-intensive–so much so that recruitment and retention of drone pilots has proven to be a major challenge for the U.S. Air Force.  That, of course, is part of why militaries might be tempted to design and deploy weapon systems that can direct themselves and/or that do not require constant human monitoring or control.

Direction, monitoring, and control are very much interrelated, with monitoring and control being especially intertwined.  During an active combat mission, human monitoring must be accompanied by human control (and vice versa) to act as an effective check on a weapon system’s operations.  (For that reason, commentators often seem to combine monitoring and control into a single broader concept, such as “oversight” or, my preferred term, “supervision.”)  Likewise, direction is closely related to control; an AWS could not be given new orders (i.e., direction) by a human commander if the AWS was not equipped with mechanisms allowing for human control of its operations.  Such an AWS would only be human-directed in terms of its initial programming.

Particularly strong human direction can also reduce the need for monitoring and control, and vice versa.  A weapon system that is subject to complete human direction in terms of the target, timing, and method of attack (and that has no ability to alter those parameters) has no more autonomy than fire-and-forget guided missiles, a technology that has been available for decades.  And a weapon system subject to constant real-time human monitoring and control may have no more practical autonomy than the remotely piloted drones that are already in widespread military use.

Consequently, the strongest concerns relate to weapon systems that are “fully autonomous”–that is, weapon systems that can select and engage targets without specific orders from a human commander and operate without real-time human supervision.  A 2015 Human Rights Watch (HRW) report, for instance, defines “fully autonomous weapons” as systems that lack meaningful human direction regarding the selection of targets and delivery of force and whose human supervision is so limited that humans are effectively “out-of-the-loop.”  A directive issued by the United States Department of Defense (DoD) in 2012 similarly defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.”

These sources also recognize the existence of weapon systems with lower levels of autonomy.  The DoD directive covers “semi-autonomous weapons systems” that are “intended to only engage individual targets or specific target groups that have been selected by a human operator.”  Such systems must be human-directed in terms of target selection, but could be largely free from human supervision and can even be self-directed with respect to the means and timing of attack.  The same directive discusses “human-supervised” AWSs that, while capable of fully autonomous operation, are “designed to provide human operators with the ability to intervene and terminate engagements.”  HRW similarly distinguishes fully autonomous weapons from those with a human “on the loop,” meaning AWSs that “can select targets and deliver force under the oversight of a human operator who can override the robots’ actions.”


In sum, “autonomy” in weapon systems refers to the degree to which the weapon system operates free from meaningful human direction, monitoring, and control.  Weapon systems that operate without those human checks on their autonomy would raise unique legal issues if those systems’ operations lead to violations of international law.  Those legal challenges will be the subject of the next post in this series.

This segment was originally posted on the blog, Law and AI.



MIRI’s February 2016 Newsletter

This post originally comes from MIRI’s website.

Research updates

General updates

  • Fundraiser and grant successes: MIRI will be working with AI pioneer Stuart Russell and a to-be-determined postdoctoral researcher on the problem of corrigibility, thanks to a $75,000 grant by the Center for Long-Term Cybersecurity.

News and links

X-risk News of the Week: Human Embryo Gene Editing

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

If you keep up with science news at all, then you saw the headlines splashed all over news sources on Monday: The UK has given researchers at the Francis Crick Institute permission to edit the genes of early-stage human embryos.

This is huge news, not only in genetics and biology fields, but for science as a whole. No other researcher has ever been granted permission to perform gene editing on viable human embryos before.

The usual fears of designer babies and slippery slopes popped up, but as most of the general news sources reported, those fears are relatively unwarranted for this research. In fact, this project, which is led by Dr. Kathy Niakan, could arguably be closer to the existential hope side of the spectrum.

Niakan’s objective is to try to understand the first seven days of embryo development, and she’ll do so by using CRISPR to systematically sweep through genes in embryos that were donated from in vitro fertilization (IVF) procedures. While research in mice and other animals has given researchers an idea of the roles different genes play at those early stages of development, there are many genes that are uniquely human and can’t be studied in other animals. Many causes of infertility and miscarriage are thought to arise in some of those genes during those very early stages of development, but we can only determine that through this kind of research.

Niakan explained to the BBC, “We would really like to understand the genes needed for a human embryo to develop successfully into a healthy baby. The reason why it is so important is because miscarriages and infertility are extremely common, but they’re not very well understood.”

It may be hard to see how preventing miscarriages could be bad, but this is a controversial research technique under normal circumstances, and Niakan’s request for approval came on the heels of human embryo research that did upset the world.

Last year, outrage swept through the scientific community after scientists in China chose to skip proper approval processes to perform gene-editing research on nonviable human embryos. Many prominent scientists in the field, including FLI’s Scientific Advisory Board Member George Church, responded by calling for a temporary moratorium on using the CRISPR/Cas-9 gene-editing tool in human embryos that would be carried to term.

An important distinction to make here is that Dr. Niakan went through all of the proper approval channels to start her research. Though the UK’s approval process isn’t quite as stringent as that in the US – which prohibits all research on viable embryos – the Human Fertilisation and Embryology Authority, which is the approving body, is still quite strict, insisting, among other things, that the embryos be destroyed after 14 days to ensure they can’t ever be taken to term. The team will also only use embryos that were donated with full consent by the IVF patients.

Max Schubert, a doctoral candidate in Dr. George Church’s lab at Harvard, explained that one of the reasons for the temporary moratorium was to give researchers time to study the effects of CRISPR first to understand how effective and safe it truly is. “I think [this] represents the kind of work that you need to do to understand the risks that those scientists are concerned about,” said Schubert.

John Min, also a PhD candidate in Dr. Church’s lab, pointed out that the knowledge we could gain from this research will very likely lead to medications and drugs that can be used to help prevent miscarriages, and that the final treatment could very possibly not involve any type of gene editing at all. This would eliminate, or at least limit, concerns about genetically modified humans.

Said Min, “This is a case that illustrates really well the potential of CRISPR technology … CRISPR will give us the answers to questions much more cheaply and much faster than any other existing technology.”


Who’s to Blame (Part 1): The Legal Vacuum Surrounding Autonomous Weapons

The year is 2020 and intense fighting has once again broken out between Israel and Hamas militants based in Gaza.  In response to a series of rocket attacks, Israel rolls out a new version of its Iron Dome air defense system.  Designed in a huge collaboration involving defense companies headquartered in the United States, Israel, and India, this third generation of the Iron Dome has the capability to act with unprecedented autonomy and has cutting-edge artificial intelligence technology that allows it to analyze a tactical situation by drawing from information gathered by an array of onboard sensors and a variety of external data sources.  Unlike prior generations of the system, the Iron Dome 3.0 is designed not only to intercept and destroy incoming missiles, but also to identify and automatically launch a precise, guided-missile counterattack against the site from where the incoming missile was launched.  The day after the new system is deployed, a missile launched by the system strikes a Gaza hospital far removed from any militant activity, killing scores of Palestinian civilians. Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable.  Unfortunately, no one can agree on who that is…

Much has been made in recent months and years about the risks associated with the emergence of artificial intelligence (AI) technologies and, with it, the automation of tasks that once were the exclusive province of humans.  But legal systems have not yet developed regulations governing the safe development and deployment of AI systems or clear rules governing the assignment of legal responsibility when autonomous AI systems cause harm.  Consequently, it is quite possible that many harms caused by autonomous machines will fall into a legal and regulatory vacuum.  The prospect of autonomous weapons systems (AWSs) throws these issues into especially sharp relief.  AWSs, like all military weapons, are specifically designed to cause harm to human beings—and lethal harm, at that.  But applying the laws of armed conflict to attacks initiated by machines is no simple matter.

The core principles of the laws of armed conflict are straightforward enough.  Those most important to the AWS debate are: attackers must distinguish between civilians and combatants; they must strike only when it is actually necessary to a legitimate military purpose; and they must refrain from an attack if the likely harm to civilians outweighs the military advantage that would be gained.  But what if the attacker is a machine?  How can a machine make the seemingly subjective determination regarding whether an attack is militarily necessary?  Can an AWS be programmed to quantify whether the anticipated harm to civilians would be “proportionate?”  Does the law permit anyone other than a human being to make that kind of determination?  Should it?

But the issue goes even deeper than simply determining whether the laws of war can be encoded into the AI components of an AWS.  Even if everyone agreed that a particular AWS attack constituted a war crime, would our sense of justice be satisfied by “punishing” that machine?  I suspect that most people would answer that question with a resounding “no.”  Human laws demand human accountability.  Unfortunately, as of right now, there are no laws at the national or international level that specifically address whether, when, or how AWSs can be deployed, much less who (if anyone) can be held legally responsible if an AWS commits an act that violates the laws of armed conflict.  This makes it difficult for those laws to have the deterrent effect that they are designed to have; if no one will be held accountable for violating the law, then no one will feel any particular need to ensure compliance with the law.  On the other hand, if there are human(s) with a clear legal responsibility to ensure that an AWS’s operations comply with the laws of war, then horrors such as the hospital bombing described in the intro to this essay would be much less likely to come to fruition.

So how should the legal voids surrounding autonomous weapons–and for that matter, AI in general–be filled?  Over the coming weeks and months, that question–along with the other questions raised in this essay–will be examined in greater detail on the FLI website and on the Law and AI blog.  Stay tuned.

The next segment of this series is scheduled for February 10.

The original post can be found at Law and AI.

Nuclear Warmongering Is Back in Fashion

“We should not be surprised that the Air Force and Navy think about actually employing nuclear weapons rather than keeping them on the shelf and assuming that will be sufficient for deterrence.”

This statement was made by Adam Lowther, a research professor at the Air Force Research Institute, in an article for The National Interest, in which he attempts to convince readers that, as the title says, “America Still Needs Its Nukes.” The comment is strikingly similar to one made by Donald Trump’s spokesperson, who said, “What good does it do to have a good nuclear triad, if you’re afraid to use it?”

Lowther wrote this article as a rebuttal to people like former Defense Secretary William Perry, who have been calling for a reduction of our nuclear arsenal. However, his arguments in support of his pro-nuclear weapons stance — and of his frighteningly pro-nuclear war stance — do not take into account some of the greatest concerns about having such a large nuclear arsenal.

Among the biggest issues is simply that, yes, a nuclear war would be bad. First, it’s nearly impossible to launch a nuclear strike without killing innocent civilians. Likely millions of innocent civilians. The two atomic bombs dropped on Japan in WWII killed approximately 100,000 people. Modern hydrogen bombs are 10 to 1000 times more powerful, and a single strategically targeted bomb can kill millions.

Then, we still have to worry about the aftermath. Recent climate models have shown that a full-scale nuclear war might put enough smoke into the upper atmosphere that it could spread around the globe and cause temperatures to plummet by as much as 40 degrees Fahrenheit for up to a decade. People around the world who survived the war – or who weren’t even a part of it – would likely succumb to starvation, hypothermia, disease, or desperate, armed gangs roving for food. But even for a small nuclear war — the kind that could potentially erupt between India and Pakistan — climate models predict that death tolls could reach 1 billion worldwide. Lowther insists that the military spends a significant amount of time studying war games, but how much of that time is spent considering the hundreds of millions of Americans who might die as a result of nuclear winter? Or, as Dr. Alan Robock calls it, self-assured destruction.

A nuclear war could be horrifying, and preventing one should be a constant goal.

This brings up another point that Max Tegmark mentions in the comments section of the article:

“To me, a key question is this, which he never addresses: What is the greatest military threat to the US? A deliberate nuclear attack by Russia/China, or a US-Russia nuclear war starting by accident, as has nearly happened many times in the past? If the latter, then downsizing our nuclear arsenal will make us all safer.”

Does upgrading our nuclear arsenal really make us safer, as Lowther argues? Many people, Perry and Tegmark included, argue that spending $1 trillion to upgrade our nuclear weapons arsenal would actually make us less safe, by inadvertently increasing our chances of nuclear war.

And apparently the scientists behind the Doomsday Clock agree. The Bulletin of the Atomic Scientists, which runs the Doomsday Clock, announced today that the clock would remain set at three minutes to midnight. In its statement about this decision, the Bulletin reminded readers that the clock is a metaphor for the existential risks that pose a threat to the planet. As the Bulletin said,

“Three minutes (to midnight) is too close. Far too close. We, the members of the Science and Security Board of the Bulletin of the Atomic Scientists, want to be clear about our decision not to move the hands of the Doomsday Clock in 2016: That decision is not good news, but an expression of dismay that world leaders continue to fail to focus their efforts and the world’s attention on reducing the extreme danger posed by nuclear weapons and climate change.

“When we call these dangers existential, that is exactly what we mean: They threaten the very existence of civilization and therefore should be the first order of business for leaders who care about their constituents and their countries.”

According to CNN, the Bulletin believes the best way to get the clock to move back would be to spend less on nuclear arms, re-energize the effort for disarmament, and engage more with North Korea.

In what one commenter criticizes as a “bait-and-switch”, Lowther refers to people who make these arguments as “abolitionists,” whom he treats as crusading for a total ban against all nuclear weapons. The truth is more nuanced and interesting. While some groups do indeed call for a ban on nuclear weapons, a large majority of experts are simply advocating for making the world a safer place by: 1) reducing the number of nuclear weapons to a number that will provide sufficient deterrence, and 2) eliminating hair-trigger alert — both in an effort to decrease the chances of an accidental nuclear war. Lowther insists that he and the military don’t maintain a Cold-War mindset because they’ve been so focused on Islamic militants. However, it is precisely his belief that we should not rule out the possibility of using nuclear weapons that is the Cold-War mindset that concerns most people.

As Dr. David Wright from the Union of Concerned Scientists told FLI in an earlier interview:

“Today, nuclear weapons are a liability. They don’t address the key problems that we’re facing, like terrorism … and by having large numbers of them around … you could have a very rapid cataclysm that people are … reeling from forever.”

An Explosion of CRISPR Developments in Just Two Months

 

A Battle Is Waged

A battle over CRISPR is raging through the halls of justice. Almost literally. Two of the key players in the development of the CRISPR technology, Jennifer Doudna and Feng Zhang, have turned to the court system to determine which of them should receive patents for the discovery of the technology. The fight went public in January and was amplified by the release of an article in Cell that many argued presented a one-sided version of the history of CRISPR research. Yet the most remarkable thing about CRISPR is not its contested history, but how rapidly progress in the field is accelerating.


A CRISPR Explosion

CRISPR, which stands for clustered regularly-interspaced short palindromic repeats, refers to segments of DNA that form part of the immune systems of prokaryotes. The system relies on the Cas9 enzyme* and guide RNAs to find specific, problematic segments of a gene and cut them out. Just three years ago, researchers discovered that this same technique could be applied to human cells. As the accuracy, efficiency, and cost-effectiveness of the system became more and more apparent, researchers and pharmaceutical companies jumped on the technique, modifying it, improving it, and testing it on different genetic issues.

Then, in 2015, CRISPR really exploded onto the scene, earning recognition as the top scientific breakthrough of the year by Science Magazine. But not only is the technology not slowing down, it appears to be speeding up. In just two months — from mid-November, 2015 to mid-January, 2016 — ten major CRISPR developments (including the patent war) have grabbed headlines. More importantly, each of these developments could play a crucial role in steering the course of genetics research.

 

Malaria



CRISPR made big headlines in late November of 2015, when researchers announced they could possibly eliminate malaria by using the gene-editing technique to start a gene drive in mosquitos. A gene drive occurs when a preferred version of a gene replaces the unwanted version in every case of reproduction, overriding Mendelian genetics, which says that each of the two copies of a gene should have an equal chance of being passed on to the next generation. Gene drives had long been a theory, but there was no practical way to apply it. Then along came CRISPR. With this new technology, researchers at UC campuses in Irvine and San Diego were able to create an effective gene drive against malaria in mosquitos in their labs. Because mosquitos are known to transmit malaria, a gene drive in the wild could potentially eradicate the disease very quickly. More research is necessary, though, to ensure the effectiveness of the technique and to try to prevent any unanticipated negative effects that could occur if we permanently alter the genes of a species.
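To get a feel for why a gene drive spreads so much faster than an ordinarily inherited gene, here is a minimal, hypothetical simulation sketch (it is not based on the UC researchers’ models). It assumes a randomly mating population, no fitness costs, and perfect copying of the drive in carriers, and simply tracks the engineered allele’s frequency across generations under Mendelian inheritance versus a gene drive:

```python
import random

def gamete(genotype, drive_active):
    """Return True if this parent transmits the engineered allele."""
    a, b = genotype
    if a and b:
        return True
    if not a and not b:
        return False
    # Heterozygote: a gene drive copies itself onto the other chromosome,
    # so transmission is (nearly) guaranteed; Mendelian inheritance is 50/50.
    return True if drive_active else random.random() < 0.5

def simulate(pop_size=5000, generations=12, drive_active=True, initial_carriers=100):
    # Start with a few heterozygous carriers in an otherwise wild-type population.
    pop = [(True, False)] * initial_carriers + [(False, False)] * (pop_size - initial_carriers)
    freqs = []
    for _ in range(generations):
        freqs.append(sum(a + b for a, b in pop) / (2 * pop_size))
        # Each offspring receives one allele from each of two randomly chosen parents.
        pop = [(gamete(random.choice(pop), drive_active),
                gamete(random.choice(pop), drive_active))
               for _ in range(pop_size)]
    return freqs

print("Mendelian :", [f"{f:.2f}" for f in simulate(drive_active=False)])
print("Gene drive:", [f"{f:.2f}" for f in simulate(drive_active=True)])
```

In this toy model the ordinary allele hovers near its starting frequency, while the drive version climbs toward 100% within a handful of generations, which is exactly why researchers are so cautious about ever releasing one into the wild.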

 

Muscular Dystrophy

A few weeks later, just as 2015 was coming to an end, the New York Times reported that three different groups of researchers had successfully used CRISPR in mice to treat Duchenne muscular dystrophy (DMD), which, though rare, is among the most common fatal genetic diseases. With DMD, boys have a gene mutation that prevents the creation of a specific protein necessary to keep muscles from deteriorating. Patients are typically in wheelchairs by the time they’re ten, and they rarely live past their twenties due to heart failure. Scientists have long hoped this disease would be well suited to gene therapy, but locating and removing the problematic DNA has proven difficult. In a new effort, researchers loaded CRISPR onto a harmless virus and injected it either into mouse fetuses or into diseased mice to remove the mutated section of the gene. While the DMD mice didn’t achieve the same levels of muscle mass seen in the control mice, they still showed significant improvement.

Writing for Gizmodo, George Dvorsky said, “For the first time ever, scientists have used the CRISPR gene-editing tool to successfully treat a genetic muscle disorder in a living adult mammal. It’s a promising medical breakthrough that could soon lead to human therapies.”

 

Blindness

Only a few days after the DMD story broke, researchers from the Cedars-Sinai Board of Governors Regenerative Medicine Institute announced progress they’d made treating retinitis pigmentosa, an inherited retinal degenerative disease that causes blindness. Using the CRISPR technology on affected rats, the researchers were able to clip the problematic gene, which, according to the abstract in Molecular Therapy, “prevented retinal degeneration and improved visual function.” As Shaomei Wang, one of the scientists involved in the project, explained in the press release, “Our data show that with further development, it may be possible to use this gene-editing technique to treat inherited retinitis pigmentosa in patients.” This is an important step toward using CRISPR in people, and it follows soon on the heels of news that came out in November from the biotech startup Editas Medicine, which hopes to use CRISPR in people by 2017 to treat another rare genetic condition that causes blindness, Leber congenital amaurosis.

 

Gene Control

January saw another major development as scientists announced that they’d moved beyond using CRISPR to edit genes and were now using the technique to control genes. In this case, the Cas9 enzyme is deactivated (essentially “dead”), so that rather than clipping the gene, it acts as a transport for other molecules that can manipulate the gene in question. This progress was written up in The Atlantic, which explained: “Now, instead of a precise and versatile set of scissors, which can cut any gene you want, you have a precise and versatile delivery system, which can control any gene you want. You don’t just have an editor. You have a stimulant, a muzzle, a dimmer switch, a tracker.” There are countless benefits this could have, from boosting immunity to improving heart muscles after a heart attack. Or perhaps we could finally cure cancer. What better solution to a cell that’s reproducing uncontrollably than a system that can just turn it off?

 

CRISPR Control or Researcher Control

But just how much control do we really have over the CRISPR-Cas9 system once it’s been released into a body? Or, for that matter, how much control do we have over scientists who might want to wield this new power to create the ever-terrifying “designer baby”?


The short answer to the first question is: there will always be risks. But CRISPR-Cas9 is already remarkably accurate, and scientists haven’t accepted that as good enough; they’ve been making it even more accurate. In December, researchers at the Broad Institute published the results of a successful effort to tweak the guide RNAs, decreasing the likelihood of a mismatch between the gene the RNA was supposed to target and the gene it actually targeted. Then, a month later, Nature published research out of Duke University, where scientists had tweaked another section of the Cas9 enzyme, making its cuts even more precise. And this is just a start. Researchers recognize that to successfully use CRISPR-Cas9 in people, it will have to be practically perfect every time.

But that raises the second question: Can we trust all scientists to do what’s right? This question was thrust into the spotlight last April, when scientists in China used CRISPR in an attempt to genetically modify non-viable human embryos. While the results proved that we still have a long way to go before the technology will be ready for real human testing, the fact that the research was done at all raised red flags and hackles among genetics researchers and the press. Those questions may have popped up back in March and April of 2015, but the official response came at the start of December, when geneticists, biologists, and doctors from around the world convened in Washington, D.C. for the International Summit on Human Gene Editing. Ultimately, though, the results of the summit were vague, essentially encouraging scientists to proceed with caution, but without any outright bans. However, at this stage of research, the benefits of CRISPR likely outweigh the risks.

 

Big Pharma



“Proceed with caution” might be just the right advice for pharmaceutical companies that have jumped on the CRISPR bandwagon. With so many amazing possibilities to improve human health, it comes as no surprise that companies are betting, er, investing big money into CRISPR. Hundreds of millions of dollars flooded the biomedical start-up industry throughout 2015, with most going to two main players, Editas Medicine and Intellia Therapeutics. Then, in the middle of December, Bayer announced a joint venture with CRISPR Therapeutics to the tune of $300 million. That’s three major pharmaceutical players hoping to win big with a CRISPR gamble. But just how big of a gamble can such an impressive technology be? Well, every company is required to license the patent for a fee, but right now, because of the legal battles surrounding CRISPR, the original patents (which the companies have already licensed) have been put on hold while the courts try to figure out who is really entitled to them. If the patents change ownership, that could be a big game-changer for all of the biotech companies that have invested in CRISPR.

 

Upcoming Concerns?

On January 14, British regulators began reviewing a request by the Francis Crick Institute (FCI) to begin gene-editing research on human embryos. While Britain’s rules on human embryo research are more lax than those in the U.S. — which has a complete ban on genetically modifying any human embryos — the British are still strict, requiring that the embryos be destroyed after the 14th day. The FCI requested a license to begin research on day-old, “spare” IVF embryos to develop a better understanding of why some embryos die at early stages in the womb, in an attempt to decrease the number of miscarriages women have. This germ-line editing research is, of course, now possible because of the recent CRISPR breakthroughs. If the research is successful, The Independent argues, “it could lead to pressure to change the existing law to allow so-called ‘germ-line’ editing of embryos and the birth of GM children.” However, Dr. Kathy Niakan, the lead researcher on the project, insists this will not create a slippery slope to “designer babies.” As she explained to the Independent, “Because in the UK there are very tight regulations in this area, it would be completely illegal to move in that direction. Our research is in line with what is allowed and in-keeping in the UK since 2009, which is purely for research purposes.”

Woolly Mammoths

Woolly mammoths! What better way to end an article about how CRISPR can help humanity than with the news that it can also help bring back species that have gone extinct? OK, admittedly, the news that George Church wants to resurrect the woolly mammoth has been around since last spring. But the Huffington Post ran a feature about his work in December, and it turns out his research has advanced enough that he predicts the woolly mammoth could return in as little as seven years. This won’t be a true woolly mammoth, though; it will actually be an Asian elephant boosted by woolly mammoth DNA. Among the goals of the project is to help prevent the extinction of the Asian elephant, and woolly mammoth DNA could help achieve that: the idea is that a hybrid elephant would be able to survive more successfully as the climate changes. If this works, the method could be applied to other plant and animal species to increase stability and decrease extinction rates. As Church tells the Huffington Post, “the fact is we’re not bringing back species — strengthening existing species.”


And what more could we ask of genetics research than to strengthen a species?

*Cas9 is only one of the enzymes that can work with the CRISPR system, but researchers have found it to be the most accurate and efficient.

Are Humans Dethroned in Go? AI Experts Weigh In

Today DeepMind announced a major AI breakthrough: they’ve developed software that can defeat a professional human player at the game of Go. This is a feat that has long eluded computers.

Francesca Rossi, a top AI scientist with IBM, told FLI, “AI researchers were waiting for computers to master Go, but we did not expect this to happen so soon. Compared to the chess-playing program DeepBlue, this result addresses what was believed to be a harder problem since in Go there are many more moves.”

Victoria Krakovna, a co-founder of FLI and AI researcher, agreed. “Go is a far more challenging game for computers than chess, with a combinatorial explosion of possible board positions, and many experts were not expecting AI to crack Go in the next decade,” she said.

Go is indeed a complex game, and the number of possible move sequences is astronomical — while chess has approximately 35^80 possible sequences of moves, Go has around 250^150. To put that in perspective, 35^80 is a number too big to be calculated by a standard, non-graphing calculator, and it exceeds the number of atoms in our observable universe. So it’s no wonder most AI researchers expected that close to a decade could pass before an AI system would beat some of the best Go players in the world.
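As a quick sanity check on those magnitudes, here is a short Python snippet. It assumes the commonly cited rough estimates behind those figures: about 35 legal moves per turn over roughly 80 turns in chess, and about 250 legal moves per turn over roughly 150 turns in Go.

```python
import math

# Rough game-tree sizes: branching_factor ** game_length, compared in log10.
chess = 80 * math.log10(35)     # log10(35^80)  ~ 123
go = 150 * math.log10(250)      # log10(250^150) ~ 360
atoms = 80                      # observable universe: roughly 10^80 atoms

print(f"chess: ~10^{chess:.0f} move sequences")
print(f"go:    ~10^{go:.0f} move sequences")
print(f"atoms: ~10^{atoms} for comparison")
```

Even the chess estimate dwarfs the atom count, and the Go estimate dwarfs the chess one, which is why brute-force search alone was never going to crack the game.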

Krakovna explained that DeepMind’s program, AlphaGo, tackled the problem with a combination of supervised learning and reinforcement learning. That is, human experts helped build knowledge of the game into the program, but then the program continued to learn through trial and error as it played against itself.
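As a concrete, highly simplified illustration of that two-phase recipe, here is a hypothetical sketch in PyTorch (this is not DeepMind’s architecture or data): a tiny policy network is first trained to imitate expert moves, then nudged further with a REINFORCE-style update from self-play results. All positions, expert labels, and game outcomes below are random placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 9  # toy board size; the real game is played on 19x19

# A tiny policy network: board in, one logit per board point out.
policy = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * BOARD * BOARD, BOARD * BOARD),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Phase 1: supervised learning -- imitate (placeholder) expert moves.
positions = torch.randn(32, 1, BOARD, BOARD)            # stand-in board positions
expert_moves = torch.randint(0, BOARD * BOARD, (32,))   # stand-in expert labels
loss = F.cross_entropy(policy(positions), expert_moves)
opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: reinforcement learning -- a REINFORCE-style update from self-play.
positions = torch.randn(32, 1, BOARD, BOARD)            # positions seen during self-play
dist = torch.distributions.Categorical(logits=policy(positions))
moves = dist.sample()                                   # moves the network chose
outcomes = torch.randint(0, 2, (32,)).float() * 2 - 1   # +1 for a win, -1 for a loss (placeholder)
loss = -(dist.log_prob(moves) * outcomes).mean()        # reinforce moves from games that were won
opt.zero_grad(); loss.backward(); opt.step()
```

AlphaGo also trained a separate value network to evaluate positions and combined both networks with Monte Carlo tree search, which this sketch leaves out.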

Berkeley AI professor Stuart Russell, co-author of the standard AI textbook, told us, “The result shows that the combination of deep reinforcement learning and so-called “value networks” that help the program decide which possibilities are worth considering leads to a very powerful system for Go.”
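To make Russell’s point about value networks concrete, here is a toy, hypothetical sketch (again, not AlphaGo’s actual search, which uses Monte Carlo tree search): a depth-limited search that, instead of playing every line out to the end, asks a stand-in “value network” how promising each frontier position looks.

```python
import random

def value_network(position):
    # Stand-in for a trained value network: an estimated win probability,
    # in [0, 1], for the player to move in this position.
    random.seed(hash(position))
    return random.random()

def legal_moves(position):
    # Stand-in move generator: in this toy, a "position" is just the
    # sequence of move indices played so far.
    return [position + (i,) for i in range(3)]

def search(position, depth):
    """Depth-limited search: at the frontier, trust the value network."""
    if depth == 0:
        return value_network(position)
    # The player to move picks the move that leaves the opponent worst off.
    return max(1.0 - search(move, depth - 1) for move in legal_moves(position))

# Pick the move from the starting position that looks best after a shallow search.
best_move = max(legal_moves(()), key=lambda m: 1.0 - search(m, depth=2))
print("chosen move:", best_move)
```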

But just how big of a deal is this? For the results published in Nature, AlphaGo beat the European Go champion, Fan Hui, five games to zero; however, it’s not yet clear how the software would fare against the world champion. Rossi and Russell both weighed in.

Said Rossi, “The innovative techniques developed by DeepMind to achieve this result, that combine new machine learning approaches with search, seem to be general enough to be applicable also to other scenarios, not just Go or playing a game. This makes the result even more important and promising.”

However, as impressed as Russell is by these results, he wasn’t quite sure what to make of the program beating the European champion but not yet facing the world’s strongest players, given that elite Go is dominated by Asian players such as Lee Se-dol. He explained, “The game had been considered one of the hardest to crack and this is a very impressive result. It’s hard to say yet whether this event is as big as the defeat of Kasparov, who was the human world champion. Fan Hui is an excellent player but the current world champion is considerably stronger. On the other hand, Fan Hui didn’t win a single game, so I cannot predict with confidence that human supremacy will last much longer.”

It turns out a match between world champion Lee Se-dol and AlphaGo will take place this coming March. An AI event to look forward to!

We’re excited to add an edit to this article: Bart Selman, another top AI researcher, followed up with us, sending us his thoughts on this achievement.

Along with Russell and Rossi, Selman is equally impressed by the program’s ability to tackle a game so much more complicated than chess, but he also added, “AlphaGo is such an exciting advance because it combines the strength of deep learning to discover subtle patterns in a large collection of board sequences with the latest clever game-space exploration techniques. So, it represents the first clear hybrid of deep learning with an algorithmic search method. Such merging of AI techniques has tremendous potential.

“In terms of novel AI and machine learning, this is a more significant advance than even IBM’s DeepBlue represented. On the other hand, in terms of absolute performance, DeepBlue still rules because it bested the best human player in the world. However, with DeepMind’s new learning based approach, it now seems quite likely that superhuman Go play is within reach. It will be exciting to follow AlphaGo’s upcoming matches.”

A survey of research questions for robust and beneficial AI

A collection of example projects and research questions within each area can be found here.

Research priorities for robust and beneficial AI

A summary of the research areas covered by our grants program can be found here.

If you are interested in promoting the safe and beneficial development of technology, we sincerely invite you to join the volunteer team of the Future of Life Institute. Here, you will have the opportunity to work with other volunteers to catalyze research and proposals that benefit humanity’s future through writing, translation, outreach activities, and exchanges with experts and scholars. Depending on your interests, volunteers can learn about technological risk and safety and gain experience in writing, research, and outreach. Volunteers with leadership ability may also have the opportunity to become team leads in the future.

You are welcome to join us; if interested, please contact lina@futureoflife.org.

 

Digital Economy Open Letter

An open letter by a team of economists about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact. (Jun 4, 2015)

Grants Timeline