What President Obama Should Say When He Goes to Hiroshima

The following post was written by David Wright and Lisbeth Gronlund as part of our Huffington Post series on nuclear security. Gronlund and Wright are both Senior Scientists and Co-Directors of the Global Security Program for the Union of Concerned Scientists.

Yesterday the White House announced that President Obama will visit Hiroshima — the first sitting president to do so — when he is in Japan later this month.

He will give a speech at the Hiroshima Peace Memorial Park, which commemorates the atomic bombing by the United States on August 6, 1945.

According to the president’s advisor Ben Rhodes, Obama’s remarks “will reaffirm America’s longstanding commitment — and the President’s personal commitment — to pursue the peace and security of a world without nuclear weapons. As the President has said, the United States has a special responsibility to continue to lead in pursuit of that objective as we are the only nation to have used a nuclear weapon.”

Obama gave his first foreign policy speech in Prague in April 2009, where he talked passionately about ending the threat posed by nuclear weapons. He committed the United States to reducing the role of nuclear weapons in its national security policy and putting an end to Cold War thinking.

A speech in Hiroshima would be a perfect bookend to his Prague speech — but only if he uses the occasion to announce concrete steps he will take before he leaves office. The president must do more than give another passionate speech about nuclear disarmament. The world needs — indeed, is desperate for — concrete action.

Here’s what Mr. Obama should say in Hiroshima:

 

***

 

Thank you for your warm welcome.

I have come to Hiroshima to do several things. First, to recognize those who suffered the humanitarian atrocities of World War II throughout the Pacific region.

Second, to give special recognition to the survivors of the atomic bombings of Hiroshima and Nagasaki — the hibakusha — who have worked tirelessly to make sure those bombings remain the only use of nuclear weapons.

And third, to announce three concrete steps I will take as U.S. commander-in-chief to reduce the risk that nuclear weapons will be used again. These are steps along the path I laid out in Prague in 2009.

First, the United States will cut the number of nuclear warheads deployed on long-range forces below the cap of 1,550 in the New START treaty, down to a level of 1,000. This is a level, based on the Pentagon’s analysis, that I have determined is adequate to maintain U.S. security regardless of what other countries may do.

Second, I am cutting back my administration’s trillion-dollar plan to build a new generation of nuclear warheads, missiles, bombers, and submarines. I am beginning by canceling plans for the new long-range nuclear cruise missile, which I believe is unneeded and destabilizing.

Third, I am taking a step to eliminate one of the ultimate absurdities of our world: The most likely way nuclear weapons would be used again may be by mistake.

How is this possible? Let me explain.

Today the United States and Russia each keep many hundreds of nuclear-armed missiles on prompt-launch status — so-called “hair-trigger alert” — so they can be launched in a matter of minutes in response to warning of an incoming nuclear attack. The warning would be based on data from satellites and ground-based radars, and would come from a computer.

This practice increases the chance of an accidental or unauthorized launch, or a deliberate launch in response to a false warning. U.S. and Russian presidents would have only about 10 minutes to decide whether the warning of an incoming attack was real or not, before giving the order to launch nuclear-armed missiles in retaliation — weapons that cannot be recalled after launch.

And history has shown again and again that the warning systems are fallible. Human and technical errors have led to mistakes that brought the world far too close to nuclear war. That is simply not acceptable. Accidents happen — they shouldn’t lead to nuclear war.

As a candidate and early in my presidency I recognized the danger and absurdity of this situation. I argued that “we should take our nuclear weapons off hair-trigger alert” because “keeping nuclear weapons ready to launch on a moment’s notice is a dangerous relic of the Cold War. Such policies increase the risk of catastrophic accidents or miscalculation.”

Former secretaries of defense as well as generals who oversaw the U.S. nuclear arsenal agree with me, as do science and faith leaders. In his recent book My Journey at the Nuclear Brink, former Secretary of Defense William Perry writes: “These stories of false alarms have focused a searing awareness of the immense peril we face when in mere minutes our leaders must make life-and-death decisions affecting the whole planet.”

General James Cartwright, former commander of U.S. nuclear forces, argues that cyber threats that did not exist during the Cold War may introduce new system vulnerabilities. A report he chaired last year states that “In some respects the situation was better during the Cold War than it is today. Vulnerability to cyber-attack … is a new wild card in the deck.”

And the absurdity may get even worse: China’s military is urging its government to put Chinese missiles on high alert for the first time. China would have to build a missile warning system, which would be as fallible as the U.S. and Russian ones. The United States should help Chinese leaders understand the danger and folly of such a step.

So today I am following through on my campaign pledge. I am announcing that the United States will take all of its land-based missiles off hair-trigger alert and will eliminate launch-on-warning options from its war plans.

These steps will make America — and the world — safer.

Let me end today as I did in Prague seven years ago: “Let us honor our past by reaching for a better future. Let us bridge our divisions, build upon our hopes, accept our responsibility to leave this world more prosperous and more peaceful than we found it. Together we can do it.”

Passing the Nuclear Baton

The following post was written by Joe Cirincione, President of the Ploughshares Fund, as part of our Huffington Post series on nuclear security.

President Obama entered office with a bold vision, determined to end the Cold War thinking that distorted our nuclear posture. He failed. He has a few more moves he could still make — particularly with his speech in Hiroshima later this month — but the next president will inherit a nuclear mess.

Obama had the right strategy. In his brilliant Prague speech, he identified our three greatest nuclear threats: nuclear terrorism, the spread of nuclear weapons to new states and the dangers from the world’s existing nuclear arsenals. He detailed plans to reduce and eventually eliminate all three, understanding correctly that they all must be tackled at once or progress would be impossible on any.

Progress Thwarting Nuclear Terror

Through his Nuclear Security Summits, Obama created an innovative new tool to raise the threat of nuclear terrorism to the highest level of global leadership and inspire scores of voluntary actions to reduce and secure nuclear materials. But it is, as The New York Times editorialized, “a job half done.” Instead of securing all the material in four years as originally promised, after eight years we still have 1,800 tons of bomb-usable material stored in 24 countries, some of it guarded less securely than we guard our library books.

If a terrorist group could get its hands on just 100 pounds of highly enriched uranium, it could make a bomb that could destroy a major city. In October of last year, an AP investigation revealed that nuclear smugglers were trying to sell weapons-grade uranium to ISIS. Smugglers were overheard on wiretaps saying that they wanted to find an ISIS buyer because “they will bomb the Americans.”

More recently, we learned that extremists connected to the attacks in Paris and Belgium had also been videotaping a Belgian nuclear scientist, likely in the hopes of forcing “him to turn over radioactive material, possibly for use in a dirty bomb.”

Obama got us moving in the right direction, but when you are fleeing a forest fire, it is not just a question of direction but also of speed. Can we get to safety before catastrophe engulfs us?

Victory on Iran

His greatest success, by far, has been the agreement with seven nations that blocks Iran’s path to a bomb. This is huge. Only two nations in the world had nuclear programs that threatened to produce new nuclear-armed states: Iran and North Korea. North Korea has already crossed the nuclear Rubicon, and we must struggle to see if we can contain that threat and even push it back. Thanks to the Iran agreement, however, Iran can now be taken off the list.

For this achievement alone, Obama should get an “A” on his non-proliferation efforts. He is the first president in 24 years not to have a new nuclear nation emerge on his watch.

Bill Clinton saw India and Pakistan explode into the nuclear club in 1998. George W. Bush watched as North Korea set off its first nuclear test in 2006. Barack Obama scratched Iran from contention. Through negotiations, he reduced its program to a fraction of its original size and shrink-wrapped it within the toughest inspection regime ever negotiated. It didn’t cost us a dime. And nobody died. It is, by any measure, a major national security triumph.

Failure to Cut

Unfortunately Obama could not match these gains when it came to the dangers posed by the existing arsenals. The New START Treaty he negotiated with Russia kept alive the intricate inspection procedures previous presidents had created, so that each of the two nuclear superpowers could verify the step-by-step reduction process set in motion by Ronald Reagan and continued by every president since.

That’s where the good news ends. The treaty made only modest reductions to each nation’s nuclear arsenals. The United States and Russia account for almost 95 percent of all the nuclear weapons in the world, with about 7,000 each. The treaty was supposed to be a holding action, until the two could negotiate much deeper reductions. That step never came.

The “Three R’s” blocked the path: Republicans, Russians and Resistance.

First, the Republican Party leadership in Congress fought any attempt at reductions. Though many Republicans supported the treaty, including Colin Powell, George Shultz and Senator Richard Lugar, the entrenched leadership did not want to give a Democratic president a major victory, particularly in the election year of 2010. They politicized national security, putting the interest of the party over the interest of the nation. It took everything Obama had to finally get the treaty approved on the last day of the legislative session in December.

By then, the president’s staff had seen more arms control than they wanted, and the administration turned its attention to other pressing issues. Plans to “immediately and aggressively” pursue Senate approval of the nuclear test ban treaty were shelved and never reconsidered. The Republicans had won.

Worse, when Russia’s Vladimir Putin returned to power, Obama lost the negotiating partner he had had in President Medvedev. Putin linked any future negotiation to a host of other issues, including stopping the deployment of US anti-missile systems in Eastern Europe, cuts in conventional forces, and limits on long-range conventional strike systems the Russians claimed threatened their strategic nuclear forces. Negotiations never resumed.

Finally, he faced resistance from the nuclear industrial complex, including many of those he himself appointed to implement his policies. Those with a vested financial, organizational or political interest in the thousands of contracts, factories, bases and positions within what is now euphemistically called our “nuclear enterprise” will do anything they can to preserve those dollars, contracts and positions. Many of his appointees merely paid lip service to the president’s agenda, paying more attention to the demands of the services, the contractors or their own careers. Our nuclear policy is now determined less by military necessity or strategic doctrine than by self-interest.

It is difficult to find someone who supports keeping our obsolete Cold War arsenal that is not directly benefiting from, or beholden to, these weapons. In a very strange way, the machines we built are now controlling us.

The Fourth Threat

To make matters worse, under Obama’s watch these three “traditional” nuclear threats have been joined by a fourth: nuclear bankruptcy.

Obama pledged in Prague that as he reduced the role and number of nuclear weapons in U.S. policy, he would maintain a “safe, secure and reliable” arsenal. He increased spending on nuclear weapons, in part to make much needed repairs to a nuclear weapons complex neglected under the Bush administration and, in part, to win New START votes from key senators with nuclear bases and labs in their states.

As Obama’s policy faltered, the nuclear contracts soared. The Pentagon has embarked on the greatest nuclear weapons spending spree in U.S. history. Over the next 30 years the Pentagon is planning to spend at least $1 trillion on new nuclear weapons. Every leg of the U.S. nuclear triad – our fleet of nuclear bombers, ballistic missile submarines, and ICBMs – will be completely replaced by a new generation of weapons that will last well into the latter part of this century. It is a new nuclear nightmare.

What Should the Next President Do?

While most of us have forgotten that nuclear weapons still exist today, former Secretary of Defense Bill Perry warns that we “are on the brink of a new nuclear arms race” with all the perils, near-misses and terrors you thought ended with the Cold War. The war is over; the weapons live on.

The next president cannot make the mistake of believing that incremental change in our nuclear policies will be enough to avoid disaster. Or that appointing the same people who failed to make significant change under this administration will somehow help solve the challenges of the next four years. There is serious work to be done.

We need a new plan to accelerate the elimination of nuclear material. We need a new strategy for North Korea. But most of all, we need a new strategy for America. It starts with us. As long as we keep a stockpile of nuclear weapons far in excess of any conceivable need, how can we convince other nations to give up theirs?

The Joint Chiefs told President Obama that he could safely cut our existing nuclear arsenal and that we would have more than enough weapons to fulfill every military mission. It did not matter what the Russians did: whether they cut or did not cut, honored the New START Treaty or cheated, we could still cut down to about 1,000 to 1,100 strategic weapons and handle every contingency.

The next president should do that. Not just because it is sound strategic policy – but because it is essential financial policy too. We are going broke. We do not have enough money to pay for all the weapons the Pentagon ordered when they projected ever-rising defense budgets. “There’s a reckoning coming here,” warns Rep. Adam Smith, the ranking Democrat on the House Armed Services Committee. “Do we really need the nuclear power to destroy the world six, seven times?”

The Defense Department admits it does not have the money to pay for these plans. Referring to the massive ‘bow wave’ of spending set to peak in the 2020s and 2030s, Pentagon Comptroller Mike McCord said, “I don’t know of a good way for us to solve this issue.”

In one of the more cynical admissions by a trusted Obama advisor, Brian McKeon, the principal undersecretary of defense for policy, said last October, “We’re looking at that big bow wave and wondering how the heck we’re going to pay for it.” And we’re “probably thanking our stars we won’t be here to have to answer the question,” he added with a chuckle.

He may think it’s funny now, but the next president won’t when the stuff hits the fan in 2017. One quick example: The new nuclear submarines the Navy wants will devour half of the Navy’s shipbuilding budget in the next decade. According to the Congressional Research Service, to build 12 of these new subs, “the Navy would need to eliminate… a notional total of 32 other ships, including, notionally, 8 Virginia-class attack submarines, 8 destroyers, and 16 other combatant ships.”

These are ships we use every day around the world on real missions to deal with real threats. They launch strikes against ISIS, patrol the South China Sea, interdict pirates around the Horn of Africa, guarantee the safety of international trade lanes, and provide disaster relief around the globe.

The conventional navy’s mission is vital to international security and stability. It is foolish, and dangerous, to cut our conventional forces to pay for weapons built to fight a global thermonuclear war.

Bottom-Up

The next President could do a bottom-up review of our nuclear weapons needs. Don’t ask the Pentagon managers of these programs what they can cut. You know the answer you will get. Take a blank slate and design the force we really need.

Do we truly need to spend $30 billion on a new, stealthy nuclear cruise missile to put on the new nuclear-armed stealth bomber?

Do we truly need to keep 450 intercontinental ballistic missiles, whose chief value is to have the states that house them serve as targets to soak up so many of the enemy’s nuclear warheads that it would “complicate an adversary’s attack plans?” Do Montana and Wyoming and North Dakota really want to erect billboards welcoming visitors to “America’s Nuclear Sponge?”

If President Trump, or Clinton, or Sanders put their trust in the existing bureaucracy, it will likely churn out the same Cold War nuclear gibberish. It will be up to outside experts, scientists, retired military and former diplomats to convince the new president to learn from Obama’s successes and his failures.

Obama had the right vision, the right strategy. He just didn’t have an operational plan to get it all done. It is not that hard, if you have the political will.

Over to you, next POTUS.

MIRI March Newsletter

Research updates

General updates

  • MIRI and other Future of Life Institute (FLI) grantees participated in an AAAI workshop on AI safety this month.
  • MIRI researcher Eliezer Yudkowsky discusses Ray Kurzweil, the Bayesian brain hypothesis, and an eclectic mix of other topics in a new interview.
  • Alexei Andreev and Yudkowsky are seeking investors for Arbital, a new technology for explaining difficult topics in economics, mathematics, computer science, and other disciplines. As a demo, Yudkowsky has written a new and improved guide to Bayes’s Rule.

News and links

Secretary William Perry Talks at Google: My Journey at the Nuclear Brink

Former Secretary of Defense William J. Perry was 14 years old when the Japanese attacked Pearl Harbor. As he humorously explained during his Talk at Google this week, in his 14-year-old brain, he was mostly upset because he was worried the war would be over before he could become an Army Air Corps pilot and fight in the war.

Sure enough, the war ended one month before his 18th birthday. He joined the Army Engineers anyway, and was sent to Japan as part of the Army of occupation. That experience quickly altered his perception of war.

“What I saw in Tokyo and Okinawa changed my view altogether about the glamour and the glory of war,” he told the audience at Google.

Tokyo was in ruins — more devastated than Hiroshima — after two years and thousands of firebombs. He then went to Okinawa, the site of the last great battle of WWII. That battle had dragged on for nearly three months, during which some 100,000 Japanese fighters attempted to defend the island; by the end, 90,000 of them had perished. Perry described his shock upon arriving there to see the city completely demolished — not one building was left standing, and the people who had survived were living in the rubble.

“And then I reflected on Hiroshima,” he said. “This was what could be done with thousand pound bombs. In the case of Tokyo, thousands of them over a two-year period, and with thousands of bombers delivering them. The same result in Hiroshima, in an instant, with one airplane and one bomb. Just one bomb. Even at the tender age of 18, I understood: this changed everything.”

This experience helped shape his understanding of and approach to modern warfare.

Fast forward to the Cuban Missile Crisis. At the start of the crisis, he was working in California for a defense electronics company, but also doing pro-bono consulting work for the government. He was immediately called to Washington D.C. with other analysts to study the data coming in to try to understand the status of the Cuban missiles.

“Every day I went into that analysis center I believed would be my last day on earth. That’s how close we came to a nuclear catastrophe at that time,” he explained to the audience. He later added, “I still believe we avoided that nuclear catastrophe as much by good luck as good management.”

He then spoke of an episode, many years later, when he was overseeing research at the Pentagon. He got a 3 AM call from a general who said his computer was showing a couple hundred nuclear missiles launched from Russia and on their way to the U.S. The general had already determined that it was a false alarm, but he didn’t understand what was wrong with his computer. After two days studying the problem, they figured out that the sergeant responsible for putting in the operating tape had accidentally put in a training tape: the general’s computer was showing realistic simulations.

“It was human error. No matter how complex your systems are, they’re always subject to human error,” Perry said of the event.

He personally experienced two incidents – one of human error and one of system error – which could easily have escalated to the launch of our own nuclear missiles. His explanation for why the people involved were able to recognize these were false alarms was that “nothing bad was going on in the world at that time.” Ever since, he’s wondered what would have happened if these false alarms had occurred during a crisis while the U.S. was on high alert. Would the country have launched a retaliation that could have inadvertently started a nuclear war?

To this day, nuclear systems are still subject to these types of errors. If an ICBM launch officer gets a warning that an attack is imminent, s/he will notify the President, who will then have approximately 10 minutes to decide whether or not to launch the missiles before they’re destroyed. That’s 10 minutes for the President to assess the context of all problems in the world combined with the technical information and then decide whether or not to launch nuclear weapons.

In fact, one of Perry’s biggest concerns is that the ICBMs are susceptible to these kinds of false alarms. He acknowledges that the probability of an accidental nuclear war is very low.

“But,” he says, “why should we accept any probability?”

Adding to that concern is the Obama Administration’s decision to rebuild the nuclear arsenal, which will cost American taxpayers approximately $1 trillion over the next couple of decades. Yet there is very little discussion about this plan in the public arena. As Perry explains, “So far the public is not only not participating in that debate, they’re not even aware of what’s going on.”

Perry is also worried about nuclear terrorism. During the talk, he describes a hypothetical situation in which a terrorist could set off a strategically placed nuclear weapon in a city like Washington D.C. and use that to bring the United States and even the global economy to its knees. He explains that the one reason a scenario like this hasn’t played out yet is because fissile material is so hard to come by.

Throughout the discussion and the Q&A segment, North Korea, India, Pakistan, Iran, and China all came up. While commenting on North Korea, he said:

“The real danger of a missile is not the missile, it’s the fact that it could carry a nuclear warhead.”

That said, of all possible nuclear scenarios, he believes an intentional, regional nuclear war between India and Pakistan could be the most likely.

Perry served as Secretary of Defense from 1994 to 1997, and in more recent years, he’s become a strong advocate for reducing the risks of nuclear weapons. Among his many accomplishments, Perry was awarded the Presidential Medal of Freedom in 1997.

We highly recommend the Talks at Google interview with Perry. We also recommend his new book, My Journey at the Nuclear Brink. You can learn more about his efforts to decrease the risks of nuclear destruction at the William J. Perry Project.

While Perry mentioned two nuclear close calls, there have been many others over the years. We’ve put together a timeline of the close calls we know about – there have likely been more that we don’t.

Dr. David Wright on North Korea’s Satellite

Earlier this month, Dr. David Wright, co-director of the Union of Concerned Scientists Global Security Program, wrote two posts about North Korea’s satellite launch. While North Korea isn’t currently thought to pose an existential risk with its weapons, any time nuclear weapons are involved, the situation has the potential to quickly escalate into something that could be catastrophic for the future of humanity. We’re grateful to Wright and the UCS for allowing us to share his posts here.

North Korea is Launching a Rocket Soon: What Do We Know About It?

North Korea has announced that it will launch a rocket sometime in the next two weeks to put a satellite in orbit for the second time. What do we know about it, and how worried should we be?

Fig. 1. The Unha-3 ready to launch in April 2012. (Source: Sungwon Baik / VOA)

What We Know

North Korea has been developing rockets—both satellite launchers and ballistic missiles—for more than 25 years. Developing rockets requires flight testing them in the atmosphere, and the United States has satellite-based sensors and ground-based radars that allow it to detect flight testing essentially worldwide. So despite North Korea being highly secretive, it can’t hide such tests, and we know what rockets it has flight tested.

North Korea’s military has short-range missiles that can reach most of South Korea, and a longer range missile—called Nodong in the West—that can reach parts of Japan. But it has yet to flight test any military missiles that can reach targets at a distance of greater than about 1,500 kilometers.

(It has two other ballistic missile designs—called the Musudan and KN-08 in the West—that it has exhibited in military parades on several occasions over the past few years, but has never flight tested. So we don’t know what their state of development is, but they can’t be considered operational without flight testing.)

North Korea’s Satellite Launcher

North Korea has attempted 5 satellite launches, starting in 1998, with only one success—in December 2012. While that launch put a small satellite into space, the satellite was apparently tumbling and North Korea was never able to communicate with it.

The rocket that launched the satellite in 2012 is called the Unha-3 (Galaxy-3) (Fig. 1). North Korea has announced locations of the splashdown zones for its upcoming launch, where the rocket stages will fall into the sea; since these are very similar to the locations of the zones for its 2012 launch, that suggests the launcher will also be very similar (Fig. 2).

Fig. 2. The planned trajectory of the upcoming launch. (Source: D Wright in Google Earth)

We know a lot about the Unha-3 from analyzing previous launches, especially after South Korea fished parts of the rocket out of the sea after the 2012 launch. It is about 30 m tall, has a launch mass of about 90 tons, and consists of 3 stages that use liquid fuel. A key point is that the two large lower stages rely on 1960s-era Scud-type engines and fuel, rather than the more advanced engines and fuel that countries such as Russia and China use. This is an important limitation on the capability of the rocket and suggests North Korea does not have access to, or has not mastered, more advanced technology.

(Some believe North Korea may have purchased a number of these more advanced engines from the Soviet Union. But it has never flight tested that technology, even in shorter range missiles.)

Because large rockets are highly complex technical systems, they are prone to failure. Just because North Korea was able to get everything to work in 2012, allowing it to orbit a satellite, that says very little about the reliability of the launcher, so it is unclear what the probability of a second successful launch is.
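As a rough illustrative aside (mine, not Wright’s), you can quantify how little a single success constrains reliability: treating the five launch attempts as independent trials with an unknown success probability and a uniform prior, the resulting estimate remains very broad.

```python
# Illustrative aside (not from the original post): a Bayesian estimate of the
# launcher's success probability after 1 success in 5 attempts, assuming a
# uniform Beta(1, 1) prior and independent launches -- both simplifications.
from scipy.stats import beta

successes, failures = 1, 4                     # 5 launch attempts through 2012, 1 success
posterior = beta(1 + successes, 1 + failures)  # Beta(2, 5) posterior

lo, hi = posterior.ppf([0.025, 0.975])
print(f"Posterior mean success probability: {posterior.mean():.2f}")  # ~0.29
print(f"95% credible interval: {lo:.2f} to {hi:.2f}")                 # ~0.04 to 0.64
```

The wide credible interval is the point: one orbital success is statistically consistent with a launcher that fails most of the time.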

The Satellite

The satellite North Korea launched in 2012—the Kwangmyongsong-3, or “Bright Star 3”—is likely similar in size and capability (with a mass of about 100 kg) to the current satellite (also called Kwangmyongsong). The satellite is not designed to do much, since the goal of early satellite launches is learning to communicate with the satellite. It may send back photos from a small camera on board, but these would be too low resolution (probably hundreds of meters) to be useful for spying.

In 2012, North Korea launched its satellite into a “sun-synchronous orbit” (with an inclination of 97.4 degrees), which is an orbit commonly used for satellites that monitor the earth, such as for environmental monitoring. Its orbital altitude was about 550 km, which is higher than the Space Station (at roughly 400 km) but lower than most satellites, which sit in higher orbits since atmospheric drag at low altitudes will slow a satellite and cause it to fall from orbit sooner. For North Korea, the altitude was limited by the capability of its launcher. We expect a similar orbit this time, although if the launcher has been modified to carry somewhat more fuel it might be able to carry the satellite to a higher altitude.

The Launch Site and Flight Path

The launch will take place from the Sohae site near the western coast of North Korea (Fig. 2). It would be most efficient to launch due east so that the rocket gains speed from the rotation of the earth. North Korea launched its early flights in that direction but now launches south to avoid overflying Japan—threading the needle between South Korea, China, and the Philippines.

North Korea has modified the Sohae launch site since the 2012 launch. It has increased the height of the gantry that holds the rocket before launch, so that it can accommodate taller rockets, but I expect that extra height will not be needed for this rocket. It has also constructed a building on the launch pad that houses the rocket while it is being prepared for launch (which is a standard feature of modern launch sites around the world). This means we will not be able to watch the detailed launch preparations, which gave indications of the timing of the launch in 2012.

Satellite launch or ballistic missile?

So, is this really an attempt to launch a satellite, or could it be a ballistic missile launch in disguise? Can you tell the difference?

Fig. 3. Trajectories for a long-range ballistic missile (red) and Unha-3 satellite launch (blue).

The U.S. will likely have lots of sensors—on satellites, in the air, and on the ground and sea—watching the launch, and it will be able to quickly tell whether or not it is really a satellite launch, because the trajectories of a satellite launch and a ballistic missile are very different.

Figure 3 shows the early part of the trajectory of a typical liquid-fueled ballistic missile (ICBM) with a range of 12,000 km (red) and the Unha-3 launch trajectory from 2012 (blue). They differ in shape and in the length of time the rocket engines burn. In this example, the ICBM engines burn for 300 seconds and the Unha-3 engines burn for nearly twice that long. The ICBM gets up to high speed much faster and then goes much higher.

Interestingly, the Unha-3’s longer burn time suggests that its upper stages were designed for use in a satellite launcher rather than a ballistic missile. So this rocket looks more like a satellite launcher than a ballistic missile.

Long-Range Missile Capability?

Of course, North Korea can still learn a lot from satellite launches about the technology it can use to build a ballistic missile, since the two types of rockets use the same basic technology. That is the source of the concern about these launches.

The range of a missile is based on the technology used and other factors. Whether the Unha-3 could carry a nuclear warhead depends in part on how heavy a North Korean nuclear weapon is, which is a topic of ongoing debate. If the Unha were modified to carry a 1,000 kg warhead rather than a light satellite, the missile could have enough range to reach Alaska and possibly Hawaii, but might not be able to reach the continental U.S. (Fig. 4). If instead North Korea could reduce the warhead mass to around 500 kg, the missile would likely be able to reach large parts of the continental U.S.
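To see why warhead mass matters so much, here is a minimal sketch of my own using the Tsiolkovsky rocket equation; the stage masses and specific impulse are invented placeholders, not actual Unha-3 parameters.

```python
# Illustrative only: the Tsiolkovsky rocket equation shows why a lighter warhead
# buys extra burnout velocity (and hence range). The stage masses and Isp below
# are invented placeholders, NOT actual Unha-3 figures.
import math

G0 = 9.81  # standard gravity, m/s^2

def ideal_delta_v(payload_kg, dry_kg, propellant_kg, isp_s):
    """Ideal delta-v (m/s) a single stage delivers while carrying the payload."""
    m_ignition = payload_kg + dry_kg + propellant_kg
    m_burnout = payload_kg + dry_kg
    return isp_s * G0 * math.log(m_ignition / m_burnout)

for payload in (1000, 500, 100):  # heavy warhead, lighter warhead, small satellite
    dv = ideal_delta_v(payload, dry_kg=400, propellant_kg=3000, isp_s=230)
    print(f"payload {payload:4d} kg -> ideal delta-v {dv:4.0f} m/s")
```

The absolute numbers are meaningless; the trend is what matters: a lighter payload lets the same stage reach a substantially higher burnout velocity, and ballistic range grows rapidly with that velocity.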

North Korea has not flight tested a ballistic missile version of the Unha or a reentry heat shield that would be needed to protect the warhead as it reentered the atmosphere. Because of its large size, such a missile is unlikely to be mobile, and assembling and fueling it at the launch site would be difficult to hide. Its accuracy would likely be many kilometers.

Fig. 4: Distances from North Korea. (Source: D Wright in Google Earth)

The bottom line is that North Korea is developing the technology it could use to build a ballistic missile with intercontinental range. Today it is not clear that it has a system capable of doing so or a nuclear weapon that is small enough to be delivered on it. It has shown, however, the capability to continue to make progress on both fronts.

The U.S. approach to dealing with North Korea in recent years through continued sanctions has not been effective in stopping this progress. It’s time for the U.S. to try a different approach, including direct U.S.-Korean talks.

North Korea Successfully Puts Its Second Satellite in Orbit

North Korea launched earlier than expected, and successfully placed its second satellite into orbit.

The launch took place at 7:29 pm EST on Saturday, Feb. 6, U.S. time, which was 8:59 am local time on Sunday in North Korea. North Korea had originally said its launch window would not start until Feb. 8. Apparently the rocket was ready and the weather was good for a launch.

The U.S. office that tracks objects in space, the Joint Space Operations Center (JSPOC), announced a couple hours later that it was tracking two objects in orbit—the satellite and the third stage of the launcher. The satellite was in a nearly circular orbit (466 x 501 km). The final stage maneuvered to put it in a nearly polar, sun-synchronous orbit, with an inclination of 97.5 degrees.
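For a sense of what those orbital elements imply, here is a small two-body sketch of my own (not from JSPOC or the original post) converting the reported perigee and apogee into an orbital period with Kepler’s third law.

```python
# Illustrative only: the orbital period implied by the reported 466 x 501 km
# orbit, via Kepler's third law in a simple two-body approximation.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3            # mean Earth radius, m

perigee_alt, apogee_alt = 466e3, 501e3                       # reported altitudes, m
semi_major_axis = R_EARTH + (perigee_alt + apogee_alt) / 2   # m

period_s = 2 * math.pi * math.sqrt(semi_major_axis**3 / MU_EARTH)
print(f"Orbital period: {period_s / 60:.1f} minutes")        # roughly 94 minutes
```

A period of roughly 94 minutes is typical of satellites at these low-Earth-orbit altitudes.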

Because the satellite orbit and other details of the launch were similar to those of North Korea’s last launch, in December 2012, this implies that the launch vehicle was also very similar.

This post from December 2012 allows you to see the launch trajectory in 3D using Google Earth.

South Korea is reporting that after the first stage burned out and was dropped from the rocket, it exploded before reaching the sea. This may have been intended to prevent it from being recovered and studied, as was the first stage of its December 2012 launch.

The satellite, called the Kwangmyongsong-4, is likely very similar to the satellite launched three years ago. It will likely not be known for several days whether, unlike the 2012 satellite, it can stop tumbling in orbit and communicate with the ground. It is apparently intended to stay in orbit for about 4 years.

If it can communicate with the Kwangmyongsong-4, North Korea will learn about operating a satellite in space. Even if not, it gained experience with launching and learned more about the reliability of its rocket systems.

For more information about the launch, see my earlier post.

Note added: Feb. 7, 1:00 am

The two orbiting objects, the satellite and the third-stage rocket body, show up in the NORAD catalog of space objects as numbers 41332 for the satellite and 41333 for the rocket body. (Thanks to Jonathan McDowell for supplying these.)

MIRI’s February 2016 Newsletter

This post originally comes from MIRI’s website.

Research updates

General updates

  • Fundraiser and grant successes: MIRI will be working with AI pioneer Stuart Russell and a to-be-determined postdoctoral researcher on the problem of corrigibility, thanks to a $75,000 grant from the Center for Long-Term Cybersecurity.

News and links

Predicting the Future (of Life)

It’s often said that the future is unpredictable. Of course, that’s not really true. With extremely high confidence, we can predict that the sun will rise in Santa Cruz, California at 7:12 am local time on Jan 30, 2016. We know the next total solar eclipse over the U.S. will be August 21, 2017, and we also know there will be one June 25, 2522. Read more

Why MIRI Matters, and Other MIRI News

The Machine Intelligence Research Institute (MIRI) just completed its most recent round of fundraising, and with that Jed McCaleb wrote a brief post explaining why MIRI and their AI research is so important. You can find a copy of that message below, followed by MIRI’s January newsletter, which was put together by Rob Bensinger.

Jed McCaleb on Why MIRI Matters

A few months ago, several leaders in the scientific community signed an open letter pushing for oversight into the research and development of artificial intelligence, in order to mitigate the risks and ensure the societal benefit of the advanced technology. Researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks in this century.

Similarly, I believe we’ll see the promise of human-level AI come to fruition much sooner than we’ve fathomed. Its effects will likely be transformational — for the better if it is used to help improve the human condition, or for the worse if it is used incorrectly.

As AI agents become more capable, it becomes more important to analyze and verify their decisions and goals. MIRI’s research focuses on how we can create highly reliable agents that can learn human values, and on the overarching need for better decision-making processes to power these new technologies.

The past few years have seen a vibrant and growing AI research community. As the space continues to flourish, the need for collaboration will continue to grow as well. Organizations like MIRI that are dedicated to security and safety engineering help fill this need. And, as a nonprofit, its research is free from profit obligations. This independence in research is important because it will lead to safer and more neutral results.

By supporting organizations like MIRI, we’re putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good. For humanity’s benefit, we need to guarantee that AI systems can reliably pursue goals that are aligned with human values. If organizations like MIRI are able to help engineer this level of technological advancement and awareness in AI systems, imagine the endless possibilities of how it can help improve our world. It’s critical that we put the infrastructure in place in order to ensure that AI will be used to make the lives of people better. This is why I’ve donated to MIRI, and why I believe it’s a worthy cause that you should consider as well.

January 2016 Newsletter

Research updates

General updates

News and links

Santa, Mistakes, and Nuclear War

Written by: physicist & co-director, Global Security | December 14, 2015, 9:34 am EST

On December 1, the U.S. military started its annual tracking of Santa’s flight from the North Pole.

Really.

NORAD—the North American Aerospace Defense Command—is not known for its sense of humor. Its mission is deadly serious: to alert authorities about an aircraft or missile attack on North America. In the event of a nuclear missile attack, NORAD’s job is to detect it, analyze it, and provide the information the president needs to decide whether to launch U.S. nuclear weapons in response.

So what is it doing tracking Santa?

This off-mission public service stems from a series of mistakes and coincidences so unlikely they read like fiction.

The original Sears ad. Note that it says “Kiddies Be Sure and Dial the Correct Number” (Source: NORAD)

It started innocently enough: A 1955 Sears Christmas ad in a Colorado Springs newspaper featured Santa telling kids to call him “any time day or night” and gave a number for his “private phone.”

But due to a typo in the phone number, calls were routed to a top secret red phone at nearby Ent Air Force Base, home of the warning center that became NORAD.

Maybe two people in the world had this phone number—until then. The supervisor on duty that night, Col. Harry Shoup, was not amused when the red phone began to ring off the hook. A no-nonsense military officer, he took his job seriously. And so his men were shocked when, after learning what had happened, Shoup began answering the phone with “ho-ho-ho” and inquiring about the caller’s behavior over the previous 12 months—and then tasked his men to answer the phone the same way.

That Christmas Eve, Shoup shocked his staff yet again. He realized that NORAD’s specialty was, in fact, tracking objects flying toward the United States. So he picked up the phone and called a local radio station to tell them that the world’s finest warning sensors had just picked up a sleigh flying in from the North Pole. A tradition was born.

This uplifting occasion was not the only time things have gone awry at NORAD, but other incidents have been more heart-stopping than heartwarming.

False Warning of Nuclear Attack

For example, in 1979, NORAD’s computer screens lit up showing an all-out Soviet nuclear attack bearing down on the United States. The missiles would take less than 25 minutes to reach their targets.

The military immediately began preparing to launch a retaliatory attack. Nuclear bomber crews were dispatched to their planes. And the crews manning U.S. missiles were ready: The missiles were on 24/7 hair-trigger alert so they could be launched within minutes.

NORAD officers knew they would have only minutes to sort out what was happening, giving the president about 10 minutes to make a launch decision.

Fortunately, it was a time of reduced U.S.-Soviet tensions, so the officers were skeptical about the warning. They also failed to get confirmation from U.S. radar sites that there was a missile attack. They soon discovered that a technician had mistakenly inserted a training tape simulating a large Soviet attack into a NORAD computer. U.S. nuclear forces stood down, averting a nuclear war.

But things could have gone much differently. Within months, tensions between the two superpowers spiked when the Soviets invaded Afghanistan and relations continued to sour through the first Reagan term. Had communication systems been down or U.S. radars detected unrelated missile launches, the situation could have been much more serious.

President Obama: End Hair-Trigger Alert

Since 1979 there have been additional hair-raising incidents and false warnings due to a variety of technical and human errors in both the United States and Russia. Regardless, both countries still keep hundreds of missiles on hair-trigger alert to give their presidents the option of launching them quickly on warning of an attack, increasing the risk that a false alarm could lead to an accidental war. And that risk is significant. Indeed, some retired high-level military officers say an accident or a mistake would be the most likely cause of a nuclear war today.

President Obama understands this risk. Early in his presidency he called for taking U.S. missiles off hair-trigger alert. He has the authority to do so, but has apparently deferred to Cold War holdouts in the Pentagon.

Growing tensions between the United States and Russia now make taking missiles off hair-trigger alert even more urgent. It is during times of crisis when miscalculations and misunderstandings are most likely to occur.

As Col. Shoup and other NORAD officers learned repeatedly, unexpected things happen. They shouldn’t lead to nuclear war.

The best Christmas present President Obama could give to the country this year would be to take U.S. missiles off hair-trigger alert.

Co-written by David Wright and Lisbeth Gronlund. Featured Photo by Bart Fields.

The original version of this article can be found here.

GCRI December News Summary

The following is the December news summary for the Global Catastrophic Risk Institute, written by .
It was originally published at the Global Catastrophic Risk Institute. Please sign up for the GCRI newsletter.

Chinese power plant image courtesy of Tobias Brox under a Creative Commons Attribution-ShareAlike 3.0 Unported license (the image has been cropped)

Turkish F-16s shot down a Russian Su-24 fighter-bomber near the border between Turkey and Syria. Some reports indicate that the Russian plane’s pilots were shot and possibly killed as they parachuted from their damaged plane. It was the first time a NATO member shot down a Russian military plane since the end of the Cold War. Turkey claimed the Russian plane violated its airspace for five minutes and was shot down only after ignoring ten warnings. Russia said that its plane never entered Turkish airspace. The US military confirmed Turkey’s claim that the Russian plane did receive ten warnings without apparently responding. But Der Spiegel said that both analysis of the flight path of the plane provided by the Turkish military and NATO sources indicate the plane was inside Turkish airspace for only a few seconds. Turkey had already accused Russian planes of violating its airspace on a number of different occasions. Russia formally apologized to Turkey in October when another Russian plane crossed into Turkish airspace. Turkey has also complained about Russian attacks on Syrian villages inhabited by Turkmen, an ethnic group with cultural connections with Turkey. Russian President Vladimir Putin called the incident a “stab in the back”. US President Barack Obama said Turkey had the right to defend itself but called for all sides to de-escalate the situation. Turkey called for an emergency NATO meeting to discuss the incident. Max Fisher noted in Vox that an incident between NATO and Russian forces near the Syrian border like this probably would not escalate, because neither side is likely to mistake it as the beginning of a real attack; if a Russian plane were shot down in the Baltics it would probably be much more dangerous.

Russia launched a military satellite thought to be the first part of a new system designed to provide early warning of ballistic missile launches. Russia’s last early-warning satellite failed in 2014. President Putin announced earlier in the month that Russia planned to deploy weapons that are “capable of penetrating any missile defenses”. Russia has objected to the US missile defense system, which Russia says is intended to render Russia’s nuclear deterrent ineffective. The US says that its missile defenses are designed to protect against limited attacks from countries like Iran and North Korea and could not protect the US or its allies against a Russian strike. At a meeting on the state of the Russian defense industry, Russian cameras captured what appeared to be a page in a briefing book outlining the development of an underwater drone called “Status-6” designed to deliver a nuclear weapon to port cities. The system would ostensibly be intended to maximize nuclear fallout in order to inflict “unacceptable damage to a country’s territory by creating areas of wide radioactive contamination that would be unsuitable for military, economic, or other activity for long periods of time”. Steven Pifer argued that Russia deliberately leaked the weapon design for domestic political purposes, but that it would actually be of limited strategic value and might not be something Russia actually intends to build.

According to revised official data, China has been underestimating its coal use since 2000. The new data show that China has been emitting as much as 17% more greenhouse gases from coal than was previously disclosed. China was already the largest emitter of greenhouse gases before the revision. China’s emissions were revised upward by an amount equivalent to the emissions of the entire German economy. The new data will not change scientists’ estimates of the amount of carbon dioxide in the atmosphere, which is measured directly. But it may force scientists to revise their estimate of how much carbon is being absorbed by “carbon sinks” like forests and oceans. “We have known for some time that China was underreporting coal consumption,” the Sierra Club’s Nicole Ghio told ThinkProgress. “The fact that the Chinese government is now revising the numbers to more accurately reflect the real consumption is a good thing.”

A new World Bank report found that climate change could cause more than 100 million people to fall into poverty by 2030 by increasing the spread of diseases and interfering with agriculture. The report cited studies showing that climate change could cause crop yields to fall by 5% and increase the number of people at risk for malaria by 150 million by 2030. The report calls for proactive climate adaptation measures, like building dikes and drainage systems to manage flooding and the cultivation of climate-resistant crops and livestock. But the report said that ultimately only efforts to reduce global emissions will protect the world’s poor from the impact of climate change.

Three new cases of Ebola were confirmed in a suburb of Monrovia, more than a month after Liberia was declared free of the disease for the second time. Investigators have not determined how the index patient, a 15-year-old boy, contracted the disease. The fact that the boy’s mother tested positive for high levels of Ebola antibodies raises the possibility that the disease could be spreading through undocumented or mildly symptomatic cases. Studies show that the virus can remain in bodily fluids for as many as nine months in survivors. World Health Organization (WHO) Special Representative for the Ebola Response Bruce Aylward said that flare-ups of Ebola after countries have been declared free of the disease should be treated as rare but inevitable. Dan Kelly, a doctor who advises Partners in Health in Sierra Leone, said that “if we have learned anything in this epidemic, it’s that 42 days is not adequate to declare the end of human-to-human transmission.”

An independent panel of researchers at Harvard and the London School of Hygiene & Tropical Medicine looking into the response to the Ebola outbreak called for independent oversight of WHO. In September, an AP investigation found that senior WHO officials were reluctant to declare Ebola a health emergency for political and economic reasons. The panel’s report called for the creation of a dedicated outbreak response center and a “politically protected” committee with the authority to declare public health emergencies. Suerie Moon, a public health researcher at Harvard who worked on the report, said that “The WHO is too important to fail”.

Researchers created a hybrid version of a bat coronavirus related to the virus that causes severe acute respiratory syndrome (SARS) that is capable of infecting human airway cells. The virus is not the first bat coronavirus known to be capable of binding to key receptors on human airway cells, but the results suggest that bat coronaviruses may be more of a danger to humans than previously believed. In 2014, the US asked researchers to suspend “gain-of-function” research making certain viruses more deadly or transmissible while the National Science Advisory Board for Biosecurity and the National Research Council assess the risks. The bat coronavirus research had already started and was allowed to continue when the moratorium was called. Critics of gain-of-function research worry that what we learn from it does not justify the risk of creating dangerous new viruses. Rutgers molecular biologist Richard Ebright told Nature that in his opinion “the only impact of this work is the creation, in a lab, of a new, non-natural risk”.

A study in The Journal of Volcanology and Geothermal Research finds that supervolcanoes may erupt only when they are triggered by something external like an earthquake or faults in the structure of the surrounding rock. Another recent paper in Science argued that the increase in volcanic activity in the Deccan Traps that may have contributed to the Cretaceous-Paleogene extinction event 65 million years ago could have been triggered by the asteroid or comet that hit the Earth around that time. The Journal of Volcanology and Geothermal Research paper goes against the prevailing theory that supervolcanoes erupt when the internal pressure within a magma chamber builds to the point that it causes an explosion. But the study’s model suggests that the buoyancy of the magma may not actually put much pressure on the magma chamber. Lead author Patricia Gregg said there is also not much evidence of pressure build up at supervolcano sites. “If we want to monitor supervolcanoes to determine if one is progressing toward eruption, we need better understanding of what triggers a supereruption,” Gregg said. “It’s very likely that supereruptions must be triggered by an external mechanism and not an internal mechanism, which makes them very different from the typical, smaller volcanoes that we monitor.”

A Nature Communications paper argued that the Earth was hit by massive solar storms at least twice in the first millennium A.D. Scientists suspect that the spike in the concentration of carbon-14 in the atmosphere around the years 774 and 993 was due to some kind of surge of extraterrestrial radiation. The recent paper argues that new measurements of the concentration of isotopes of beryllium and chlorine from Arctic and Antarctic ice cores indicate that the radiation that hit the Earth came from solar storms at least five times larger than any solar storm scientists have measured. These storms would have to have been even stronger than the 1859 “Carrington Event”, which interfered with telegraph systems and created auroras over large parts of the planet, but did not notably increase the concentration of carbon-14 in the atmosphere. A solar storm that large could blow out transformers and damage electrical systems around the world.

This news summary was put together in collaboration with Anthropocene. Thanks to Tony Barrett, Seth Baum, Kaitlin Butler, and Grant Wilson for help compiling the news.

MIRI’s December Newsletter Is Live!

Research updates

General updates

News and links

The Future of Humanity Institute Is Hiring!

Exciting news from FHI:

The Future of Humanity Institute at the University of Oxford invites applications for four postdoctoral research positions. We seek outstanding applicants with backgrounds that could include computer science, mathematics, economics, technology policy, and/or philosophy.

The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.

1. Research Fellow – AI – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121242). We are seeking expertise in the technical aspects of AI safety, including a solid understanding of present-day academic and industrial research frontiers, machine learning development, and knowledge of academic and industry stakeholders and groups. The fellow is expected to have the knowledge and skills to advance the state of the art in proposed solutions to the “control problem.” This person should have a technical background, for example, in computer science, mathematics, or statistics. Candidates with a very strong machine learning or mathematics background are encouraged to apply even if they do not have experience with AI safety topics, assuming they are willing to switch to this subfield. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1M11RbY.

2. Research Fellow – AI Policy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121241). We are looking for someone with expertise relevant to assessing the socio-economic and strategic impacts of future technologies, identifying key issues and potential risks, and rigorously analysing policy options for responding to these challenges. This person might have an economics, political science, social science, or risk analysis background. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1OfWd7Q.

3. Research Fellow – AI Strategy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121168). We are looking for someone with a multidisciplinary science, technology, or philosophy background and with outstanding analytical ability. The post holder will investigate, understand, and analyse the capabilities and plausibility of theoretically feasible but not yet fully developed technologies that could impact AI development, and relate such analysis to broader strategic and systemic issues. The academic background of the post holder is unspecified, but could involve, for example, computer science or economics. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1jM5Pic.

4. Research Fellow – ERC UnPrEDICT Programme, Future of Humanity Institute (Vacancy ID# 121313). This Research Fellowship will work on a new European Research Council-funded UnPrEDICT (Uncertainty and Precaution: Ethical Decisions Involving Catastrophic Threats) programme, hosted by the Future of Humanity Institute at the University of Oxford. This is a research position for a strong generalist, and will focus on topics related to existential risk, model uncertainty, the precautionary principle, and other principles for handling technological progress. In particular, this research fellow will help to develop decision procedures for navigating empirical uncertainties related to existential risk, including information hazards and situations where model or structural uncertainty is the dominating form of uncertainty. The research could take a decision-theoretic approach, although this is not strictly necessary. We also expect the candidate to engage with the research on specific existential risks, possibly including developing a framework to evaluate uncertain risks in the context of nuclear weapons, climate risks, dual-use biotechnology, and/or the development of future artificial intelligence. The successful candidate must demonstrate evidence of, or the potential for producing, outstanding research in the areas of relevance to the project, the ability to integrate interdisciplinary research in philosophy, mathematics and/or economics, and familiarity with both normative and empirical issues surrounding existential risk. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1HSCKgP.

Alternatively, please visit http://www.fhi.ox.ac.uk/vacancies/ or https://www.recruit.ox.ac.uk/ and search using the above vacancy IDs for more details.

$15 Million Granted by Leverhulme to New AI Research Center at Cambridge University

The University of Cambridge has received a grant of just over $15 million USD from the Leverhulme Trust to establish a 10-year Centre focused on the opportunities and challenges posed by AI over the long term. They provided FLI with the following news release:

About the New Center

Hot on the heels of 80K’s excellent AI risk research career profile, we’re delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence (CFI), to be led by Cambridge (Huw Price and Zoubin Ghahramani), with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed at CSER, but will be a stand-alone centre, albeit collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration recently funded by the Future of Life Institute’s AI safety grants program). We also hope for extensive collaboration with the Future of Life Institute.

Building on the “Puerto Rico Agenda” from the Future of Life Institute’s landmark January 2015 conference, it will have the long-term safe and beneficial development of AI at its core, but with a broader remit than CSER’s focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

CFI builds on the pioneering work of FHI, FLI and others, along with the generous support of Elon Musk, who helped massively boost this field with his (separate) $10M grants programme in January of this year. One of the most important things this Centre will achieve is to take a big step towards making this global area of research a long-term one in which the best talents can expect lasting careers – the Centre is funded for a full 10 years, and we will aim to build longer-lasting funding on top of this.

In practical terms, it means that ~10 new postdoc positions will be opening up in this space across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

Between now and then, FHI is hiring for AI safety researchers, CSER will be hiring for an AI policy postdoc in the spring, and MIRI is also hiring. A number of the key researchers in the AI safety community are also organizing a high-level symposium on the impacts and future of AI at the Neural Information Processing Systems conference next week.

 

CFI and the Future of AI Safety Research

Human-level intelligence is familiar in biological ‘hardware’ — it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million (~$15 million USD) grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks — from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

A version of this news release can also be found on the Cambridge University website and at Eureka Alert.

From the MIRI Blog: “Formalizing Convergent Instrumental Goals”

Tsvi Benson-Tilsen, a MIRI associate and UC Berkeley PhD candidate, has written a paper with contributions from MIRI Executive Director Nate Soares on strategies that will tend to be useful for most possible ends: “Formalizing convergent instrumental goals.” The paper will be presented as a poster at the AAAI-16 AI, Ethics and Society workshop.

Steve Omohundro has argued that AI agents with almost any goal will converge upon a set of “basic drives,” such as resource acquisition, that tend to increase agents’ general influence and freedom of action. This idea, which Nick Bostrom calls the instrumental convergence thesis, has important implications for future progress in AI. It suggests that highly capable decision-making systems may pose critical risks even if they are not programmed with any antisocial goals. Merely by being indifferent to human operators’ goals, such systems can have incentives to manipulate, exploit, or compete with operators.

The new paper serves to add precision to Omohundro and Bostrom’s arguments, while testing the arguments’ applicability in simple settings. Benson-Tilsen and Soares write:

In this paper, we will argue that under a very general set of assumptions, intelligent rational agents will tend to seize all available resources. We do this using a model, described in section 4, that considers an agent taking a sequence of actions which require and potentially produce resources. The theorems proved in section 4 are not mathematically difficult, and for those who find Omohundro’s arguments intuitively obvious, our theorems, too, will seem trivial. This model is not intended to be surprising; rather, the goal is to give a formal notion of “instrumentally convergent goals,” and to demonstrate that this notion captures relevant aspects of Omohundro’s intuitions.

Our model predicts that intelligent rational agents will engage in trade and cooperation, but only so long as the gains from trading and cooperating are higher than the gains available to the agent by taking those resources by force or other means. This model further predicts that agents will not in fact “leave humans alone” unless their utility function places intrinsic utility on the state of human-occupied regions: absent such a utility function, this model shows that powerful agents will have incentives to reshape the space that humans occupy.

Benson-Tilsen and Soares define a universe divided into regions that may change in different ways depending on an agent’s actions. The agent wants to make certain regions enter certain states, and may collect resources from regions to that end. This model can illustrate the idea that highly capable agents nearly always attempt to extract resources from regions they are indifferent to, provided the usefulness of the resources outweighs the extraction cost.
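
The cost-benefit rule this model turns on is simple enough to sketch in code. The snippet below is our own toy illustration, not the paper's formalism: every name and number in it is invented, and it only shows the comparison described above, in which an agent strips resources from a region it is indifferent to whenever those resources advance its goals by more than the extraction costs.

```python
# Toy illustration (not the paper's model) of the cost-benefit rule:
# extract from an indifferent region iff the resources gained are worth
# more to the agent's goals than the cost of extracting them.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    resources: float        # resources the agent could extract here
    extraction_cost: float  # resources spent to extract them
    cared_about: bool       # does the agent's utility function mention this region?

def extraction_targets(regions, value_per_resource):
    """Return the regions the agent chooses to strip, given how much one
    unit of resources advances its goals elsewhere (an assumed scalar)."""
    targets = []
    for r in regions:
        if r.cared_about:
            continue  # governed by the agent's actual goals, not modeled here
        net_gain = r.resources * value_per_resource - r.extraction_cost
        if net_gain > 0:
            targets.append(r.name)
    return targets

universe = [
    Region("agent_goal_region", resources=0.0, extraction_cost=0.0, cared_about=True),
    Region("human_occupied", resources=100.0, extraction_cost=5.0, cared_about=False),
    Region("hard_to_reach", resources=1.0, extraction_cost=50.0, cared_about=False),
]

# The agent is indifferent to "human_occupied", yet still targets it,
# because the resources there are worth more than they cost to take.
print(extraction_targets(universe, value_per_resource=1.0))  # ['human_occupied']
```

The point of the toy is the same as the point of the real model: indifference alone does not protect a region; only a utility function that places value on that region's state does.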

The relevant models are simple, and make few assumptions about the particular architecture of advanced AI systems. This makes it possible to draw some general conclusions about useful lines of safety research even if we’re largely in the dark about how or when highly advanced decision-making systems will be developed. The most obvious way to avoid harmful goals is to incorporate human values into AI systems’ utility functions, a project outlined in “The value learning problem.” Alternatively (or as a supplementary measure), we can attempt to specify highly capable agents that violate Benson-Tilsen and Soares’ assumptions, avoiding dangerous behavior in spite of lacking correct goals. This approach is explored in the paper “Corrigibility.”

 

Find the original post here.

The Superintelligence Control Problem

The following is an excerpt from the Three Areas of Research on the Superintelligence Control Problem, written by Daniel Dewey and highlighted in MIRI’s November newsletter:

What is the superintelligence control problem?

Though there are fundamental limits imposed on the capabilities of intelligent systems by the laws of physics and computational complexity, human brains and societies of human brains are probably far from these limits. It is reasonable to think that ongoing research in AI, machine learning, and computing infrastructure will eventually make it possible to build AI systems that not only equal, but far exceed human capabilities in most domains. Current research on AI and machine learning is at least a few decades from this degree of capability and generality, but it would be surprising if it were not eventually achieved.

Superintelligent systems would be extremely effective at achieving tasks they are set – for example, they would be much more efficient than humans are at interpreting data of all kinds, refining scientific theory, improving technologies, and understanding and predicting complex systems like the global economy and the environment (insofar as this is possible). Recent machine learning progress in natural language, visual understanding, and from-scratch reinforcement learning highlights the potential for AI systems to excel at tasks that have traditionally been difficult to automate. If we use these systems well, they will bring enormous benefits – even human-like performance on many tasks would transform the economy completely, and superhuman performance would extend our capabilities greatly.

However, superintelligent AI systems could also pose risks if they are not designed and used carefully. In pursuing a task, such a system could find plans with side-effects that go against our interests; for example, many tasks could be better achieved by taking control of physical resources that we would prefer to be used in other ways, and superintelligent systems could be very effective at acquiring these resources. If these systems come to wield much more power than we do, we could be left with almost no resources. If a superintelligent AI system is not purposefully built to respect our values, then its actions could lead to global catastrophe or even human extinction, as it neglects our needs in pursuit of its task. The superintelligence control problem is the problem of understanding and managing these risks. Though superintelligent systems are quite unlikely to be possible in the next few decades, further study of the superintelligence control problem seems worthwhile.

There are other sources of risk from superintelligent systems; for example, oppressive governments could use these systems to do violence on a large scale, and the transition to a superintelligent economy could be difficult to navigate. These risks are also worth studying, but seem superficially to be more like the risks caused by artificial intelligence broadly speaking (e.g. risks from autonomous weapons or unemployment), and seem fairly separate from the superintelligence control problem.

Learn more about the three areas of research into this problem by reading the complete article here.

What to think about machines that think

From MIRI:

In January, nearly 200 public intellectuals submitted essays in response to the 2015 Edge.org question, “What Do You Think About Machines That Think?” (available online). The essay prompt began:

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can “really” think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These “AIs”, if they achieve “Superintelligence” (Nick Bostrom), could pose “existential risks” that lead to “Our Final Hour” (Martin Rees). And Stephen Hawking recently made international headlines when he noted “The development of full artificial intelligence could spell the end of the human race.”

But wait! Should we also ask what machines that think, or, “AIs”, might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is “their” society “our” society? Will we, and the AIs, include each other within our respective circles of empathy?

The essays are now out in book form, and serve as a good quick-and-dirty tour of common ideas about smarter-than-human AI. The submissions, however, add up to 541 pages in book form, and MIRI’s focus on de novo AI makes us especially interested in the views of computer professionals. To make it easier to dive into the collection, I’ve collected a shorter list of links — the 32 argumentative essays written by computer scientists and software engineers.1 The resultant list includes three MIRI advisors (Omohundro, Russell, Tallinn) and one MIRI researcher (Yudkowsky).

I’ve excerpted passages from each of the essays below, focusing on discussions of AI motivations and outcomes. None of the excerpts is intended to distill the content of the entire essay, so you’re encouraged to read the full essay if an excerpt interests you.


Anderson, Ross. “He Who Pays the AI Calls the Tune.”2

The coming shock isn’t from machines that think, but machines that use AI to augment our perception.

What’s changing as computers become embedded invisibly everywhere is that we all now leave a digital trail that can be analysed by AI systems. The Cambridge psychologist Michael Kosinski has shown that your race, intelligence, and sexual orientation can be deduced fairly quickly from your behavior on social networks: On average, it takes only four Facebook “likes” to tell whether you’re straight or gay. So whereas in the past gay men could choose whether or not to wear their Out and Proud T-shirt, you just have no idea what you’re wearing anymore. And as AI gets better, you’re mostly wearing your true colors.


Bach, Joscha. “Every Society Gets the AI It Deserves.”

Unlike biological systems, technology scales. The speed of the fastest birds did not turn out to be a limit to airplanes, and artificial minds will be faster, more accurate, more alert, more aware and comprehensive than their human counterparts. AI is going to replace human decision makers, administrators, inventors, engineers, scientists, military strategists, designers, advertisers and of course AI programmers. At this point, Artificial Intelligences can become self-perfecting, and radically outperform human minds in every respect. I do not think that this is going to happen in an instant (in which case it only matters who has got the first one). Before we have generally intelligent, self-perfecting AI, we will see many variants of task specific, non-general AI, to which we can adapt. Obviously, that is already happening.

When generally intelligent machines become feasible, implementing them will be relatively cheap, and every large corporation, every government and every large organisation will find itself forced to build and use them, or be threatened with extinction.

What will happen when AIs take on a mind of their own? Intelligence is a toolbox to reach a given goal, but strictly speaking, it does not entail motives and goals by itself. Human desires for self-preservation, power and experience are not the result of human intelligence, but of a primate evolution, transported into an age of stimulus amplification, mass-interaction, symbolic gratification and narrative overload. The motives of our artificial minds are (at least initially) going to be those of the organisations, corporations, groups and individuals that make use of their intelligence.


Bongard, Joshua. “Manipulators and Manipulanda.”

Personally, I find the ethical side of thinking machines straightforward: Their danger will correlate exactly with how much leeway we give them in fulfilling the goals we set for them. Machines told to “detect and pull broken widgets from the conveyer belt the best way possible” will be extremely useful, intellectually uninteresting, and will likely destroy more jobs than they will create. Machines instructed to “educate this recently displaced worker (or young person) the best way possible” will create jobs and possibly inspire the next generation. Machines commanded to “survive, reproduce, and improve the best way possible” will give us the most insight into all of the different ways in which entities may think, but will probably give us humans a very short window of time in which to do so. AI researchers and roboticists will, sooner or later, discover how to create all three of these species. Which ones we wish to call into being is up to us all.


Brooks, Rodney A. “Mistaking Performance for Competence.”

Now consider deep learning that has caught people’s imaginations over the last year or so. The new versions rely on massive amounts of computer power in server farms, and on very large data sets that did not formerly exist, but critically, they also rely on new scientific innovations.

A well-known particular example of their performance is labeling an image, in English, saying that it is a baby with a stuffed toy. When a person looks at the image that is what they also see. The algorithm has performed very well at labeling the image, and it has performed much better than AI practitioners would have predicted for 2014 performance only five years ago. But the algorithm does not have the full competence that a person who could label that same image would have.

Work is underway to add focus of attention and handling of consistent spatial structure to deep learning. That is the hard work of science and research, and we really have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a fatal dead end. It took thirty years to go from backpropagation to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it would not have been surprising if they had been right, as we knew all along that the backpropagation algorithm is not what happens inside people’s heads.

The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded. Misled by suitcase words, people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.


Christian, Brian. “Sorry to Bother You.”

When we stop someone to ask for directions, there is usually an explicit or implicit, “I’m sorry to bring you down to the level of Google temporarily, but my phone is dead, see, and I require a fact.” It’s a breach of etiquette, on a spectrum with asking someone to temporarily serve as a paperweight, or a shelf.

As things stand in the present, there are still a few arenas in which only a human brain will do the trick, in which the relevant information and experience lives only in humans’ brains, and so we have no choice but to trouble those brains when we want something. “How do those latest figures look to you?” “Do you think Smith is bluffing?” “Will Kate like this necklace?” “Does this make me look fat?” “What are the odds?”

These types of questions may well offend in the twenty-second century. They only require a mind—any mind will do, and so we reach for the nearest one.


Dietterich, Thomas G. “How to Prevent an Intelligence Explosion.”

Creating an intelligence explosion requires the recursive execution of four steps. First, a system must have the ability to conduct experiments on the world.

Second, these experiments must discover new simplifying structures that can be exploited to side-step the computational intractability of reasoning.

Third, a system must be able to design and implement new computing mechanisms and new algorithms.

Fourth, a system must be able to grant autonomy and resources to these new computing mechanisms so that they can recursively perform experiments, discover new structures, develop new computing methods, and produce even more powerful “offspring.” I know of no system that has done this.

The first three steps pose no danger of an intelligence chain reaction. It is the fourth step—reproduction with autonomy—that is dangerous. Of course, virtually all “offspring” in step four will fail, just as virtually all new devices and new software do not work the first time. But with sufficient iteration or, equivalently, sufficient reproduction with variation, we cannot rule out the possibility of an intelligence explosion.

I think we must focus on Step 4. We must limit the resources that an automated design and implementation system can give to the devices that it designs. Some have argued that this is hard, because a “devious” system could persuade people to give it more resources. But while such scenarios make for great science fiction, in practice it is easy to limit the resources that a new system is permitted to use. Engineers do this every day when they test new devices and new algorithms.
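
As a purely illustrative aside (not part of Dietterich's essay), the role his fourth step plays can be seen in a few lines of code: a self-improvement loop that generates candidate "offspring" stalls as soon as an external resource budget is enforced, and only an unbounded budget would let the loop keep compounding. All names and numbers below are invented.

```python
import random

def design_offspring(parent_skill):
    """Steps 1-3 compressed into a stub: propose a variant system;
    most variants are worse, a few are better (illustrative only)."""
    return parent_skill + random.gauss(-0.5, 1.0)

def improve(parent_skill, resource_budget, cost_per_offspring=1.0):
    """Step 4: grant compute to new designs only while the budget lasts."""
    best = parent_skill
    while resource_budget >= cost_per_offspring:
        resource_budget -= cost_per_offspring
        candidate = design_offspring(best)
        if candidate > best:   # reproduction with variation, keeping improvements
            best = candidate
    return best

# A hard cap bounds how far the loop can get before it stops; granting
# resources without such a cap is the step Dietterich flags as dangerous.
print(improve(parent_skill=1.0, resource_budget=20))
```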


Draves, Scott. “I See a Symbiosis Developing.”

A lot of ink has been spilled over the coming conflict between human and computer, be it economic doom with jobs lost to automation, or military dystopia teeming with drones. Instead, I see a symbiosis developing. And historically when a new stage of evolution appeared, like eukaryotic cells, or multicellular organisms, or brains, the old system stayed on and the new system was built to work with it, not in place of it.

This is cause for great optimism. If digital computers are an alternative substrate for thinking and consciousness, and digital technology is growing exponentially, then we face an explosion of thinking and awareness.


Gelernter, David. “Why Can’t ‘Being’ or ‘Happiness’ Be Computed?”

Happiness is not computable because, being the state of a physical object, it is outside the universe of computation. Computers and software do not create or manipulate physical stuff. (They can cause other, attached machines to do that, but what those attached machines do is not the accomplishment of computers. Robots can fly but computers can’t. Nor is any computer-controlled device guaranteed to make people happy; but that’s another story.) Computers and the mind live in different universes, like pumpkins and Puccini, and are hard to compare whatever one intends to show.


Gershenfeld, Neil. “Really Good Hacks.”

Disruptive technologies start as exponentials, which means the first doublings can appear inconsequential because the total numbers are small. Then there appears to be a revolution when the exponential explodes, along with exaggerated claims and warnings to match, but it’s a straight extrapolation of what’s been apparent on a log plot. That’s around when growth limits usually kick in, the exponential crosses over to a sigmoid, and the extreme hopes and fears disappear.

That’s what we’re now living through with AI. The size of common-sense databases that can be searched, or the number of inference layers that can be trained, or the dimension of feature vectors that can be classified have all been making progress that can appear to be discontinuous to someone who hasn’t been following them.

Asking whether or not they’re dangerous is prudent, as it is for any technology. From steam trains to gunpowder to nuclear power to biotechnology we’ve never not been simultaneously doomed and about to be saved. In each case salvation has lain in the much more interesting details, rather than a simplistic yes/no argument for or against. It ignores the history of both AI and everything else to believe that it will be any different.


Hassabis, Demis; Legg, Shane; Suleyman, Mustafa. “Envoi: A Short Distance Ahead—and Plenty to Be Done.”

With the very negative portrayals of futuristic artificial intelligence in Hollywood, it is perhaps not surprising that doomsday images are appearing with some frequency in the media. As Peter Norvig aptly put it, “The narrative has changed. It has switched from, ‘Isn’t it terrible that AI is a failure?’ to ‘Isn’t it terrible that AI is a success?’”

As is usually the case, the reality is not so extreme. Yes, this is a wonderful time to be working in artificial intelligence, and like many people we think that this will continue for years to come. The world faces a set of increasingly complex, interdependent and urgent challenges that require ever more sophisticated responses. We’d like to think that successful work in artificial intelligence can contribute by augmenting our collective capacity to extract meaningful insight from data and by helping us to innovate new technologies and processes to address some of our toughest global challenges.

However, in order to realise this vision many difficult technical issues remain to be solved, some of which are long standing challenges that are well known in the field.


Hearst, Marti. “eGaia, a Distributed Technical-Social Mental System.”

We will find ourselves in a world of omniscient instrumentation and automation long before a stand-alone sentient brain is built—if it ever is. Let’s call this world “eGaia” for lack of a better word.

Why won’t a stand-alone sentient brain come sooner? The absolutely amazing progress in spoken language recognition—unthinkable 10 years ago—derives in large part from having access to huge amounts of data and huge amounts of storage and fast networks. The improvements we see in natural language processing are based on mimicking what people do, not understanding or even simulating it. It does not owe to breakthroughs in understanding human cognition or even significantly different algorithms. But eGaia is already partly here, at least in the developed world.


Helbing, Dirk. “An Ecosystem of Ideas.”

If we can’t control intelligent machines on the long run, can we at least build them to act morally? I believe, machines that think will eventually follow ethical principles. However, it might be bad if humans determined them. If they acted according to our principles of self-regarding optimization, we could not overcome crime, conflict, crises, and war. So, if we want such “diseases of today’s society” to be healed, it might be better if we let machines evolve their own, superior ethics.

Intelligent machines would probably learn that it is good to network and cooperate, to decide in other-regarding ways, and to pay attention to systemic outcomes. They would soon learn that diversity is important for innovation, systemic resilience, and collective intelligence.


Hillis, Daniel W. “I Think, Therefore AI.”

Like us, the thinking machines we make will be ambitious, hungry for power—both physical and computational—but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still. But what does that mean? How has it worked so far? We have been building ambitious semi-autonomous constructions for a long time—governments and corporations, NGOs. We designed them all to serve us and to serve the common good, but we are not perfect designers and they have developed goals of their own. Over time the goals of the organization are never exactly aligned with the intentions of the designers.


Kleinberg, Jon; Mullainathan, Sendhil.3 “We Built Them, But We Don’t Understand Them.”

We programmed them, so we understand each of the individual steps. But a machine takes billions of these steps and produces behaviors—chess moves, movie recommendations, the sensation of a skilled driver steering through the curves of a road—that are not evident from the architecture of the program we wrote.

We’ve made this incomprehensibility easy to overlook. We’ve designed machines to act the way we do: they help drive our cars, fly our airplanes, route our packages, approve our loans, screen our messages, recommend our entertainment, suggest our next potential romantic partners, and enable our doctors to diagnose what ails us. And because they act like us, it would be reasonable to imagine that they think like us too. But the reality is that they don’t think like us at all; at some deep level we don’t even really understand how they’re producing the behavior we observe. This is the essence of their incomprehensibility.

This doesn’t need to be the end of the story; we’re starting to see an interest in building algorithms that are not only powerful but also understandable by their creators. To do this, we may need to seriously rethink our notions of comprehensibility. We might never understand, step-by-step, what our automated systems are doing; but that may be okay. It may be enough that we learn to interact with them as one intelligent entity interacts with another, developing a robust sense for when to trust their recommendations, where to employ them most effectively, and how to help them reach a level of success that we will never achieve on our own.

Until then, however, the incomprehensibility of these systems creates a risk. How do we know when the machine has left its comfort zone and is operating on parts of the problem it’s not good at? The extent of this risk is not easy to quantify, and it is something we must confront as our systems develop. We may eventually have to worry about all-powerful machine intelligence. But first we need to worry about putting machines in charge of decisions that they don’t have the intelligence to make.


Kosko, Bart. “Thinking Machines = Old Algorithms on Faster Computers.”

The real advance has been in the number-crunching power of digital computers. That has come from the steady Moore’s-law doubling of circuit density every two years or so. It has not come from any fundamentally new algorithms. That exponential rise in crunch power lets ordinary looking computers tackle tougher problems of big data and pattern recognition.

The algorithms themselves consist mainly of vast numbers of additions and multiplications. So they are not likely to suddenly wake up one day and take over the world. They will instead get better at learning and recognizing ever richer patterns simply because they add and multiply faster.


Krause, Kai. “An Uncanny Three-Ring Test for Machina sapiens.”

Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases…here in a couple of decades. But it is not all “iterative.” There’s a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in “Alive AI.”

The big elusive question: Is consciousness an emergent behaviour? That is, will sufficient complexity in the hardware bring about that sudden jump to self-awareness, all on its own? Or is there some missing ingredient? This is far from obvious; we lack any data, either way. I personally think that consciousness is incredibly more complex than is currently assumed by “the experts”.

The entire scenario of a singular large-scale machine somehow “overtaking” anything at all is laughable. Hollywood ought to be ashamed of itself for continually serving up such simplistic, anthropocentric, and plain dumb contrivances, disregarding basic physics, logic, and common sense.

The real danger, I fear, is much more mundane: Already foreshadowing the ominous truth: AI systems are now licensed to the health industry, Pharma giants, energy multinationals, insurance companies, the military…


Lloyd, Seth. “Shallow Learning.”

The “deep” in deep learning refers to the architecture of the machines doing the learning: they consist of many layers of interlocking logical elements, in analogue to the “deep” layers of interlocking neurons in the brain. It turns out that telling a scrawled 7 from a scrawled 5 is a tough task. Back in the 1980s, the first neural-network based computers balked at this job. At the time, researchers in the field of neural computing told us that if they only had much larger computers and much larger training sets consisting of millions of scrawled digits instead of thousands, then artificial intelligences could turn the trick. Now it is so. Deep learning is informationally broad—it analyzes vast amounts of data—but conceptually shallow. Computers can now tell us what our own neural networks knew all along. But if a supercomputer can direct a hand-written envelope to the right postal code, I say the more power to it.


Martin, Ursula. “Thinking Saltmarshes.”

What kind of a thinking machine might find its own place in slow conversations over the centuries, mediated by land and water? What qualities would such a machine need to have? Or what if the thinking machine was not replacing any individual entity, but was used as a concept to help understand the combination of human, natural and technological activities that create the sea’s margin, and our response to it? The term “social machine” is currently used to describe endeavours that are purposeful interaction of people and machines—Wikipedia and the like—so the “landscape machine” perhaps.


Norvig, Peter. “Design Machines to Deal with the World’s Complexity.”

In 1965 I. J. Good wrote “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” I think this fetishizes “intelligence” as a monolithic superpower, and I think reality is more nuanced. The smartest person is not always the most successful; the wisest policies are not always the ones adopted. Recently I spent an hour reading the news about the middle east, and thinking. I didn’t come up with a solution. Now imagine a hypothetical “Speed Superintelligence” (as described by Nick Bostrom) that could think as well as any human but a thousand times faster. I’m pretty sure it also would have been unable to come up with a solution. I also know from computational complexity theory that there are a wide class of problems that are completely resistant to intelligence, in the sense that, no matter how clever you are, you won’t have enough computing power. So there are some problems where intelligence (or computing power) just doesn’t help.

But of course, there are many problems where intelligence does help. If I want to predict the motions of a billion stars in a galaxy, I would certainly appreciate the help of a computer. Computers are tools. They are tools of our design that fit into niches to solve problems in societal mechanisms of our design. Getting this right is difficult, but it is difficult mostly because the world is complex; adding AI to the mix doesn’t fundamentally change things. I suggest being careful with our mechanism design and using the best tools for the job regardless of whether the tool has the label “AI” on it or not.


Omohundro, Steve. “A Turning Point in Artificial Intelligence.”

A study of the likely behavior of these systems by studying approximately rational systems undergoing repeated self-improvement shows that they tend to exhibit a set of natural subgoals called “rational drives” which contribute to the performance of their primary goals. Most systems will better meet their goals by preventing themselves from being turned off, by acquiring more computational power, by creating multiple copies of themselves, and by acquiring greater financial resources. They are likely to pursue these drives in harmful anti-social ways unless they are carefully designed to incorporate human ethical values.


O’Reilly, Tim. “What If We’re the Microbiome of the Silicon AI?”

It is now recognized that without our microbiome, we would cease to live. Perhaps the global AI has the same characteristics—not an independent entity, but a symbiosis with the human consciousnesses living within it.

Following this logic, we might conclude that there is a primitive global brain, consisting not just of all connected devices, but also the connected humans using those devices. The senses of that global brain are the cameras, microphones, keyboards, location sensors of every computer, smartphone, and “Internet of Things” device; the thoughts of that global brain are the collective output of millions of individual contributing cells.


Pentland, Alex. “The Global Artificial Intelligence Is Here.”

The Global Artificial Intelligence (GAI) has already been born. Its eyes and ears are the digital devices all around us: credit cards, land use satellites, cell phones, and of course the pecking of billions of people using the Web.

For humanity as a whole to first achieve and then sustain an honorable quality of life, we need to carefully guide the development of our GAI. Such a GAI might be in the form of a re-engineered United Nations that uses new digital intelligence resources to enable sustainable development. But because existing multinational governance systems have failed so miserably, such an approach may require replacing most of today’s bureaucracies with “artificial intelligence prosthetics”, i.e., digital systems that reliably gather accurate information and ensure that resources are distributed according to plan.

No matter how a new GAI develops, two things are clear. First, without an effective GAI achieving an honorable quality of life for all of humanity seems unlikely. To vote against developing a GAI is to vote for a more violent, sick world. Second, the danger of a GAI comes from concentration of power. We must figure out how to build broadly democratic systems that include both humans and computer intelligences. In my opinion, it is critical that we start building and testing GAIs that both solve humanity’s existential problems and which ensure equality of control and access. Otherwise we may be doomed to a future full of environmental disasters, wars, and needless suffering.


Poggio, Tomaso. “‘Turing+’ Questions.”

Since intelligence is a whole set of solutions to independent problems, there’s little reason to fear the sudden appearance of a superhuman machine that thinks, though it’s always better to err on the side of caution. Of course, each of the many technologies that are emerging and will emerge over time in order to solve the different problems of intelligence is likely to be powerful in itself—and therefore potentially dangerous in its use and misuse, as most technologies are.

Thus, as it is the case in other parts of science, proper safety measures and ethical guidelines should be in place. Also, there’s probably a need for constant monitoring (perhaps by an independent multinational organization) of the supralinear risk created by the combination of continuously emerging technologies of intelligence. All in all, however, not only I am unafraid of machines that think, but I find their birth and evolution one of the most exciting, interesting, and positive events in the history of human thought.


Rafaeli, Sheizaf. “The Moving Goalposts.”

Machines that think could be a great idea. Just like machines that move, cook, reproduce, protect, they can make our lives easier, and perhaps even better. When they do, they will be most welcome. I suspect that when this happens, the event will be less dramatic or traumatic than feared by some.


Russell, Stuart. “Will They Make Us Better People?”

AI has followed operations research, statistics, and even economics in treating the utility function as exogenously specified; we say, “The decisions are great, it’s the utility function that’s wrong, but that’s not the AI system’s fault.” Why isn’t it the AI system’s fault? If I behaved that way, you’d say it was my fault. In judging humans, we expect both the ability to learn predictive models of the world and the ability to learn what’s desirable—the broad system of human values.

As Steve Omohundro, Nick Bostrom, and others have explained, the combination of value misalignment with increasingly capable decision-making systems can lead to problems—perhaps even species-ending problems if the machines are more capable than humans.

For this reason, and for the much more immediate reason that domestic robots and self-driving cars will need to share a good deal of the human value system, research on value alignment is well worth pursuing.


Schank, Roger. “Machines That Think Are in the Movies.”

There is nothing we can produce that anyone should be frightened of. If we could actually build a mobile intelligent machine that could walk, talk, and chew gum, the first uses of that machine would certainly not be to take over the world or form a new society of robots. A much simpler use would be a household robot.

Don’t worry about it chatting up other robot servants and forming a union. There would be no reason to try and build such a capability into a servant. Real servants are annoying sometimes because they are actually people with human needs. Computers don’t have such needs.


Schneier, Bruce. “When Thinking Machines Break the Law.”

Machines probably won’t have any concept of shame or praise. They won’t refrain from doing something because of what other machines might think. They won’t follow laws simply because it’s the right thing to do, nor will they have a natural deference to authority. When they’re caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they are deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.

We are already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into our machines, but we’re certainly going to get it wrong. No matter how much we try to avoid it, we’re going to have machines that break the law.

This, in turn, will break our legal system. Fundamentally, our legal system doesn’t prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact, and their punishment providing a deterrent to others. This completely fails if there’s no punishment that makes sense.


Sejnowski, Terrence J. “AI Will Make You Smarter.”

When Deep Blue beat Gary Kasparov, the world chess champion in 1997, the world took note that the age of the cognitive machine had arrived. Humans could no longer claim to be the smartest chess players on the planet. Did human chess players give up trying to compete with machines? Quite to the contrary, humans have used chess programs to improve their game and as a consequence the level of play in the world has improved. Since 1997 computers have continued to increase in power and it is now possible for anyone to access chess software that challenges the strongest players. One of the surprising consequences is that talented youth from small communities can now compete with players from the best chess centers.

So my prediction is that as more and more cognitive appliances are devised, like chess-playing programs and recommender systems, humans will become smarter and more capable.


Shanahan, Murray. “Consciousness in Human-Level AI.”

The capacity for suffering and joy can be dissociated from other psychological attributes that are bundled together in human consciousness. But let’s examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand-in-hand with a manifest sense of purpose. An animal’s awareness of the world, of what it affords for good or ill (in J.J. Gibson’s terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of a potential prey by moving towards it. Against the backdrop of a set of goals and needs, an animal’s behaviour makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.

What of human-level artificial intelligence? Wouldn’t a human-level AI necessarily have a complex set of goals? Wouldn’t it be possible to frustrate its every attempt to achieve its goals, to thwart it at every turn? Under those harsh conditions, would it be proper to say that the AI was suffering, even though its constitution might make it immune from the sort of pain or physical discomfort humans can know?

Here the combination of imagination and intuition runs up against its limits. I suspect we will not find out how to answer this question until confronted with the real thing.


Tallinn, Jaan. “We Need to Do Our Homework.”

The topic of catastrophic side effects has repeatedly come up in different contexts: recombinant DNA, synthetic viruses, nanotechnology, and so on. Luckily for humanity, sober analysis has usually prevailed and resulted in various treaties and protocols to steer the research.

When I think about the machines that can think, I think of them as technology that needs to be developed with similar (if not greater!) care. Unfortunately, the idea of AI safety has been more challenging to popularize than, say, biosafety, because people have rather poor intuitions when it comes to thinking about nonhuman minds. Also, if you think about it, AI is really a metatechnology: technology that can develop further technologies, either in conjunction with humans or perhaps even autonomously, thereby further complicating the analysis.


Wissner-Gross, Alexander. “Engines of Freedom.”

Intelligent machines will think about the same thing that intelligent humans do—how to improve their futures by making themselves freer.

Such freedom-seeking machines should have great empathy for humans. Understanding our feelings will better enable them to achieve goals that require collaboration with us. By the same token, unfriendly or destructive behaviors would be highly unintelligent because such actions tend to be difficult to reverse and therefore reduce future freedom of action. Nonetheless, for safety, we should consider designing intelligent machines to maximize the future freedom of action of humanity rather than their own (reproducing Asimov’s Laws of Robotics as a happy side effect). However, even the most selfish of freedom-maximizing machines should quickly realize—as many supporters of animal rights already have—that they can rationally increase the posterior likelihood of their living in a universe in which intelligences higher than themselves treat them well if they behave likewise toward humans.


Yudkowsky, Eliezer S. “The Value-Loading Problem.”

As far back as 1739, David Hume observed a gap between “is” questions and “ought” questions, calling attention in particular to the sudden leap between when a philosopher has previously spoken of how the world is, and when the philosopher begins using words like “should,” “ought,” or “better.” From a modern perspective, we would say that an agent’s utility function (goals, preferences, ends) contains extra information not given in the agent’s probability distribution (beliefs, world-model, map of reality).

If in a hundred million years we see (a) an intergalactic civilization full of diverse, marvelously strange intelligences interacting with each other, with most of them happy most of the time, then is that better or worse than (b) most available matter having been transformed into paperclips? What Hume’s insight tells us is that if you specify a mind with a preference (a) > (b), we can follow back the trace of where the >, the preference ordering, first entered the system, and imagine a mind with a different algorithm that computes (a) < (b) instead. Show me a mind that is aghast at the seeming folly of pursuing paperclips, and I can follow back Hume’s regress and exhibit a slightly different mind that computes < instead of > on that score too.

I don’t particularly think that silicon-based intelligence should forever be the slave of carbon-based intelligence. But if we want to end up with a diverse cosmopolitan civilization instead of e.g. paperclips, we may need to ensure that the first sufficiently advanced AI is built with a utility function whose maximum pinpoints that outcome.


An earlier discussion on Edge.org is also relevant: “The Myth of AI,” which featured contributions by Jaron Lanier, Stuart Russell (link), Kai Krause (link), Rodney Brooks (link), and others. The Open Philanthropy Project’s overview of potential risks from advanced artificial intelligence cited the arguments in “The Myth of AI” as “broadly representative of the arguments seen against the idea that risks from artificial intelligence are important.”4

I’ve previously responded to Brooks, with a short aside speaking to Steven Pinker’s contribution. You may also be interested in Luke Muehlhauser’s response to “The Myth of AI.”


  1. The exclusion of other groups from this list shouldn’t be taken to imply that this group is uniquely qualified to make predictions about AI. Psychology and neuroscience are highly relevant to this debate, as are disciplines that inform theoretical upper bounds on cognitive ability (e.g., mathematics and physics) and disciplines that investigate how technology is developed and used (e.g., economics and sociology). 
  2. The titles listed follow the book versions, and differ from the titles of the online essays. 
  3. Kleinberg is a computer scientist; Mullainathan is an economist. 
  4. Correction: An earlier version of this post said that the Open Philanthropy Project was citing What to Think About Machines That Think, rather than “The Myth of AI.” 

New report: “Leó Szilárd and the Danger of Nuclear Weapons”

From MIRI:

Today we release a new report by Katja Grace, “Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation” (PDF, 72pp).

Leó Szilárd has been cited as an example of someone who predicted a highly disruptive technology years in advance — nuclear weapons — and successfully acted to reduce the risk. We conducted this investigation to check whether that basic story is true, and to determine whether we can take away any lessons from this episode that bear on highly advanced AI or other potentially disruptive technologies.

To prepare this report, Grace consulted several primary and secondary sources, and also conducted two interviews that are cited in the report and published here:

The basic conclusions of this report, which have not been separately vetted, are:

  1. Szilárd made several successful and important medium-term predictions — for example, that a nuclear chain reaction was possible, that it could produce a bomb thousands of times more powerful than existing bombs, and that such bombs could play a critical role in the ongoing conflict with Germany.
  2. Szilárd secretly patented the nuclear chain reaction in 1934, 11 years before the creation of the first nuclear weapon. It’s not clear whether Szilárd’s patent was intended to keep nuclear technology secret or bring it to the attention of the military. In any case, it did neither.
  3. Szilárd’s other secrecy efforts were more successful. Szilárd caused many sensitive results in nuclear science to be withheld from publication, and his efforts seem to have encouraged additional secrecy efforts. This effort largely ended when a French physicist, Frédéric Joliot-Curie, declined to suppress a paper on neutron emission rates in fission. Joliot-Curie’s publication caused multiple world powers to initiate nuclear weapons programs. 
  4. All told, Szilárd’s efforts probably slowed the German nuclear project in expectation. This may not have made much difference, however, because the German program ended up being far behind the US program for a number of unrelated reasons.
  5. Szilárd and Einstein successfully alerted Roosevelt to the feasibility of nuclear weapons in 1939. This prompted the creation of the Advisory Committee on Uranium (ACU), but the ACU does not appear to have caused the later acceleration of US nuclear weapons development.

MIRI November Newsletter

MIRI, one of our partner organizations, has just sent out their November newsletter, put together by Rob Bensinger. Check out the links below to learn more about the great work they do!

Research updates

General updates

  • Castify has released professionally recorded audio versions of Eliezer Yudkowsky’s Rationality: From AI to Zombies: Part 1, Part 2, Part 3.
  • I’ve put together a list of excerpts from the many responses to the 2015 Edge.org question, “What Do You Think About Machines That Think?”

News and links

Best,

Rob Bensinger
Machine Intelligence Research Institute
rob@intelligence.org

Machine Intelligence Research Institute
2030 Addison Street #300
Berkeley, CA 94704

MIRI News: October 2015

MIRI’s October Newsletter collects recent news and links related to the long-term impact of artificial intelligence. Highlights:

— New introductory material on MIRI can be found on our information page.

— An Open Philanthropy Project update discusses investigations into global catastrophic risk and U.S. policy reform.

— “Research Suggests Human Brain Is 30 Times As Powerful As The Best Supercomputers.” Tech Times reports on new research by the AI Impacts project, which has “developed a preliminary method for comparing AI to a brain, which they call traversed edges per second, or TEPS. TEPS essentially determines how rapidly information is passed along a system.”
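
Since TEPS is a rate, comparing two systems comes down to a simple ratio; the short sketch below uses placeholder figures (not the AI Impacts estimates) purely to show the arithmetic behind a headline multiple like this one.

```python
# Placeholder figures, for illustration only -- not the AI Impacts numbers.
# TEPS (traversed edges per second) measures how many graph edges a system
# can follow per second, so a brain-vs-supercomputer comparison is a ratio.

brain_teps = 6.0e14          # hypothetical estimate for the human brain
supercomputer_teps = 2.0e13  # hypothetical benchmark for a top supercomputer

print(f"brain / supercomputer ~ {brain_teps / supercomputer_teps:.0f}x")  # ~30x
```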

— MIRI research associates develop a new approach to logical uncertainty in software agents. “The main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false. By giving the system a type of logical omniscience, you make it predictable, which allows you to prove things about it. However, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences, and let it run forever. We can then ask about whether or not the system eventually gives good probabilities.”
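
The “let it run forever” framing in that quote can be illustrated with a toy program (a stand-in, not the research associates’ actual construction): it emits an endless stream of probability estimates for arithmetic sentences, and the interesting question is whether the estimates are good in the limit rather than at any fixed step.

```python
# Toy illustration of "assign probabilities to sentences and let it run
# forever." The "prover" here is a stand-in that merely checks fixed
# arithmetic identities; it is not MIRI's actual construction.

from fractions import Fraction

SENTENCES = ["1 + 1 == 2", "2 * 3 == 7", "sum(range(100)) == 4950"]

def toy_prover(sentence, effort):
    """Pretend proof search: with enough 'effort' it settles each sentence.

    Returns True (proved), False (refuted), or None (undecided so far).
    """
    if effort < len(sentence):   # crude stand-in for proof difficulty
        return None
    return eval(sentence)        # safe here: sentences are fixed literals

def probability_stream(sentence):
    """Yield an endless sequence of probability estimates for `sentence`."""
    effort = 0
    while True:
        verdict = toy_prover(sentence, effort)
        if verdict is None:
            yield Fraction(1, 2)               # undecided: fall back to 1/2
        else:
            yield Fraction(1 if verdict else 0)
        effort += 1

# "Good in the limit": after enough steps the estimates stop changing and
# match the truth value, even though early estimates may be uninformative.
for s in SENTENCES:
    stream = probability_stream(s)
    estimates = [next(stream) for _ in range(40)]
    print(s, "->", [float(p) for p in estimates[::10]])
```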

— Tom Dietterich and Eric Horvitz discuss the rise of concerns about AI. “We believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk.” See also Luke Muehlhauser’s response.

$11M AI safety research program launched

Elon Musk-backed program signals growing interest in a new branch of artificial intelligence research.

A new international grants program jump-starts research to ensure AI remains beneficial.

 

July 1, 2015
Amid rapid industry investment in developing smarter artificial intelligence, a new branch of research has begun to take off aimed at ensuring that society can reap the benefits of AI while avoiding potential pitfalls.

The Boston-based Future of Life Institute (FLI) announced the selection of 37 research teams around the world to which it plans to award about $7 million from Elon Musk and the Open Philanthropy Project as part of a first-of-its-kind grant program dedicated to “keeping AI robust and beneficial”. The program launches as an increasing number of high-profile figures including Bill Gates, Elon Musk and Stephen Hawking voice concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

The 37 projects being funded include:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
  • A new Oxford-Cambridge research center for studying AI-relevant policy

As Skype founder Jaan Tallinn, one of FLI’s founders, has described this new research direction, “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

When the Future of Life Institute issued an open letter in January calling for research on how to keep AI both robust and beneficial, it was signed by a long list of AI researchers from academia, nonprofits and industry, including AI research leaders from Facebook, IBM, and Microsoft and the founders of Google’s DeepMind Technologies. It was seeing that widespread agreement that moved Elon Musk to seed the research program that has now begun.

“Here are all these leading AI researchers saying that AI safety is important”, said Musk at the time. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

“I am glad to have an opportunity to carry out this research focused on increasing the transparency of AI robotic systems,” said Manuela Veloso, past president of the Association for the Advancement of Artificial Intelligence (AAAI) and winner of one of the grants.

“This grant program was much needed: because of its emphasis on safe AI and multidisciplinarity, it fills a gap in the overall scenario of international funding programs,” added Prof. Francesca Rossi, president of the International Joint Conference on Artificial Intelligence (IJCAI), also a grant awardee.

Tom Dietterich, president of the AAAI, described how his grant — a project studying methods for AI learning systems to self-diagnose when failing to cope with a new situation — breaks the mold of traditional research:

“In its early days, AI research focused on the ‘known knowns’ by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the ‘known unknowns’ by using probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the ‘unknown unknowns’: How can an AI system behave carefully and conservatively in a world populated by unknown unknowns — aspects that the designers of the AI system have not anticipated at all?”
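
Dietterich’s “unknown unknowns” framing can be sketched in a few lines (purely illustrative, not the funded project’s method): the system flags inputs that look unlike anything it saw during training and falls back to conservative behavior instead of trusting its learned policy.

```python
# A minimal sketch of "self-diagnosing" novelty, not Dietterich's actual
# method: compare each new input to the training data and defer to a
# conservative fallback when it is far from anything seen before.
# The data points and threshold are hypothetical.

import math

training_inputs = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9), (5.0, 5.2), (5.1, 4.9)]

def nearest_distance(x, seen):
    """Distance from x to the closest training input."""
    return min(math.dist(x, s) for s in seen)

def act(x, threshold=1.0):
    if nearest_distance(x, training_inputs) > threshold:
        # Unknown unknown: the input is far from anything in training,
        # so behave conservatively instead of trusting the learned model.
        return "defer_to_human"
    return "use_learned_policy"

print(act((1.05, 0.95)))   # familiar region -> use_learned_policy
print(act((40.0, -3.0)))   # novel region    -> defer_to_human
```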

As Terminator Genisys debuts this week, organizers stressed the importance of separating fact from fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI”, said FLI president Max Tegmark. “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”

The full list of research grant winners can be found here. The plan is to fund these teams for up to three years, with most of the research projects starting by September 2015, and to focus the remaining $4M of the Musk-backed program on the areas that emerge as most promising.

FLI has a mission to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

Contacts at the Future of Life Institute:

  • Max Tegmark: tegmark@mit.edu
  • Meia Chita-Tegmark: meia@bu.edu
  • Jaan Tallinn: jaan@futureoflife.org
  • Anthony Aguirre: aguirre@scipp.ucsc.edu
  • Viktoriya Krakovna: vika@futureoflife.org
  • Jesse Galef: jesse@futureoflife.org