
United Nations Adopts Ban on Nuclear Weapons

Today, 72 years after their invention, states at the United Nations formally adopted a treaty which categorically prohibits nuclear weapons.

With 122 votes in favor, one vote against, and one country abstaining, the “Treaty on the Prohibition of Nuclear Weapons” was adopted Friday morning and will open for signature by states at the United Nations in New York on September 20, 2017. Civil society organizations and more than 140 states have participated throughout the negotiations.

On adoption of the treaty, ICAN Executive Director Beatrice Fihn said:

“We hope that today marks the beginning of the end of the nuclear age. It is beyond question that nuclear weapons violate the laws of war and pose a clear danger to global security. No one believes that indiscriminately killing millions of civilians is acceptable – no matter the circumstance – yet that is what nuclear weapons are designed to do.”

In a public statement, former Secretary of Defense William Perry said:

“The new UN Treaty on the Prohibition of Nuclear Weapons is an important step towards delegitimizing nuclear war as an acceptable risk of modern civilization. Though the treaty will not have the power to eliminate existing nuclear weapons, it provides a vision of a safer world, one that will require great purpose, persistence, and patience to make a reality. Nuclear catastrophe is one of the greatest existential threats facing society today, and we must dream in equal measure in order to imagine a world without these terrible weapons.”

Until now, nuclear weapons were the only weapons of mass destruction without a prohibition treaty, despite the widespread and catastrophic humanitarian consequences of their intentional or accidental detonation. Biological weapons were banned in 1972 and chemical weapons in 1992.

This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate tools of war. The repeated objection and boycott of the negotiations by many nuclear-weapon states demonstrates that this treaty has the potential to significantly impact their behavior and stature. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviors, even in states not party to the treaty.

“This is a triumph for global democracy, where the pro-nuclear coalition of Putin, Trump and Kim Jong-Un were outvoted by the majority of Earth’s countries and citizens,” said MIT Professor and FLI President Max Tegmark.

“The strenuous and repeated objections of nuclear-armed states are an admission that this treaty will have a real and lasting impact,” Fihn said.

The treaty also creates obligations to support the victims of nuclear weapons use (Hibakusha) and testing and to remediate the environmental damage caused by nuclear weapons.

From the beginning, the effort to ban nuclear weapons has benefited from the broad support of international humanitarian, environmental, nonproliferation, and disarmament organizations in more than 100 states. Significant political and grassroots organizing has taken place around the world, and many thousands have signed petitions, joined protests, contacted representatives, and pressured governments.

“The UN treaty places a strong moral imperative against possessing nuclear weapons and gives a voice to some 130 non-nuclear weapons states who are equally affected by the existential risk of nuclear weapons. … My hope is that this treaty will mark a sea change towards global support for the abolition of nuclear weapons. This global threat requires unified global action,” said Perry.

Fihn added, “Today the international community rejected nuclear weapons and made it clear they are unacceptable. It is time for leaders around the world to match their values and words with action by signing and ratifying this treaty as a first step towards eliminating nuclear weapons.”

 

Images courtesy of ICAN.

 

WHAT THE TREATY DOES

Comprehensively bans nuclear weapons and related activity. It will be illegal for parties to undertake any activities related to nuclear weapons. The treaty bans the use, development, testing, production, manufacture, acquisition, possession, stockpiling, transfer, receipt, threat of use, stationing, installation, and deployment of nuclear weapons. [Article 1]

Bans any assistance with prohibited acts. The treaty bans assistance with prohibited acts, and should be interpreted as prohibiting states from engaging in military preparations and planning to use nuclear weapons, financing their development and manufacture, or permitting their transit through territorial waters or airspace. [Article 1]

Creates a path for nuclear-armed states that join to eliminate weapons, stockpiles, and programs. It requires states with nuclear weapons that join the treaty to remove them from operational status and destroy them and their programs, all according to plans they would submit for approval. It also requires states that host another country’s nuclear weapons on their territory to have them removed. [Article 4]

Verifies and safeguards that states meet their obligations. The treaty requires a verifiable, time-bound, transparent, and irreversible destruction of nuclear weapons and programs and requires the maintenance and/or implementation of international safeguards agreements. The treaty permits safeguards to become stronger over time and prohibits weakening of the safeguard regime. [Articles 3 and 4]

Requires victim and international assistance and environmental remediation. The treaty requires states to assist victims of nuclear weapons use and testing, and requires environmental remediation of contaminated areas. The treaty also obliges states to provide international assistance to support its implementation. The text further requires states parties to encourage other states to join the treaty, and to meet regularly to review progress. [Articles 6, 7, and 8]

NEXT STEPS

Opening for signature. The treaty will be open for signature on 20 September at the United Nations in New York. [Article 13]

Entry into force. Fifty states are required to ratify the treaty for it to enter into force. At the national level, the ratification process varies, but it usually requires parliamentary approval and the development of national legislation to turn the treaty’s prohibitions into national law. This process is also an opportunity to elaborate additional measures, such as prohibiting the financing of nuclear weapons. [Article 15]

First meeting of States Parties. The first Meeting of States Parties will take place within a year after the entry into force of the Convention. [Article 8]

SIGNIFICANCE AND IMPACT OF THE TREATY

Delegitimizes nuclear weapons. This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate weapons, creating the foundation of a new norm of international behaviour.

Changes party and non-party behaviour. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviours, even in states not party to the treaty. This has held for treaties ranging from those banning cluster munitions and land mines to the Convention on the Law of the Sea. The prohibition on assistance will play a significant role in changing behaviour, given the impact it may have on financing and on military planning and preparation for the use of nuclear weapons.

Completes the prohibitions on weapons of mass destruction. The treaty completes work begun in the 1970s, when biological weapons were banned, and continued in the 1990s, when chemical weapons were banned.

Strengthens International Humanitarian Law (“Laws of War”). Nuclear weapons are intended to kill millions of civilians – non-combatants – a gross violation of International Humanitarian Law. Few would argue that the mass slaughter of civilians is acceptable, and there is no way to use a nuclear weapon in line with international law. The treaty strengthens this body of law and the norms behind it.

Removes the prestige associated with proliferation. Countries often seek nuclear weapons for the prestige of being seen as part of an important club. By making nuclear weapons an object of scorn rather than of achievement, the treaty can help deter their spread.

FLI sought to increase support for the negotiations from the scientific community this year. We organized an open letter signed by over 3700 scientists in 100 countries, including 30 Nobel Laureates. You can see the letter here and the video we presented recently at the UN here.

This post is a modified version of the press release provided by the International Campaign to Abolish Nuclear Weapons (ICAN).

Hawking, Higgs and Over 3,000 Other Scientists Support UN Nuclear Ban Negotiations

Delegates from most UN member states are gathering in New York to negotiate a nuclear weapons ban, where they will also receive a letter of support that has been signed by thousands of scientists from over 80 countries – including 28 Nobel Laureates and a former US Secretary of Defense. “Scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought”, the letter explains.

The letter was delivered at a ceremony at 1pm on Monday March 27 in the UN General Assembly Hall to Her Excellency Ms. Elayne Whyte Gómez from Costa Rica, who is presiding over the negotiations.

Despite all the attention to nuclear terrorism and nuclear rogue states, one of the greatest threats from nuclear weapons has always been mishaps and accidents among the established nuclear nations. With political tensions and instability increasing, this threat is growing to alarming levels: “The probability of a nuclear calamity is higher today, I believe, than it was during the Cold War,” according to former U.S. Secretary of Defense William J. Perry, who signed the letter.

“Nuclear weapons represent one of the biggest threats to our civilization. With the unpredictability of the current world situation, it is more important than ever to get negotiations about a ban on nuclear weapons on track, and to make these negotiations a truly global effort,” says neuroscience professor Edvard Moser from Norway, 2014 Nobel Laureate in Physiology/Medicine.

Professor Wolfgang Ketterle from MIT, 2001 Nobel Laureate in Physics, agrees: “I see nuclear weapons as a real threat to the human race and we need an international consensus to reduce this threat.”

Currently, the US and Russia have about 14,000 nuclear weapons combined, many on hair-trigger alert and ready to be launched on minutes’ notice, even though a Pentagon report argued that a few hundred would suffice for rock-solid deterrence. Yet rather than trim their excess arsenals, the superpowers plan massive investments to replace their nuclear weapons with new, destabilizing ones that are more lethal for a first-strike attack.

“Unlike many of the world’s leaders I care deeply about the future of my grandchildren. Even the remote possibility of a nuclear war presents an unconscionable threat to their welfare. We must find a way to eliminate nuclear weapons,” says Sir Richard J. Roberts, 1993 Nobel Laureate in Physiology or Medicine.

“Most governments are frustrated that a small group of countries with a small fraction of the world’s population insist on retaining the right to ruin life on Earth for everyone else with nuclear weapons, ignoring their disarmament promises in the non-proliferation treaty”, says physics professor Max Tegmark from MIT, who helped organize the letter. “In South Africa, the minority in control of the unethical Apartheid system didn’t give it up spontaneously on their own initiative, but because they were pressured into doing so by the majority. Similarly, the minority in control of unethical nuclear weapons won’t give them up spontaneously on their own initiative, but only if they’re pressured into doing so by the majority of the world’s nations and citizens.”

The idea behind the proposed ban is to provide such pressure by stigmatizing nuclear weapons.

Beatrice Fihn, who helped launch the ban movement as Executive Director of the International Campaign to Abolish Nuclear Weapons, explains that such stigmatization made the landmine and cluster munitions bans succeed and can succeed again: “The market for landmines is pretty much extinct—nobody wants to produce them anymore because countries have banned and stigmatized them. Just a few years ago, the United States—who never signed the landmines treaty—announced that it’s basically complying with the treaty. If the world comes together in support of a nuclear ban, then nuclear weapons countries will likely follow suit, even if it doesn’t happen right away.”

Susi Snyder from the Dutch “Don’t Bank on the Bomb” project explains:

“If you prohibit the production, possession, and use of these weapons and the assistance with doing those things, we’re setting a stage to also prohibit the financing of the weapons. And that’s one way that I believe the ban treaty is going to have a direct and concrete impact on the ongoing upgrades of existing nuclear arsenals, which are largely being carried out by private contractors.”

“Nuclear arms are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created”, the letter states, motivating a ban.

“The horror that happened at Hiroshima and Nagasaki should never be repeated.  Nuclear weapons should be banned,” says Columbia University professor Martin Chalfie, 2008 Nobel Laureate in Chemistry.

Norwegian neuroscience professor May-Britt Moser, a 2014 Nobel Laureate in Physiology/Medicine, says, “In a world with increased aggression and decreasing diplomacy – the availability of nuclear weapons is more dangerous than ever. Politicians are urged to ban nuclear weapons. The world today and future generations depend on that decision.”

The open letter: https://futureoflife.org/nuclear-open-letter/

A Principled AI Discussion in Asilomar

We, the organizers, found it extraordinarily inspiring to be a part of the Beneficial AI (BAI) 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

This sense among the attendees echoes a wider societal engagement with AI that has heated up dramatically over the past few years. Due to this rising awareness of AI, dozens of major reports have emerged from academia (e.g. the Stanford 100 year report), government (e.g. two major reports from the White House), industry (e.g. materials from the Partnership on AI), and the nonprofit sector (e.g. a major IEEE report).

In planning the Asilomar meeting, we hoped both to create meaningful discussion among the attendees, and also to see what, if anything, this rather heterogeneous community actually agreed on. We gathered all the reports we could and compiled a list of scores of opinions about what society should do to best manage AI in coming decades. From this list, we looked for overlaps and simplifications, attempting to distill as much as we could into a core set of principles that expressed some level of consensus. But this “condensed” list still included ambiguity, contradiction, and plenty of room for interpretation and worthwhile discussion.

Leading up to the meeting, we extensively surveyed meeting participants about the list, gathering feedback, evaluation, and suggestions for improved or novel principles. The responses were folded into a significantly revised version for use at the meeting. In Asilomar, we gathered more feedback in two stages. First, small breakout groups discussed subsets of the principles, giving detailed refinements and commentary on them. This process generated improved versions (in some cases multiple new competing versions) and a few new principles. Finally, we surveyed the full set of attendees to determine the level of support for each version of each principle.

After such detailed, thorny and sometimes contentious discussions and a wide range of feedback, we were frankly astonished at the high level of consensus that emerged around many of the statements during that final survey. This consensus allowed us to set a high bar for inclusion in the final list: we only retained principles if at least 90% of the attendees agreed on them.

What remained was a list of 23 principles ranging from research strategies to data rights to future issues including potential super-intelligence, which was signed by those wishing to associate their name with the list. This collection of principles is by no means comprehensive and it’s certainly open to differing interpretations, but it also highlights how the current “default” behavior around many relevant issues could violate principles that most participants agreed are important to uphold.

We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years.

To start the discussion, here are some of the things other AI researchers who signed the Principles had to say about them.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“Value alignment is a big one. Robots aren’t going to try to revolt against humanity, but they’ll just try to optimize whatever we tell them to do. So we need to make sure to tell them to optimize for the world we actually want.”

-Anca Dragan, Assistant Professor in the EECS Department at UC Berkeley, and co-PI for the Center for Human Compatible AI
Read her complete interview here.

Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
“I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously — I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”

-Yoshua Bengio, Professor of CSOR at the University of Montreal, and head of the Montreal Institute for Learning Algorithms (MILA)
Read his complete interview here.

Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“I believe that AI will create profound change even before it is ‘advanced’ and thus we need to plan and manage growth of the technology. As humans we are not good at long-term planning because our civil systems don’t encourage it, however, this is an area in which we must develop our abilities to ensure a responsible and beneficial partnership between man and machine.”

-Kay Firth-Butterfield, Executive Director of AI-Austin.org, and an adjunct Professor of Law at the University of Texas at Austin
Read her complete interview here.

Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“It’s absolutely crucial that individuals should have the right to manage access to the data they generate… AI does open new insight to individuals and institutions. It creates a persona for the individual or institution – personality traits, emotional make-up, lots of the things we learn when we meet each other. AI will do that too and it’s very personal. I want to control how [my] persona is created. A persona is a fundamental right.”

-Guruduth Banavar, VP, IBM Research, Chief Science Officer, Cognitive Computing
Read his complete interview here.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“The one closest to my heart. … AI systems should behave in a way that is aligned with human values. But actually, I would be even more general than what you’ve written in this principle. Because this principle has to do not only with autonomous AI systems, but I think this is very important and essential also for systems that work tightly with humans in the loop, and also where the human is the final decision maker. Because when you have human and machine tightly working together, you want this to be a real team. So you want the human to be really sure that the AI system works with values aligned to that person. It takes a lot of discussion to understand those values.”

-Francesca Rossi, Research scientist at the IBM T.J. Watson Research Center, and a professor of computer science at the University of Padova, Italy, currently on leave
Read her complete interview here.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“One reason that I got involved in these discussions is that there are some topics I think are very relevant today, and one of them is the arms race that’s happening amongst militaries around the world already, today. This is going to be very destabilizing. It’s going to upset the current world order when people get their hands on these sorts of technologies. It’s actually stupid AI that they’re going to be fielding in this arms race to begin with and that’s actually quite worrying – that it’s technologies that aren’t going to be able to distinguish between combatants and civilians, and aren’t able to act in accordance with international humanitarian law, and will be used by despots and terrorists and hacked to behave in ways that are completely undesirable. And that’s something that’s happening today. You have to see the recent segment on 60 Minutes to see the terrifying swarms of robot UAVs that the American military is now experimenting with.”

-Toby Walsh, Guest Professor at Technical University of Berlin, Professor of Artificial Intelligence at the University of New South Wales, and leads the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research
Read his complete interview here.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“I’m not a fan of wars, and I think it could be extremely dangerous. Obviously I think that the technology has a huge potential, and even just with the capabilities we have today it’s not hard to imagine how it could be used in very harmful ways. I don’t want my contributions to the field and any kind of techniques that we’re all developing to do harm to other humans or to develop weapons or to start wars or to be even more deadly than what we already have.”

-Stefano Ermon, Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory
Read his complete interview here.

Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“I agree! As a scientist, I’m against making strong or unjustified assumptions about anything, so of course I agree. Yet this principle bothers me … because it seems to be implicitly saying that there is an immediate danger that AI is going to become superhumanly, generally intelligent very soon, and we need to worry about this issue. This assertion … concerns me because I think it’s a distraction from what are likely to be much bigger, more important, more near term, potentially devastating problems. I’m much more worried about job loss and the need for some kind of guaranteed health-care, education and basic income than I am about Skynet. And I’m much more worried about some terrorist taking an AI system and trying to program it to kill all Americans than I am about an AI system suddenly waking up and deciding that it should do that on its own.”

-Dan Weld, Professor of Computer Science & Engineering and Entrepreneurial Faculty Fellow at the University of Washington
Read his complete interview here.

Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“In many areas of computer science, such as complexity or cryptography, the default assumption is that we deal with the worst case scenario. Similarly, in AI Safety, we should assume that AI will become maximally capable and prepare accordingly. If we are wrong, we will still be in great shape.”

-Roman Yampolskiy, Associate Professor of CECS at the University of Louisville, and founding director of the Cyber Security Lab
Read his complete interview here.

Obama’s Nuclear Legacy

The following article and infographic were originally posted on Futurism.

The most destructive device that humanity ever created is the nuclear bomb. It’s a technology capable of unparalleled devastation; it’s a technology that the United Nations classifies as “the most dangerous weapon on Earth.”

One bomb can destroy a whole city in seconds, and in so doing, end the lives of millions of people (depending on where it is dropped). If that’s not enough, it can throw the natural environment into chaos. We know this because we’ve used them before.

The first device of this kind was unleashed at approximately 8:15 am on August 6th, 1945, when a US B-29 bomber dropped an atomic bomb on the Japanese city of Hiroshima. It killed around 80,000 people instantly. Over the coming years, many more would succumb to radiation sickness. All in all, it is estimated that over 200,000 people died as a result of the nuclear blasts in Japan.

How far have we come since then? How many bombs do we have at our disposal? Here’s a look at our legacy.

Hawking Says ‘Don’t Bank on the Bomb’ and Cambridge Votes to Divest $1 Billion From Nuclear Weapons

1,000 nuclear weapons are more than enough to deter any nation from nuking the US, but we’re hoarding over 7,000, and a long string of near-misses has highlighted the continuing risk of an accidental nuclear war, which could trigger a nuclear winter, potentially killing most people on Earth. Yet rather than trimming our excess nukes, we’re planning to spend $4 million per hour for the next 30 years making them more lethal.
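
For scale, here is a back-of-the-envelope check (illustrative arithmetic, not a figure quoted in the original post) showing how that hourly rate adds up to roughly the trillion-dollar modernization estimate cited in the Cambridge policy order below:

$$\$4\ \text{million/hour} \times 24\ \tfrac{\text{hours}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \times 30\ \text{years} \approx \$1.05\ \text{trillion}.$$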

Although I’m used to politicians wasting my tax dollars, I was shocked to realize that I was voluntarily using my money for this nuclear boondoggle by investing in the very companies that are lobbying for and building new nukes: some of the money in my bank account gets loaned to them and my S&P500 mutual fund invests in them. “If you want to slow the nuclear arms race, then put your money where your mouth is and don’t bank on the bomb!”, my physics colleague Stephen Hawking told me. To make it easier for others to follow his sage advice, I made an app for that together with my friends at the Future of Life Institute, and launched this “Brief History of Nukes” that’s 3.14 minutes long in honor of Hawking’s fascination with pi.

Our campaign got off to an amazing start this weekend at an MIT conference where our Mayor Denise Simmons announced that the Cambridge City Council has unanimously decided to divest its billion-dollar city pension fund from nuclear weapons production. “Not in our name!”, she said, and drew a standing ovation. “It’s my hope that this will inspire other municipalities, companies and individuals to look at their investments and make similar moves”.

“In Europe, over 50 large institutions have already limited their nuclear weapon investments, but this is our first big success in America”, said Susi Snyder, who leads the global nuclear divestment campaign dontbankonthebomb.com. Boston College philosophy major Lucas Perry, who led the effort to persuade Cambridge to divest, hoped that this online analysis tool would create a domino effect: “I want to empower other students opposing the nuclear arms race to persuade their own towns and universities to follow suit.”

Many financial institutions now offer mutual funds that cater to the growing interest in socially responsible investing, including Ariel, Calvert, Domini, Neuberger, Parnassus, Pax World and TIAA-CREF. “We appreciate and share Cambridge’s desire to exclude nuclear weapons production from its pension fund. Pension funds are meant to serve the long-term needs of retirees, a service that nuclear weapons do not offer”, said Julie Fox Gorte, Senior Vice President for Sustainable Investing at Pax World.

“Divestment is a powerful way to stigmatize the nuclear arms race through grassroots campaigning, without having to wait for politicians who aren’t listening”, said conference co-organizer Cole Harrison, Executive Director of Massachusetts Peace Action, the nation’s largest grassroots peace organization. “If you’re against spending more money making us less safe, then make sure it’s not your money.”

You’ll find our divestment app here. If you’d like to persuade your own municipality to follow Cambridge’s lead, you can use Cambridge’s policy order as a model; here it is:

WHEREAS: Nations across the globe still maintain over 15,000 nuclear weapons, some of which are hundreds of times more powerful than those that obliterated Hiroshima and Nagasaki, and detonation of even a small fraction of these weapons could create a decade-long nuclear winter that could destroy most of the Earth’s population; and
WHEREAS: The United States has plans to invest roughly one trillion dollars over the coming decades to upgrade its nuclear arsenal, which many experts believe actually increases the risk of nuclear proliferation, nuclear terrorism, and accidental nuclear war; and
WHEREAS: In a period where federal funds are desperately needed in communities like Cambridge in order to build affordable housing, improve public transit, and develop sustainable energy sources, our tax dollars are being diverted to and wasted on nuclear weapons upgrades that would make us less safe; and
WHEREAS: Investing in companies producing nuclear weapons implicitly supports this misdirection of our tax dollars; and
WHEREAS: Socially responsible mutual funds and other investment vehicles are available that accurately match the current asset mix of the City of Cambridge Retirement Fund while excluding nuclear weapons producers; and
WHEREAS: The City of Cambridge is already on record in supporting the abolition of nuclear weapons, opposing the development of new nuclear weapons, and calling on President Obama to lead the nuclear disarmament effort; now therefore be it
ORDERED: That the City Council go on record opposing investing funds from the Cambridge Retirement System in any entities that are involved in or support the production or upgrading of nuclear weapons systems; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Cambridge Peace Commissioner and other appropriate City staff to organize an informational forum on possibilities for Cambridge individuals and institutions to divest their pension funds from investments in nuclear weapons contractors; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Board of the Cambridge Retirement System and other appropriate City staff to ensure divestment from all companies involved in production of nuclear weapons systems, and in entities investing in such companies, and the City Manager is requested to report back to the City Council about the implementation of said divestment in a timely manner.

AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research

The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.

Phoenix Convention Center where AAAI 2016 is taking place.

The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was that by Toby Walsh, titled, “Why the Technological Singularity May Never Happen.”

Walsh explained that, though general knowledge has increased, human capacity for learning has remained relatively consistent for a very long time. “Learning a new language is still just as hard as it’s always been,” he said by way of example. If we can’t teach ourselves how to learn faster, he doesn’t see any reason to believe that machines will be any more successful at the task.

He also argued that even if we assume we can improve intelligence, there is no reason to assume it will increase exponentially, leading to an intelligence explosion. It is just as possible, he believes, that each generation of machines would improve by only half as much as the one before: intelligence would still increase, but the gains would be limited rather than exponential.
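
A quick way to see why that kind of growth stays bounded (an illustrative reading of the argument, not Walsh’s own formulation): if each generation adds an improvement half the size of the previous one, the total gain is a convergent geometric series,

$$\Delta_{\text{total}} = \sum_{n=0}^{\infty} \frac{\Delta_0}{2^n} = 2\Delta_0,$$

so intelligence keeps rising but never exceeds twice the size of the first improvement.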

Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk with concerns about both weapons and job loss.

Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”

Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”

As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can occur without an intelligence explosion.

The afternoon talks were all dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics ranged from ensuring value alignment between humans and AI to safety constraints and security evaluation, and much more.

While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research related to image recognition software that could potentially be used more immediately.

Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a way imperceptible to humans so that the software misidentifies it. Current methods rely on machines accessing huge quantities of reference images to learn what any given image is, yet even the smallest perturbation of the data can lead to large errors. Li’s own research looks at new ways for machines to recognize an image that limit such errors.
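
For readers unfamiliar with this kind of attack, the sketch below illustrates the general idea using the fast gradient sign method, a standard adversarial-example technique. It is a generic illustration that assumes a pretrained PyTorch image classifier; it is not the specific method Li presented.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method).
# Illustrative only: a generic technique, not the research shown at the workshop.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()  # any pretrained classifier will do

def perturb(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.005) -> torch.Tensor:
    """Shift `image` (shape [3, H, W], values in [0, 1]) imperceptibly so the model
    becomes more likely to misclassify it, given its true class index `label` (shape [1])."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

Even very small values of epsilon can be enough to change the model’s prediction while leaving the image visually indistinguishable from the original.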

Rubinstein’s focus is geared more toward security. The research he presented at the workshop is similar to facial recognition but goes a step further, examining how small changes made to one face can lead systems to confuse it with the face of someone else.

Fuxin Li

Ben Rubinstein

 

 


Future of beneficial AI research panel: Francesca Rossi, Nate Soares, Tom Dietterich, Roman Yampolskiy, Stefano Ermon, Vincent Conitzer, and Benjamin Rubinstein.

The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI president, Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who also spoke earlier in the day.

Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?

Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.


Francesca Rossi and Nate Soares


Tom Dietterich and Roman Yampolskiy

After an hour of discussion many suggestions for new research ideas had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.

We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”


Congratulations to Peter Norvig and Stuart Russell!

2015: An Amazing Year in Review

Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…

 

In the beginning


Participants and attendees of the inaugural Puerto Rico conference.

2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.

At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.

 

Forging ahead

On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.

On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.


All five FLI founders flanked by other beneficial-AI enthusiasts. From left to right, top to bottom: Stuart Russell, Jaan Tallinn, Janos Kramar, Anthony Aguirre, Max Tegmark, Nick Bostrom, Murray Shanahan, Jesse Galef, Michael Vassar, Nate Soares, Viktoriya Krakovna, Meia Chita-Tegmark and Katja Grace.

Less than a month later, we published another open letter, this time advocating for a global ban on offensive autonomous weapons development. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah garnered more support and signatories by engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute), six past presidents of the AAAI, and over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.

This was followed by an open letter about the economic impacts of AI, which was spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.

Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.

 

Other major events

Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.

Max spoke again, at the start of March, this time for the Helen Caldicott Nuclear Weapons Conference, about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled, “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, at MIT Effective Altruism, at the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.

In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director, Nate Soares, participated in a panel discussion about the risks and policy implications of AI (video here).


Concerns about autonomous weapons led to an open letter calling for a ban.

Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.

Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings, which aimed to bring researchers and FLI grant awardees together to help strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.

September saw FLI and CSER co-organize an event at the Policy Exchange in London, where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety with the scientifically minded in Westminster, including many British members of parliament.

Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member, Stephen Hawking, released his answers to the Reddit “Ask Me Anything” (AMA) about artificial intelligence.

Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research to determine which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.

Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events.  The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.

 


Richard Mallah speaking at the third panel discussion of the NIPS symposium.

In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.

 

In the Press

We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:

The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

You can find more media coverage of Elon Musk’s donation at Fast Company, TechCrunch, WIRED, Mashable, SlashGear, and BostInno.

Max was interviewed on NPR’s Science Friday about AI safety, along with our Scientific Advisory Board member Stuart Russell and Eric Horvitz from Microsoft.

Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.

Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.

Throughout the year, Victoria was interviewed by Popular Science, Engineering and Technology Magazine, Boston Magazine and Blog Talk Radio.

Meia Chita-Tegmark wrote five articles for the Huffington Post about artificial intelligence, including a Halloween story of nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two about artificial intelligence.

In addition we had a few extra special articles on our new website:

Nobel Prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.

On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.

 

A New Beginning

2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!

Happy New Year!


What’s so exciting about AI? Conversations at the Nobel Week Dialogue

Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence.” The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the World?

Although challenges in developing AI and concerns about human-computer interaction were both expressed, in the celebratory spirit of the Nobel Prize, let’s focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, Executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter-ego, with which humans could comfortably share data and preferences, and which would intelligently use this to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego, resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, and through it enhance people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a schizophrenic healthcare system where experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information by (intelligently) deciding what information to share with whom and when for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more goal-directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind.”

FLI November Newsletter

News Site Launch
We are excited to present our new xrisk news site! With improved layout and design, it aims to provide you with daily technology news relevant to the long-term future of civilization, covering both opportunities and risks. This will, of course, include news about the projects we and our partner organizations are involved in to help prevent these risks. We’re also developing a section of the site that will provide more background information about the major risks, as well as what people can do to help reduce them and keep society flourishing.
Reducing Risk of Nuclear War

Some investments in nuclear weapons systems might increase the risk of accidental nuclear war and are arguably done primarily for profit rather than national security. Illuminating these financial drivers provides another opportunity to reduce the risk of nuclear war. FLI is pleased to support financial research about who invests in and profits from the production of new nuclear weapons systems, with the aim of drawing attention to and stigmatizing such productions.

On November 12, Don’t Bank on the Bomb released their 2015 report on European financial institutions that have committed to divesting from any companies related to the manufacture of nuclear weapons. The report also highlights financial groups that have made positive steps toward divestment, and it provides a detailed list of companies that are still heavily invested in nuclear weapons. With the Cold War long over, many people don’t realize that the risk of nuclear war still persists and that many experts believe it to be increasing. Here is FLI’s assessment of, and position on, the nuclear weapons situation.

In case you missed it…
Here are some other interesting things we and our partners have done in the last few months:
  • On September 1, FLI and CSER co-organized an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety in front of a veritable who’s who of the scientifically minded in Westminster, including many British members of parliament.
  • Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety.
  • Stephen Hawking answered the AMA questions about artificial intelligence.
  • Our co-founder, Meia Chita-Tegmark, wrote a spooky Halloween op-ed, featured on the Huffington Post, about the man who saved the world from nuclear apocalypse in 1962.
  • Nobel-prize winning physicist, Frank Wilczek, shared a sci-fi short story he wrote about a future of AI wars.
  • FLI volunteer, Eric Gastfriend, wrote a popular piece, in which he consider the impact of exponential increase in the number of scientists.
  • And two of our partner organizations have published their newsletters. The Machine Intelligence Research Institute (MIRI) published an October and  November newsletter, and the Global Catastrophic Risk Institute released newsletters inSeptember and October.

AI safety conference in Puerto Rico

The Future of AI: Opportunities and Challenges 

This conference brought together the world’s leading AI builders from academia and industry to engage with each other and with experts in economics, law, and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls (see this open letter and this list of research priorities). To facilitate candid and constructive discussions, no media were present and the meeting was held under the Chatham House Rule: nobody’s talks or statements will be shared without their permission.
Where? San Juan, Puerto Rico
When? Arrive by evening of Friday January 2, depart after lunch on Monday January 5 (see program below)

 


Scientific organizing committee:

  • Erik Brynjolfsson, MIT, Professor at the MIT Sloan School of Management, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
  • Demis Hassabis, Founder, DeepMind
  • Eric Horvitz, Microsoft, co-chair of the AAAI presidential panel on long-term AI futures
  • Shane Legg, Founder, DeepMind
  • Peter Norvig, Google, Director of Research, co-author of the standard textbook Artificial Intelligence: a Modern Approach.
  • Francesca Rossi, Univ. Padova, Professor of Computer Science, President of the International Joint Conference on Artificial Intelligence
  • Stuart Russell, UC Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.
  • Bart Selman, Cornell University, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures
  • Murray Shanahan, Imperial College, Professor of Cognitive Robotics
  • Mustafa Suleyman, Founder, DeepMind
  • Max Tegmark, MIT, Professor of physics, author of Our Mathematical Universe

Local Organizers:
Anthony Aguirre, Meia Chita-Tegmark, Viktoriya Krakovna, Janos Kramar, Richard Mallah, Max Tegmark, Susan Young

Support: Funding and organizational support for this conference is provided by Skype-founder Jaan Tallinn, the Future of Life Institute, and the Center for the Study of Existential Risk.

PROGRAM

Friday January 2:
1600-late: Registration open
1930-2130: Welcome reception (Las Olas Terrace)

Saturday January 3:
0800-0900: Breakfast
0900-1200: Overview (one review talk on each of the four conference themes)
• Welcome
• Ryan Calo (Univ. Washington): AI and the law
• Erik Brynjolfsson (MIT): AI and economics (pdf)
• Richard Sutton (Alberta): Creating human-level AI: how and when? (pdf)
• Stuart Russell (Berkeley): The long-term future of (artificial) intelligence (pdf)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Optimizing the economic impact of AI
(A typical 3-hour session consists of a few 20-minute talks followed by a discussion panel where the panelists who haven’t already given talks get to give brief introductory remarks before the general discussion ensues.)
What can we do now to maximize the chances of reaping the economic bounty from AI while minimizing unwanted side-effects on the labor market?
Speakers:
• Andrew McAfee, MIT (pdf)
• James Manyika, McKinsey (pdf)
• Michael Osborne, Oxford (pdf)
Panelists include Ajay Agrawal (Toronto), Erik Brynjolfsson (MIT), Robin Hanson (GMU), Scott Phoenix (Vicarious)
1900: dinner

Sunday January 4:
0800-0900: Breakfast
0900-1200: Creating human-level AI: how and when?
Short talks followed by panel discussion: will it happen, and if so, when? Via an engineered solution, whole brain emulation, or other means? (Questions about what will happen afterward – whether machines will have goals, ethics, and so on – are deferred to the 4pm session.)
Speakers:
• Demis Hassabis, Google/DeepMind
• Dileep George, Vicarious (pdf)
• Tom Mitchell, CMU (pdf)
Panelists include Joscha Bach (MIT), Francesca Rossi (Padova), Richard Mallah (Cambridge Semantics), Richard Sutton (Alberta)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Intelligence explosion: science or fiction?
If an intelligence explosion happens, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? Containment problem? Is “friendly AI” possible? Feasible? Likely to happen?
Speakers:
• Nick Bostrom, Oxford (pdf)
• Bart Selman, Cornell (pdf)
• Jaan Tallinn, Skype founder (pdf)
• Elon Musk, SpaceX, Tesla Motors
Panelists include Shane Legg (Google/DeepMind), Murray Shanahan (Imperial), Vernor Vinge (San Diego), Eliezer Yudkowsky (MIRI)
1930: banquet (outside by beach)

Monday January 5:
0800-0900: Breakfast
0900-1200: Law & ethics: Improving the legal framework for autonomous systems
How should legislation be improved to best protect the AI industry and consumers? If self-driving cars cut the 32,000 annual US traffic fatalities in half, the car makers won’t get 16,000 thank-you notes, but 16,000 lawsuits. How can we ensure that autonomous systems do what we want? Who should be held liable if things go wrong? How should we tackle criminal AI? What about AI ethics, and the ethical/legal framework for military and financial systems?
Speakers:
• Joshua Greene, Harvard (pdf)
• Heather Roff Perkins, Univ. Denver (pdf)
• David Vladeck, Georgetown
Panelists include Ryan Calo (Univ. Washington), Tom Dietterich (Oregon State, AAAI president), Kent Walker (General Counsel, Google)
1200: Lunch, depart

PARTICIPANTS
You’ll find a list of participants and their bios here.

[Group photo from the conference]

Back row, from left to right: Tom Mitchell, Seán Ó hÉigeartaigh, Huw Price, Shamil Chandaria, Jaan Tallinn, Stuart Russell, Bill Hibbard, Blaise Agüera y Arcas, Anders Sandberg, Daniel Dewey, Stuart Armstrong, Luke Muehlhauser, Tom Dietterich, Michael Osborne, James Manyika, Ajay Agrawal, Richard Mallah, Nancy Chang, Matthew Putman
Other standing, left to right: Marilyn Thompson, Rich Sutton, Alex Wissner-Gross, Sam Teller, Toby Ord, Joscha Bach, Katja Grace, Adrian Weller, Heather Roff-Perkins, Dileep George, Shane Legg, Demis Hassabis, Wendell Wallach, Charina Choi, Ilya Sutskever, Kent Walker, Cecilia Tilli, Nick Bostrom, Erik Brynjolfsson, Steve Crossan, Mustafa Suleyman, Scott Phoenix, Neil Jacobstein, Murray Shanahan, Robin Hanson, Francesca Rossi, Nate Soares, Elon Musk, Andrew McAfee, Bart Selman, Michele Reilly, Aaron VanDevender, Max Tegmark, Margaret Boden, Joshua Greene, Paul Christiano, Eliezer Yudkowsky, David Parkes, Laurent Orseau, JB Straubel, James Moor, Sean Legassick, Mason Hartman, Howie Lempel, David Vladeck, Jacob Steinhardt, Michael Vassar, Ryan Calo, Susan Young, Owain Evans, Riva-Melissa Tez, János Kramár, Geoff Anders, Vernor Vinge, Anthony Aguirre
Seated: Sam Harris, Tomaso Poggio, Marin Soljačić, Viktoriya Krakovna, Meia Chita-Tegmark
Behind the camera: Anthony Aguirre (and also photoshopped in by the human-level intelligence on his left)
Click here for a full-resolution version.

Elon Musk donates $10M to keep AI beneficial

Thursday January 15, 2015

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk’s donation aims to support precisely this type of research: “Here are all these leading AI researchers saying that AI safety is important”, says Elon Musk. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

Musk’s announcement was welcomed by AI leaders in both academia and industry:

“It’s wonderful, because this will provide the impetus to jump-start research on AI safety”, said AAAI president Tom Dietterich. “This addresses several fundamental questions in AI research that deserve much more funding than even this donation will provide.”

“Dramatic advances in artificial intelligence are opening up a range of exciting new applications”, said Demis Hassabis, Shane Legg and Mustafa Suleyman, co-founders of DeepMind Technologies, which was recently acquired by Google. “With these newfound powers comes increased responsibility. Elon’s generous donation will support researchers as they investigate the safe and ethical use of artificial intelligence, laying foundations that will have far reaching societal impacts as these technologies continue to progress”.


Elon Musk and AAAI President Thomas Dietterich comment on the announcement
The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI researchers Stuart Russell and Francesca Rossi. “I love technology, because it’s what’s made 2015 better than the stone age”, says MIT professor and FLI president Max Tegmark. “Our organization studies how we can maximize the benefits of future technologies while avoiding potential pitfalls.”

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here). “Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere”, says FLI co-founder Viktoriya Krakovna.

“This donation will make a major impact”, said UCSC professor and FLI co-founder Anthony Aguirre: “While heavy industry and government investment has finally brought AI from niche academic research to early forms of a potentially world-transforming technology, to date relatively little funding has been available to help ensure that this change is actually a net positive one for humanity.”

“That AI systems should be beneficial in their effect on human society is a given”, said Stuart Russell, co-author of the standard AI textbook “Artificial Intelligence: a Modern Approach”. “The research that will be funded under this program will make sure that happens. It’s an intrinsic and essential part of doing AI research.”

Skype-founder Jaan Tallinn, one of FLI’s founders, agrees: “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories.

“Hopefully this grant program will help shift our focus from building things just because we can, toward building things because they are good for us in the long term”, says FLI co-founder Meia Chita-Tegmark.

Contacts at Future of Life Institute:

  • Max Tegmark: max@futureoflife.org
  • Meia Chita-Tegmark: meia@futureoflife.org
  • Jaan Tallinn: jaan@futureoflife.org
  • Anthony Aguirre: anthony@futureoflife.org
  • Viktoriya Krakovna: vika@futureoflife.org

Contacts among AI researchers:

  • Prof. Tom Dietterich, President of the Association for the Advancement of Artificial Intelligence (AAAI), Director of Intelligent Systems: tgd@eecs.oregonstate.edu
  • Prof. Stuart Russell, Berkeley, Director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach: russell@cs.berkeley.edu
  • Prof. Bart Selman, co-chair of the AAAI presidential panel on long-term AI futures: selman@cs.cornell.edu
  • Prof. Francesca Rossi, Professor of Computer Science, University of Padova and Harvard University, president of the International Joint Conference on Artificial Intelligence (IJCAI): frossi@math.unipd.it
  • Prof. Murray Shanahan, Imperial College: m.shanahan@imperial.ac.uk


Max Tegmark interviews Elon Musk about his life, his interest in the future of humanity and the background to his donation

Future of AI at SciFoo 2015

Here is a short summary of the Future of AI session organized at SciFoo by Nick Bostrom, Gary Marcus, Jaan Tallinn, Max Tegmark and Murray Shanahan.

Hawking Reddit AMA on AI

Our Scientific Advisory Board member Stephen Hawking’s long-awaited Reddit AMA answers on Artificial Intelligence have just come out and were all over today’s world news, including MSNBC, Huffington Post, The Independent, and Time.

Read the Q&A below and visit the official Reddit page for the full discussion:

Question 1:

Professor Hawking- Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Answer 1:

You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

Question 2:

Hello Doctor Hawking, thank you for doing this AMA. I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?

Answer 2:

The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

Question 3:

Hello, Prof. Hawking. Thanks for doing this AMA! Earlier this year you, Elon Musk, and many other prominent science figures signed an open letter warning the society about the potential pitfalls of Artificial Intelligence. The letter stated: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” While being a seemingly reasonable expectation, this statement serves as a start point for the debate around the possibility of Artificial Intelligence ever surpassing the human race in intelligence.
My questions: 1. One might think it impossible for a creature to ever acquire a higher intelligence than its creator. Do you agree? If yes, then how do you think artificial intelligence can ever pose a threat to the human race (their creators)? 2. If it was possible for artificial intelligence to surpass humans in intelligence, where would you define the line of “It’s enough”? In other words, how smart do you think the human race can make AI, while ensuring that it doesn’t surpass them in intelligence?

Answer 3:

It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

Question 4:

I’m rather late to the question-asking party, but I’ll ask anyway and hope. Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done? Thank you for your time and your contributions. I’ve found research to be a largely social endeavor, and you’ve been an inspiration to so many.

Answer 4:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Question 5:

Hello Professor Hawking, thank you for doing this AMA! I’ve thought lately about biological organisms’ will to survive and reproduce, and how that drive evolved over millions of generations. Would an AI have these basic drives, and if not, would it be a threat to humankind? Also, what are two books you think every person should read?

Answer 5:

An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

Question 6:

Thanks for doing this AMA. I am a biologist. Your fear of AI appears to stem from the assumption that AI will act like a new biological species competing for the same resources or otherwise transforming the planet in ways incompatible with human (or other) life. But the reason that biological species compete like this is because they have undergone billions of years of selection for high reproduction. Essentially, biological organisms are optimized to ‘take over’ as much as they can. It’s basically their ‘purpose’. But I don’t think this is necessarily true of an AI. There is no reason to surmise that AI creatures would be ‘interested’ in reproducing at all. I don’t know what they’d be ‘interested’ in doing. I am interested in what you think an AI would be ‘interested’ in doing, and why that is necessarily a threat to humankind that outweighs the benefits of creating a sort of benevolent God.

Answer 6:

You’re right that we need to avoid the temptation to anthropomorphize and assume that AIs will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

Future of Life Institute Summer 2015 Newsletter

TOP DEVELOPMENTS

* $7M in AI research grants announced: We were delighted to announce the selection of 37 AI safety research teams to which we plan to award a total of $7 million in funding. The grant program is funded by Elon Musk and the Open Philanthropy Project.

Max Tegmark, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, was interviewed on NPR’s On Point Radio for a lively discussion about our new AI safety research program.

* Open letter about autonomous weapons: FLI recently published an open letter advocating a global ban on offensive autonomous weapons development. Thousands of prominent scientists and concerned individuals are signatories, including Stephen Hawking, Elon Musk, the team at DeepMind, Yann LeCun (Director of AI Research, Facebook), Eric Horvitz (Managing Director, Microsoft Research), Noam Chomsky and Steve Wozniak.

Stuart Russell was interviewed about the letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video).

* Open letter about economic impacts of AI: Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders have launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

 

EVENTS

* ITIF AI policy panel: Stuart Russell and MIRI Executive Director Nate Soares participated in a panel discussion about the risks and policy implications of AI (video here). The panel was hosted by the Information Technology & Innovation Foundation (ITIF), a Washington-based think tank focusing on the intersection of public policy & emerging technology.

* IJCAI 15: Stuart Russell presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

* EA Global conferences: FLI co-founders Viktoriya Krakovna and Anthony Aguirre spoke at the Effective Altruism Global (EA Global) conference at Google headquarters in Mountain View, California. FLI co-founder Jaan Tallinn spoke at the EA Global Oxford conference on August 28-30.

* Stephen Hawking AMA: Professor Hawking is hosting an “Ask Me Anything” (AMA) conversation on Reddit. Users recently submitted questions here; his answers will follow in the near future.

 

OTHER UPDATES

* FLI anniversary video: FLI co-founder Meia Chita-Tegmark created an anniversary video highlighting our accomplishments from our first year.

* Future of AI FAQ: We’ve created a FAQ about the future of AI, which elaborates on the position expressed in our first open letter about AI development from January.

AI safety research on NPR

I just had the pleasure of discussing our new AI safety research program on National Public Radio. I was fortunate to be joined by two of the winners of our grants competition: CMU roboticist Manuela Veloso and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.

And the winners are…

After a grueling expert review of almost 300 grant proposals from around the world, we are delighted to announce the 37 research teams that have been recommended for funding to help keep AI beneficial. We plan to award these teams a total of about $7M from Elon Musk and the Open Philanthropy Project over the next three years, with most of the research projects starting by September 2015. The winning teams will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

Happy Birthday, FLI!

Today we are celebrating one year since our launch event. It’s been an amazing year, full of wonderful accomplishments, and we would like to express our gratitude to all those who supported us with their advice, hard work and resources. Thank you – and let’s make this year even better!

Here’s a video with some of the highlights of our first year. You’ll find many familiar faces here, perhaps including your own!

Jaan Tallinn on existential risks

An excellent piece about existential risks by FLI co-founder Jaan Tallinn on Edge.org:

“The reasons why I’m engaged in trying to lower the existential risks has to do with the fact that I’m a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about — in the pallet of actions that you have — what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn’t make a significant difference in these areas.”

From the introduction by Max Tegmark:

“Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning.”

AI grant results

We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the response was overwhelming: about 300 applications for a total of about $100M, including a great diversity of awesome teams and projects from around the world. Thanks to hard work by a team of expert reviewers, we’ve now invited roughly the strongest quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.