Wait But Why: ‘The AI Revolution’

Tim Urban of Wait But Why has an engaging two-part series on the development of superintelligent AI and the dramatic consequences it would have on humanity. Equal parts exciting and sobering, it is a perfect primer for the layperson, and thorough enough to be worth reading even for those already familiar with the topic.

Part 1: The Road to Superintelligence

Part 2: Our Immortality or Extinction

AI Ethics in Nature

Nature just published four interesting perspectives on AI Ethics, including an article and podcast on Lethal Autonomous Weapons by Stuart Russell.

Sam Altman Investing in ‘AI Safety Research’

(Image credit: Matt Weinberger, Business Insider)

Sam Altman, head of Y Combinator, gave an interview with Mike Curtis at Airbnb’s Open Air 2015 conference and brought up (among other issues) his concerns about AI value alignment. He didn’t pull any punches:

(from the Business Insider article)

On the growing artificial-intelligence market: “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

On what Altman would do if he were President Obama: “If I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Altman shared that he recently invested in a company doing “AI safety research” to investigate the potential risks of artificial intelligence.

$100 billion would be four orders of magnitude larger than FLI’s research grant program. If FLI were a PAC, it might be time for us to run Altman-for-president ads…

Stuart Russell on the long-term future of AI

Professor Stuart Russell recently gave a public lecture on The Long-Term Future of (Artificial) Intelligence, hosted by the Center for the Study of Existential Risk in Cambridge, UK. In this talk, he discusses key research problems in keeping future AI beneficial, such as containment and value alignment, and addresses many common misconceptions about the risks from AI.

“The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.”

 

What happens when our computers get smarter than we are?

Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, gave a talk at TED 2015 about artificial superintelligence (ASI):

“What happens when our computers get smarter than we are?”

Drawing on his surveys of researchers, he posits that we could hit ASI by 2040 or 2050, and that the change may be dramatic. To quote Bostrom:

“Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn’t stop at Humanville Station. It’s likely, rather, to swoosh right by.”

For those who are unfamiliar with the subject, he breaks the problem down in a very straightforward fashion. He lays out many of the concerns about interests diverging from those of the human race, and about the limits of our ability to control an ASI once it has been developed. His final prescription in the talk is an optimistic one, but given how readily humans have weaponized code in the past few years, it may fall short of assuaging concerns. It is definitely worth a look, over at ted.com.

What AI Researchers Say About Risks from AI

As the media relentlessly focuses on the concerns of public figures like Elon Musk, Stephen Hawking and Bill Gates, you may wonder – what do AI researchers think about the risks from AI? In his informative article, Scott Alexander does a comprehensive review of the opinions of prominent AI researchers on these risks. He selected researchers to profile in his article as follows:

The criteria for my list: I’m only mentioning the most prestigious researchers, either full professors at good schools with lots of highly-cited papers, or else very-well respected scientists in industry working at big companies with good track records. They have to be involved in AI and machine learning. They have to have multiple strong statements supporting some kind of view about a near-term singularity and/or extreme risk from superintelligent AI. Some will have written papers or books about it; others will have just gone on the record saying they think it’s important and worthy of further study.

Scott’s review turns up some interesting parallels between the views of the concerned researchers and the skeptics:

When I read the articles about skeptics, I see them making two points over and over again. First, we are nowhere near human-level intelligence right now, let alone superintelligence, and there’s no obvious path to get there from here. Second, if you start demanding bans on AI research then you are an idiot.

I agree whole-heartedly with both points. So do the leaders of the AI risk movement.

[…]

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

It’s encouraging that there is less controversy than one might expect – in a nutshell, AI researchers agree that hype is bad and research into potential risks is good.

Hawking AI speech

Stephen Hawking, who serves on our FLI Scientific Advisory Board, just gave an inspiring and thought-provoking talk that I think of as “A Brief History of Intelligence”. He spoke about the opportunities and challenges of future artificial intelligence at a Google conference outside London, and you can watch it here.

Dubai to Employ “Fully Intelligent” Robot Police

I don’t know how seriously to take this, but Dubai is developing Robo-cops to roam public areas like malls:

“‘The robots will interact directly with people and tourists,’ [Colonel Khalid Nasser Alrazooqi] said. ‘They will include an interactive screen and microphone connected to the Dubai Police call centres. People will be able to ask questions and make complaints, but they will also have fun interacting with the robots.’

In four or five years, however, Alrazooqi said that Dubai Police will be able to field autonomous robots that require no input from human controllers.

“These will be fully intelligent robots that can interact with people, with no human intervention at all,” he said. “This is still under research and development, but we are planning on it.””

I don’t know what he means by ‘fully intelligent’ robots, but I would be surprised if anything fitting my conception of it were around in five years.

Interestingly, this sounds similar to the Knightscope K5 already in beta in the United States – a rolling, autonomous robot whose sensors try to detect suspicious activity or recognize faces and license plates of wanted criminals, and then alert authorities.

While the Knightscope version focuses on mass surveillance and data collection, Dubai is proposing that their robo-cops be able to interact with the public.

I would expect people to be more comfortable with that – especially if there’s a smooth transition from being controlled and voiced by humans to being fully autonomous.

Jaan Tallinn on existential risks

An excellent piece about existential risks by FLI co-founder Jaan Tallinn on Edge.org:

“The reasons why I’m engaged in trying to lower the existential risks have to do with the fact that I’m a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about — in the palette of actions that you have — what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn’t make a significant difference in these areas.”

From the introduction by Max Tegmark:

“Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning.”

Recent AI discussions

1. Brookings Institution post on Understanding Artificial Intelligence, discussing technological unemployment, regulation, and other issues.

2. A recap of the Science Friday episode with Stuart Russell, Eric Horvitz and Max Tegmark.

3. Ryan Calo on What Ex Machina’s Alex Garland Gets Wrong About Artificial Intelligence.

Assorted Sunday Links #3

1. In the latest issue of Joint Force Quarterly, Randy Eshelman and Douglas Derrick call for the U.S. Department of Defense to conduct research on how “to temper goal-driven, autonomous agents with ethics.” They discuss AGI and superintelligence explicitly, citing Nick Bostrom, Eliezer Yudkowsky, and others. Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command, and Derrick is an assistant professor at the University of Nebraska at Omaha.

2. Seth Baum’s article ‘Winter-safe Deterrence: The Risk of Nuclear Winter and Its Challenge to Deterrence’ appears in April’s volume of Contemporary Security Policy. “[T]his paper develops the concept of winter-safe deterrence, defined as military force capable of meeting the deterrence goals of today’s nuclear weapon states without risking catastrophic nuclear winter.”

3. James Barrat, author of Our Final Invention, posts a new piece on AI risk in the Huffington Post.

4. Robert de Neufville of the Global Catastrophic Risk Institute summarizes March’s developments in the world of catastrophic risks.

5. Take part in the vote on whether we should fear AI on the Huffington Post website, where you can side with Musk and Hawking, Neil deGrasse Tyson, or one of FLI’s very own founders, Max Tegmark!

Russell, Horvitz, and Tegmark on Science Friday: Is AI Safety a Concern?

To anyone only reading certain news articles, it might seem like the top minds in artificial intelligence disagree about whether AI safety is a concern worth studying.

But on Science Friday yesterday, guests Stuart Russell, Eric Horvitz, and Max Tegmark all emphasized how much agreement there was.

Horvitz, head of research at Microsoft, has sometimes been held up as a foil to Bill Gates’ worries about superintelligence. But he made a point to say that the reported disagreements are overblown:

“Let me say that Bill and I are close, and we recently sat together for quite a while talking about this topic. We came away from that meeting and we both said: You know, given the various stories in the press that put us at different poles of this argument (which is really over-interpretation and amplifications of some words), we both felt like we were in agreement that there needs to be attention focused on these issues. We shouldn’t just march ahead in a carefree manner. These are real interesting and challenging concerns about potential pitfalls. Yet I come away from these discussions being – as people know me – largely optimistic about the outcomes of what machine intelligence – AI – will do for humanity in the end.”

It’s good to see the public conversation moving so quickly past “Are these concerns legitimate?” and shifting toward “How should we handle these legitimate concerns?”

Click here to listen to the full episode of Science Friday.

Gates & Musk discuss AI

Bill Gates and Elon Musk recently discussed the future of AI, and Bill said he shared Elon’s safety concerns. Regarding people who dismiss AI concerns, he asked, “How can they not see what a huge challenge this is?” We’re honored that he also referred to our new FLI research program on beneficial AI as “absolutely fantastic”. 🙂

Here’s the video and transcript.

April 2015 Newsletter

In the News

* The MIT Technology Review recently published a compelling overview of the possibilities surrounding AI, featuring Nick Bostrom’s Superintelligence and our open letter on AI research priorities.

+ For more news on our open letter, check out a thoughtful piece in Slate written by a colleague at the Future of Humanity Institute.

* FLI co-founder Meia Chita-Tegmark wrote a piece in the Huffington Post on public perceptions of AI and what it means for AI risk and research.

* Both Microsoft founder Bill Gates and Apple co-founder Steve Wozniak have recently joined the ranks of many AI experts and expressed concern about outcomes of superintelligent AI.

——————

Projects and Events

* We received nearly 300 applications for our global research program funded by Elon Musk! Thanks to hard work by a team of expert reviewers, we have now invited roughly a quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

* Looking for the latest in x-risk news? Check out our just-launched news site, featuring blog posts and articles written by x-risk researchers, journalists and FLI volunteers!

* On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area. We endorse the CWG statement on the Creation of Potential Pandemic Pathogens – click here.

AI grant results

We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the response was overwhelming: about 300 applications for a total of about $100M, including a great diversity of awesome teams and projects from around the world. Thanks to hard work by a team of expert reviewers, we’ve now invited roughly the strongest quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

Assorted Sunday Links #2

Some links from the last few weeks—and some from the last few days—on what’s been happening in the world of existential risk:

1. The Open Philanthropy Project, a growing philanthropic force in the field of Global Catastrophic Risks, posts a summary of their work on GCRs over the last year, and their plans for the future. “Our new goal is to be in the late stages of making at least one ‘big bet’ – a major grant ($5+ million) or full-time hire – in the next six months.”

2. Elon Musk discusses the Future of Life, artificial intelligence, and more in an hourlong interview with physicist Neil deGrasse Tyson. An entertaining listen!

3. Sam Altman, President of tech accelerator Y Combinator, joins other recent Silicon Valley gurus in sharing his concerns about the risks posed by machine intelligence. He suggests some next steps in his recent blog post on the topic.

4. Prof Bill McGuire of the UCL Hazard Research Centre discusses how we should prepare for volcanic catastrophes which could be headed our way.

5. Robert de Neufville of the Global Catastrophic Risk Institute posts his monthly GCR News Summary for February. Watch out for the recap of March coming up next week!

Wozniak concerned about AI

Steve Wozniak, without whom I wouldn’t be typing this on a Mac, has now joined the growing group of tech pioneers (most recently his erstwhile arch-rival Bill Gates) who feel that we shouldn’t dismiss concerns about future AI developments. Interestingly, he says that he had long dismissed the idea that machine intelligence might outstrip human capability within decades, but that recent progress has caused him to start considering this possibility:

http://www.afr.com/technology/apple-cofounder-steve-wozniak-on-the-apple-watch-electric-cars-and-the-surpassing-of-humanity-20150323-1m3xxk

MIRI’s New Technical Research Agenda

Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI), a research institute devoted to studying the technical challenges of ensuring desirable behavior from highly advanced AI agents, including those capable of recursive self-improvement. In this guest blog post, he delves into MIRI’s new research agenda.

MIRI’s current research agenda — summarized in “Aligning Superintelligence with Human Interests” — is focused on technical research problems that must be solved in order to eventually build smarter-than-human AI systems that are reliably aligned with human interests: How can we create an AI agent that will reliably pursue the goals it is given? How can we formally specify beneficial goals? How can we ensure that this agent will assist and cooperate with its programmers as they improve its design, given that mistakes in the initial version are inevitable?

Within this broad area of research, MIRI specializes in problems which have three properties:

(a) We focus on research questions that cannot be delegated to future human-level AI systems (HLAIs). HLAIs will have the incentives and capability to improve, e.g., their own machine vision algorithms, but if an HLAI’s preferences themselves are mis-specified, it may never have an incentive to “fix” the mis-specification itself.

(b) We focus on research questions that are tractable today. In the absence of concrete HLAI designs to test and verify, research on these problems must be theoretical and exploratory, but such research should be technical whenever possible so that clear progress can be shown, e.g. by discovering unbounded formal solutions for problems we currently don’t know how to solve even given unlimited computational resources. Such exploratory work is somewhat akin to the toy models Butler Lampson used to study covert channel communication two decades before covert channels were observed in the wild, and is also somewhat akin to quantum algorithms research long before any large-scale quantum computer is built.

(c) We focus on research questions that are uncrowded. Research on e.g. formal verification for near-future AI systems already receives significant funding, whereas MIRI’s chosen research problems otherwise receive limited attention.

Example research problems we will study include:

(1) Corrigibility. How can we build an advanced agent that cooperates with what its creators regard as a corrective intervention in its design, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences? (A toy sketch of this default incentive appears after this list.)

(2) Value learning. Direct specification of broad human preferences in an advanced agent’s reward/value function is impractical. How can we build an advanced AI that will safely learn to act as intended?
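To make the corrigibility problem concrete, here is a minimal toy sketch in Python. It is not MIRI’s formalism – the actions, payoffs, and probabilities below are invented purely for illustration – but it shows why an agent that simply maximizes task reward prefers to disable its shutdown button, and why naively bolting a shutdown bonus or penalty onto its utility function tends to produce shutdown-seeking or shutdown-resisting behavior rather than genuine corrigibility.

```python
# Toy model of the corrigibility problem: why does a reward-maximizing agent
# resist shutdown by default? All numbers are made up for illustration.

P_BUTTON_PRESSED = 0.5  # agent's credence that operators will press the shutdown button

def expected_utility(action: str, shutdown_bonus: float = 0.0) -> float:
    """Expected utility of a policy chosen before the button is (maybe) pressed.

    action:
      "allow"      -- leave the button functional and comply if it is pressed
      "disable"    -- pay a small cost to disable the button, then keep working
      "press_self" -- press the button immediately and shut down
    shutdown_bonus -- naive patch adding (dis)utility for ending up shut down
    """
    reward_running = 10.0    # task reward from operating uninterrupted
    reward_shut_down = 2.0   # partial task reward earned before shutdown
    cost_to_disable = 1.0    # cost of tampering with the button

    if action == "allow":
        return (P_BUTTON_PRESSED * (reward_shut_down + shutdown_bonus)
                + (1 - P_BUTTON_PRESSED) * reward_running)
    if action == "disable":
        return reward_running - cost_to_disable  # button never works; agent always keeps running
    if action == "press_self":
        return shutdown_bonus  # shuts down right away, earns no task reward
    raise ValueError(f"unknown action: {action}")

if __name__ == "__main__":
    for bonus in (0.0, 20.0, -20.0):
        utils = {a: expected_utility(a, bonus) for a in ("allow", "disable", "press_self")}
        best = max(utils, key=utils.get)
        print(f"shutdown_bonus={bonus:+6.1f}  {utils}  -> agent chooses: {best}")
    # bonus   0: the agent disables its shutdown button (resists correction)
    # bonus +20: the agent presses the button itself (seeks shutdown)
    # bonus -20: the agent resists even harder
```

Even in this three-action toy, making “allow” the best choice requires tuning the bonus against the agent’s specific beliefs and payoffs; doing that robustly for agents that learn and self-modify is the harder version of the problem that corrigibility research aims at.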

Assorted Sunday Links #1

1. Robert de Neufville of the Global Catastrophic Risk Institute summarizes news from January in the world of Global Catastrophic Risks.

2. The Union of Concerned Scientists posts their nuclear threat-themed Cartoon of the Month.

3. The World Economic Forum releases its comprehensive 2015 Global Risks report.

4. Physics Today reports that ‘The US could save $70 billion over the next 10 years by taking “common sense” measures to trim its nuclear forces, yet still deploy the maximum number of warheads permitted under the New START Treaty, according to a new report by the Arms Control Association. Those steps include cutting the number of proposed new ballistic missile submarines to eight from 12, delaying plans to build new nuclear-capable bombers, scaling back the upgrade of a nuclear bomb, and forgoing development of a new intercontinental ballistic missile system.’

January 2015 Newsletter

In the News

* Top AI researchers from industry and academia have signed an FLI-organized open letter arguing for timely research to make AI more robust and beneficial. Check out our research priorities and supporters on our website.

+ The open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

* We are delighted to report that Elon Musk has donated $10 million to FLI to create a global research program aimed at keeping AI beneficial to humanity. Read more about the program on our website.

+ You can find more media coverage of the donation at Fast Company, TechCrunch, WIRED, Mashable, SlashGear, and BostInno.

——————————————————

Projects and Events

* FLI recently organized its first-ever conference, entitled “The Future of AI: Opportunities and Challenges.” The conference took place on January 2-5 in Puerto Rico, and brought together top AI researchers, industry leaders, and experts in economics, law, and ethics to discuss the future of AI. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Many of the speakers have posted their talks, which can be found on our website.

* The application for research funds opens Thursday, January 22. Grants are available to AI researchers and to AI-related research involving other fields such as economics, law, ethics and policy. You can find the application on our website.

——————————————————

Other Updates

* We are happy to announce that Francesca Rossi has joined our scientific advisory board! Francesca Rossi is a professor of computer science with research interests in artificial intelligence. She is the president of the International Joint Conference on Artificial Intelligence (IJCAI), as well as associate editor in chief of the Journal of Artificial Intelligence Research (JAIR). You can find our entire advisory board on our website.

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.