Hawking AI speech

Stephen Hawking, who serves on our FLI Scientific Advisory Board, just gave an inspiring and thought-provoking talk that I think of as “A Brief History of Intelligence”. He spoke of the opportunities and challenges related to future artificial intelligence at a Google conference outside London, and you can watch it here.

Dubai to Employ “Fully Intelligent” Robot Police

I don’t know how seriously to take this, but Dubai is developing Robo-cops to roam public areas like malls:

“‘The robots will interact directly with people and tourists,’ [Colonel Khalid Nasser Alrazooqi] said. ‘They will include an interactive screen and microphone connected to the Dubai Police call centres. People will be able to ask questions and make complaints, but they will also have fun interacting with the robots.’

In four or five years, however, Alrazooqi said that Dubai Police will be able to field autonomous robots that require no input from human controllers.

“These will be fully intelligent robots that can interact with people, with no human intervention at all,” he said. “This is still under research and development, but we are planning on it.””

I don’t know what he means by ‘fully intelligent’ robots, but I would be surprised if anything fitting my conception of it were around in five years.

Interestingly, this sounds similar to the Knightscope K5 already in beta in the United States – a rolling, autonomous robot whose sensors try to detect suspicious activity or recognize faces and license plates of wanted criminals, and then alert authorities.

While the Knightscope version focuses on mass surveillance and data collection, Dubai is proposing that their robo-cops be able to interact with the public.

I would expect people to be more comfortable with that – especially if there’s a smooth transition from being controlled and voiced by humans to being fully autonomous.

Jaan Tallinn on existential risks

An excellent piece about existential risks by FLI co-founder Jaan Tallinn on Edge.org:

“The reasons why I’m engaged in trying to lower the existential risks has to do with the fact that I’m a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about — in the pallet of actions that you have — what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn’t make a significant difference in these areas.”

From the introduction by Max Tegmark:

“Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning.”

Recent AI discussions

1. Brookings Institution post on Understanding Artificial Intelligence, discussing technological unemployment, regulation, and other issues.

2. A recap of the Science Friday episode with Stuart Russell, Eric Horvitz and Max Tegmark.

3. Ryan Calo on What Ex Machina’s Alex Garland Gets Wrong About Artificial Intelligence.

Assorted Sunday Links #3

1. In the latest issue of Joint Force Quarterly, Randy Eshelman and Douglas Derrick call for the U.S. Department of Defense to conduct research on how “to temper goal-driven, autonomous agents with ethics.” They discuss AGI and superintelligence explicitly, citing Nick Bostrom, Eliezer Yudkowsky, and others. Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command, and Derrick is an Assistant Professor at the University of Nebraska at Omaha.

2. Seth Baum’s article ‘Winter-safe Deterrence: The Risk of Nuclear Winter and Its Challenge to Deterrence’ appears in the April issue of Contemporary Security Policy. “[T]his paper develops the concept of winter-safe deterrence, defined as military force capable of meeting the deterrence goals of today’s nuclear weapon states without risking catastrophic nuclear winter.”

3. James Barrat, author of Our Final Invention, posts a new piece on AI risk in the Huffington Post.

4. Robert de Neufville of the Global Catastrophic Risk Institute summarizes March’s developments in the world of catastrophic risks.

5. Take part in the vote on whether we should fear AI on the Huffington Post website, where you can side with Musk and Hawking, Neil deGrasse Tyson, or one of FLI’s very own founders, Max Tegmark!

Russell, Horvitz, and Tegmark on Science Friday: Is AI Safety a Concern?

To anyone reading only certain news articles, it might seem like the top minds in artificial intelligence disagree about whether AI safety is a concern worth studying.

But on Science Friday yesterday, guests Stuart Russell, Eric Horvitz, and Max Tegmark all emphasized how much agreement there was.

Horvitz, head of research at Microsoft, has sometimes been held up as a foil to Bill Gates’ worries about superintelligence. But he made a point to say that the reported disagreements are overblown:

“Let me say that Bill and I are close, and we recently sat together for quite a while talking about this topic. We came away from that meeting and we both said: You know, given the various stories in the press that put us at different poles of this argument (which is really over-interpretation and amplifications of some words), we both felt like we were in agreement that there needs to be attention focused on these issues. We shouldn’t just march ahead in a carefree manner. These are real interesting and challenging concerns about potential pitfalls. Yet I come away from these discussions being – as people know me – largely optimistic about the outcomes of what machine intelligence – AI – will do for humanity in the end.”

It’s good to see the public conversation moving so quickly past “Are these concerns legitimate?” and shifting toward “How should we handle these legitimate concerns?”

Click here to listen to the full episode of Science Friday.

Gates & Musk discuss AI

Bill Gates and Elon Musk recently discussed the future of AI, and Bill said he shared Elon’s safety concerns. Regarding people dismissing AI concerns, he said, “How can they not see what a huge challenge this is?” We’re honored that he also referred to our new FLI research program on beneficial AI as “absolutely fantastic”. 🙂

Here’s the video and transcript.

April 2015 Newsletter

In the News

* The MIT Technology Review recently published a compelling overview of the possibilities surrounding AI, featuring Nick Bostrom’s Superintelligence and our open letter on AI research priorities.

+ For more news on our open letter, check out a thoughtful piece in Slate written by a colleague at the Future of Humanity Institute.

* FLI co-founder Meia Chita-Tegmark wrote a piece in the Huffington Post on public perceptions of AI and what it means for AI risk and research.

* Both Microsoft founder Bill Gates and Apple co-founder Steve Wozniak have recently joined the ranks of many AI experts and expressed concern about outcomes of superintelligent AI.

——————

Projects and Events

* We received nearly 300 applications for our global research program funded by Elon Musk! Thanks to hard work by a team of expert reviewers, we have now invited roughly a quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

* Looking for the latest in x-risk news? Check out our just-launched news site, featuring blog posts and articles written by x-risk researchers, journalists and FLI volunteers!

* On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area. We endorse the CWG statement on the Creation of Potential Pandemic Pathogens – click here.

AI grant results

We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the response was overwhelming: about 300 applications for a total of about $100M, including a great diversity of awesome teams and projects from around the world. Thanks to hard work by a team of expert reviewers, we’ve now invited roughly the strongest quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

Assorted Sunday Links #2

Some links from the last few weeks—and some from the last few days—on what’s been happening in the world of existential risk:

1. The Open Philanthropy Project, a growing philanthropic force in the field of Global Catastrophic Risks, posts a summary of their work on GCRs over the last year, and their plans for the future. “Our new goal is to be in the late stages of making at least one ‘big bet’ – a major grant ($5+ million) or full-time hire – in the next six months.”

2. Elon Musk discusses the Future of Life, artificial intelligence, and more in an hourlong interview with physicist Neil deGrasse Tyson. An entertaining listen!

3. Sam Altman, President of tech accelerator Y Combinator, joins other recent Silicon Valley gurus in sharing his concerns about the risks posed by machine intelligence. He suggests some next steps in his recent blog post on the topic.

4. Prof. Bill McGuire of the UCL Hazard Research Centre discusses how we should prepare for volcanic catastrophes that could be headed our way.

5. Robert de Neufville of the Global Catastrophic Risk Institute posts his monthly GCR News Summary for February. Watch out for the recap of March coming up next week!

Wozniak concerned about AI

Steve Wozniak, without whom I wouldn’t be typing this on a Mac, has now joined the growing group of tech pioneers (most recently his erstwhile arch-rival Bill Gates) who feel that we shouldn’t dismiss concerns about future AI developments. Interestingly, he says that he had long dismissed the idea that machine intelligence might outstrip human capability within decades, but that recent progress has caused him to start considering this possibility:

http://www.afr.com/technology/apple-cofounder-steve-wozniak-on-the-apple-watch-electric-cars-and-the-surpassing-of-humanity-20150323-1m3xxk

MIRI’s New Technical Research Agenda

Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI), a research institute devoted to studying the technical challenges of ensuring desirable behavior from highly advanced AI agents, including those capable of recursive self-improvement. In this guest blog post, he delves into MIRI’s new research agenda.

MIRI’s current research agenda — summarized in “Aligning Superintelligence with Human Interests” — is focused on technical research problems that must be solved in order to eventually build smarter-than-human AI systems that are reliably aligned with human interests: How can we create an AI agent that will reliably pursue the goals it is given? How can we formally specify beneficial goals? How can we ensure that this agent will assist and cooperate with its programmers as they improve its design, given that mistakes in the initial version are inevitable?

Within this broad area of research, MIRI specializes in problems which have three properties:

(a) We focus on research questions that cannot be delegated to future human-level AI systems (HLAIs). HLAIs will have the incentives and capability to improve e.g. their own machine vision algorithms, but if an HLAI’s preferences themselves are mis-specified, it may never have an incentive to “fix” the mis-specification itself.

(b) We focus on research questions that are tractable today. In the absence of concrete HLAI designs to test and verify, research on these problems must be theoretical and exploratory, but such research should be technical whenever possible so that clear progress can be shown, e.g. by discovering unbounded formal solutions for problems we currently don’t know how to solve even given unlimited computational resources. Such exploratory work is somewhat akin to the toy models Butler Lampson used to study covert channel communication two decades before covert channels were observed in the wild, and is also somewhat akin to quantum algorithms research long before any large-scale quantum computer is built.

(c) We focus on research questions that are uncrowded. Research on e.g. formal verification for near-future AI systems already receives significant funding, whereas MIRI’s chosen research problems otherwise receive limited attention.

Example research problems we will study include:

(1) Corrigibility. How can we build an advanced agent that cooperates with what its creators regard as a corrective intervention in its design, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences? (A toy sketch of this default incentive appears after this list.)

(2) Value learning. Direct specification of broad human preferences in an advanced agent’s reward/value function is impractical. How can we build an advanced AI that will safely learn to act as intended?
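
To make the corrigibility question in (1) concrete, here is a minimal toy sketch of why a naive reward-maximizing agent prefers to disable its off-switch, and how a crude compensation scheme (roughly in the spirit of “utility indifference” proposals) removes that incentive. This is my own illustration, not material from MIRI’s agenda; the shutdown probability, reward values, and compensation rule are all hypothetical.

```python
# Toy "off-switch" model (illustrative only; all numbers are hypothetical).
# Before starting its task, the agent chooses whether to leave its shutdown
# button intact or to disable it.

P_PRESS = 0.3       # assumed chance the overseers press the button mid-task
TASK_REWARD = 10.0  # reward for finishing the task

def naive_expected_reward(disable_button: bool) -> float:
    """Expected reward for an agent that only cares about finishing the task."""
    if disable_button:
        return TASK_REWARD                # nothing can interrupt the task
    return (1.0 - P_PRESS) * TASK_REWARD  # task finishes only if not shut down

print({a: naive_expected_reward(a) for a in (False, True)})
# {False: 7.0, True: 10.0} -> the naive agent prefers disabling its off-switch.

def indifferent_expected_reward(disable_button: bool) -> float:
    """Crude 'indifference'-style patch: if shut down, the agent is credited
    the reward it would otherwise have expected, so disabling the button no
    longer improves its expected reward."""
    if disable_button:
        return TASK_REWARD
    compensation = TASK_REWARD  # credited on shutdown in place of task reward
    return (1.0 - P_PRESS) * TASK_REWARD + P_PRESS * compensation

print({a: indifferent_expected_reward(a) for a in (False, True)})
# {False: 10.0, True: 10.0} -> no remaining incentive to tamper with the button.
```

Note that the patched agent is merely indifferent: it no longer gains by disabling the button, but it also has no positive reason to preserve it, which is one reason corrigibility remains an open research problem rather than a solved one.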

Assorted Sunday Links #1

1. Robert de Neufville of the Global Catastrophic Risk Institute summarizes news from January in the world of Global Catastrophic Risks.

2. The Union of Concerned Scientists posts their nuclear threat-themed Cartoon of the Month.

3. The World Economic Forum releases their comprehensive report for 2015 of Global Risks.

4. Physics Today reports that ‘The US could save $70 billion over the next 10 years by taking “common sense” measures to trim its nuclear forces, yet still deploy the maximum number of warheads permitted under the New START Treaty, according to a new report by the Arms Control Association. Those steps include cutting the number of proposed new ballistic missile submarines to eight from 12, delaying plans to build new nuclear-capable bombers, scaling back the upgrade of a nuclear bomb, and forgoing development of a new intercontinental ballistic missile system.’

January 2015 Newsletter

In the News

* Top AI researchers from industry and academia have signed an FLI-organized open letter arguing for timely research to make AI more robust and beneficial. Check out our research priorities and supporters on our website.

+ The open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

* We are delighted to report that Elon Musk has donated $10 million to FLI to create a global research program aimed at keeping AI beneficial to humanity. Read more about the program on our website.

+ You can find more media coverage of the donation at Fast Company, Tech Crunch, WIRED, Mashable, Slash Gear, and BostInno.

——————

Projects and Events

* FLI recently organized its first-ever conference, entitled “The Future of AI: Opportunities and Challenges.” The conference took place on January 2-5 in Puerto Rico, and brought together top AI researchers, industry leaders, and experts in economics, law, and ethics to discuss the future of AI. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Many of the speakers have posted their talks, which can be found on our website.

* The application for research funds opens Thursday, January 22. Grants are available to AI researchers and to AI-related research involving other fields such as economics, law, ethics and policy. You can find the application on our website.

——————

Other Updates

* We are happy to announce that Francesca Rossi has joined our scientific advisory board! Francesca Rossi is a professor of computer science, with research interests in artificial intelligence. She is the president of the International Joint Conference on Artificial Intelligence (IJCAI), as well as the associate editor in chief for the Journal of AI Research (JAIR). You can find our entire advisory board on our website.

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.

The Future of Artificial Intelligence

Seán Ó hÉigeartaigh is the Executive Director of the Centre for the Study of Existential Risk, based at the University of Cambridge.

Artificial intelligence leaders in academia and industry, and legal, economic and risk experts worldwide recently signed an open letter calling for the robust and beneficial development of artificial intelligence. The letter follows a recent private conference organised by the Future of Life Institute and funded by FLI and CSER’s Jaan Tallinn, in which the future opportunities and societal challenges posed by artificial intelligence were explored by AI leaders and interdisciplinary researchers.

The conference resulted in a set of research priorities aimed at making progress on the technical, legal, and economic challenges posed by this rapidly developing field.

This conference, the research preceding it, and the support for the concerns raised in the letter, may make this a pivotal moment in the development of this transformative field. But why is this happening now?

Why now?

An exciting new wave of progress in artificial intelligence is happening due to the success of a set of new approaches – “hot” areas include deep learning and other statistical learning methods. Advances in related fields like probability, decision theory, neuroscience and control theory are also contributing. These have kick-started rapid improvements on problems where progress has been very slow until now: image and speech recognition, perception and movement in robotics, and performance of autonomous vehicles are just a few examples. As a result, impacts on society that seemed far away now suddenly seem pressing.

Is society ready for the opportunities – and challenges – of AI?

Artificial intelligence is a general purpose technology – one that will affect the development of many other technologies. As a result, it will affect society deeply and in many different ways. The near- and long-term benefits will be great – it will increase the world’s economic prosperity, and enhance our ability to make progress on many important problems. In particular, any area where progress depends on analyzing and using huge amounts of data – climate change, health research, biotechnology – could be accelerated.

However, even impacts that are positive in the long-run can cause a lot of near-term challenges. What happens when swathes of the labour market become automated?  Can our legal systems assign blame when there is an accident involving a self-driving car? Does the use of autonomous weapons in war conflict with basic human rights?

It’s no longer enough to ask “can we build it?” Now that it looks like we can, we have to ask: “How can we build it to provide most benefit? And how must we update our own systems – legal, economic, ethical – so that the transition is smooth, and we make the most of the positives while minimizing the negatives?” These questions need careful analysis, with technical AI experts, legal experts, economists, policymakers, and philosophers working together. And as this affects society at large, the public also needs to be represented in the discussions and decisions that are made.

Safe, predictable design of powerful systems

There are also deep technical challenges as these systems get more powerful and more complex. We have already seen unexpected behaviour from systems that weren’t carefully enough thought through – for example, the role of algorithms in the 2010 financial flash crash. It is essential that powerful AI systems don’t become black boxes operating in ways that we can’t entirely understand or predict. This will require better ways to make systems transparent and easier to verify, better security so that systems can’t be hacked, and a deeper understanding of logic and decision theory so that we can predict the behaviour of our systems in the different situations they will act in. There are open questions to be answered: can we design these powerful systems with perfect confidence that they will always do exactly what we want them to do? And if not, how do we design them with limits that guarantee only safe actions?

Shaping the development of a transformative technology

The societal and technical challenges posed by AI are hard, and will become harder the longer we wait. They will need insights and cooperation from the best minds in computer science, but also from experts in all the domains that AI will impact. But by making progress now, we will lay the foundations we need for the bigger changes that lie ahead.

Some commentators have raised the prospect of human-level general artificial intelligence. As Stephen Hawking and others have said, this would be the most transformative and potentially risky invention in human history, and will need to be approached very carefully. Luckily, according to most experts and surveys, we’re at least decades away, and possibly even centuries. But we need that time. We need to start work on today’s challenges – how to design AI so that we can understand it and control it, and how to change our societal systems so we gain the great benefits AI offers – if we’re to be remotely ready for that. We can’t assume we’ll get it right by default.

The benefits of this technology cannot be overstated. Developed correctly, AI will allow us to make better progress on the hard scientific problems we will face in coming decades, and might prove crucial to a more sustainable life for our world’s 7 billion inhabitants. It will change the world for the better – if we take the time to think and plan carefully. This is the motivation that has brought AI researchers – and experts from all the disciplines it impacts – together to sign this letter.

Elon Musk donates $10M to our research program

We are delighted to report that Elon Musk has decided to donate $10M to FLI to run a global research program aimed at keeping AI beneficial to humanity.

You can read more about the pledge here.

A sampling of the media coverage: Fast Company, Tech Crunch, Wired (also here), Mashable, Slash Gear, BostInno, Engineering & Technology, Christian Science Monitor.

AI Conference

We organized our first conference, The Future of AI: Opportunities and Challenges, Jan 2-5 in Puerto Rico. This conference brought together many of the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Most of the speakers have posted their talks.

AI Leaders Sign Open Letter

Top AI researchers from industry and academia have signed an open letter arguing that rapid progress in AI is making it timely to research not only how to make AI more capable, but also how to make it robust and beneficial.

You can read more about the open letter here.

Sample media coverage: Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, Live Science.

November 2014 Newsletter

In the News

* The winners of the essay contest we ran in partnership with the Foundational Questions Institute have been announced! Check out the awesome winning essays on the FQXi website.

* Financial Times ran a great article about artificial intelligence and the work of organizations like FLI, with thoughts from Elon Musk and Nick Bostrom.

* Stuart Russell offered a response in a featured conversation on Edge about “The Myth of AI”. Read the conversation here.

* Check out the piece in Computerworld on Elon Musk and his comments on artificial intelligence.

* The New York Times featured a fantastic article about broadening perspectives on AI, featuring Nick Bostrom, Stephen Hawking, Elon Musk, and more.

* Our colleagues at the Future of Humanity Institute attended the “Biosecurity 2030” meeting in London and had this to report:

+ About 12 projects have been stopped in the U.S. following the White House moratorium on gain-of-function research.

+ One of the major H5N1 (bird flu) research groups still has not vaccinated its researchers against H5N1, even though this seems like an obvious safety protocol.

+ The bioweapons convention has no enforcement mechanism at all, and nothing comprehensive on dual-use issues.

—————

Projects and Events

* FLI advisory board member Martin Rees gave a great talk at the Harvard Kennedy School about existential risk. Check out the profile of the event in The Harvard Crimson newspaper.

—————

Other Updates

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.

FLI launch event @ MIT

The Future of Technology: Benefits and Risks

FLI was officially launched on Saturday, May 24, 2014, at 7pm in MIT auditorium 10-250 – see the video, transcript, and photos below.

The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about what we can do now to improve the chances of reaping the benefits and avoiding the risks, moderated by Alan Alda and featuring George Church (synthetic biology), Ting Wu (personal genetics), Andrew McAfee (second machine age, economic bounty and disparity), Frank Wilczek (near-term AI and autonomous weapons) and Jaan Tallinn (long-term AI and singularity scenarios).

  • Alan Alda is an Oscar-nominated actor, writer, director, and science communicator, whose contributions range from M*A*S*H to Scientific American Frontiers.
  • George Church is a professor of genetics at Harvard Medical School, initiated the Personal Genome Project, and invented DNA array synthesizers.
  • Andrew McAfee is Associate Director of the MIT Center for Digital Business and author of the New York Times bestseller The Second Machine Age.
  • Jaan Tallinn is a founding engineer of Skype and philanthropically supports numerous research organizations aimed at reducing existential risk.
  • Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.
  • Ting Wu is a professor of Genetics at Harvard Medical School and Director of the Personal Genetics Education project.


Photos from the talk