April 2015 Newsletter

In the News

* The MIT Technology Review recently published a compelling overview of the possibilities surrounding AI, featuring Nick Bostrom’s Superintelligence and our open letter on AI research priorities.

+ For more news on our open letter, check out a thoughtful piece in Slate written by a colleague at the Future of Humanity Institute.

* FLI co-founder Meia Chita-Tegmark wrote a piece in the Huffington Post on public perceptions of AI and what it means for AI risk and research.

* Both Microsoft founder Bill Gates and Apple co-founder Steve Wozniak have recently joined the ranks of many AI experts in expressing concern about the possible outcomes of superintelligent AI.

——————

Projects and Events

* We received nearly 300 applications for our global research program funded by Elon Musk! Thanks to hard work by a team of expert reviewers, we have now invited roughly a quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

* Looking for the latest in x-risk news? Check out our just-launched news site, featuring blog posts and articles written by x-risk researchers, journalists and FLI volunteers!

* On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area. We endorse the CWG statement on the Creation of Potential Pandemic Pathogens – click here to read it.

AI grant results

We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the response was overwhelming: about 300 applications for a total of about $100M, including a great diversity of awesome teams and projects from around the world. Thanks to hard work by a team of expert reviewers, we’ve now invited roughly the strongest quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

Assorted Sunday Links #2

Some links from the last few weeks—and some from the last few days—on what’s been happening in the world of existential risk:

1. The Open Philanthropy Project, a growing philanthropic force in the field of Global Catastrophic Risks, posts a summary of their work on GCRs over the last year, and their plans for the future. “Our new goal is to be in the late stages of making at least one ‘big bet’ – a major grant ($5+ million) or full-time hire – in the next six months.”

2. Elon Musk discusses the Future of Life, artificial intelligence, and more in an hourlong interview with physicist Neil deGrasse Tyson. An entertaining listen!

3. Sam Altman, President of tech accelerator Y Combinator, joins other recent Silicon Valley gurus in sharing his concerns about the risks posed by machine intelligence. He suggests some next steps in his recent blog post on the topic.

4. Prof Bill McGuire of the UCL Hazard Research Centre discusses how we should prepare for volcanic catastrophes which could be headed our way.

5. Robert de Neufville of the Global Catastrophic Risk Institute posts his monthly GCR News Summary for February. Watch out for the recap of March coming up next week!

Wozniak concerned about AI

Steve Wozniak, without whom I wouldn’t be typing this on a Mac, has now joined the growing group of tech pioneers (most recently his erstwhile arch-rival Bill Gates) who feel that we shouldn’t dismiss concerns about future AI developments. Interestingly, he says that he had long dismissed the idea that machine intelligence might outstrip human capability within decades, but that recent progress has caused him to start considering this possibility:

http://www.afr.com/technology/apple-cofounder-steve-wozniak-on-the-apple-watch-electric-cars-and-the-surpassing-of-humanity-20150323-1m3xxk

MIRI’s New Technical Research Agenda

Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI), a research institute devoted to studying the technical challenges of ensuring desirable behavior from highly advanced AI agents, including those capable of recursive self-improvement. In this guest blog post, he delves into MIRI’s new research agenda.

MIRI’s current research agenda — summarized in “Aligning Superintelligence with Human Interests” — is focused on technical research problems that must be solved in order to eventually build smarter-than-human AI systems that are reliably aligned with human interests: How can we create an AI agent that will reliably pursue the goals it is given? How can we formally specify beneficial goals? How can we ensure that this agent will assist and cooperate with its programmers as they improve its design, given that mistakes in the initial version are inevitable?

Within this broad area of research, MIRI specializes in problems which have three properties:

(a) We focus on research questions that cannot be delegated to future human-level AI systems (HLAIs). HLAIs will have the incentives and capability to improve e.g. their own machine vision algorithms, but if an HLAI’s preferences themselves are mis-specified, it may never have an incentive to “fix” the mis-specification itself.

(b) We focus on research questions that are tractable today. In the absence of concrete HLAI designs to test and verify, research on these problems must be theoretical and exploratory, but such research should be technical whenever possible so that clear progress can be shown, e.g. by discovering unbounded formal solutions for problems we currently don’t know how to solve even given unlimited computational resources. Such exploratory work is somewhat akin to the toy models Butler Lampson used to study covert channel communication two decades before covert channels were observed in the wild, and is also somewhat akin to quantum algorithms research long before any large-scale quantum computer is built.

(c) We focus on research questions that are uncrowded. Research on e.g. formal verification for near-future AI systems already receives significant funding, whereas MIRI’s chosen research problems otherwise receive limited attention.

Example research problems we will study include:

(1) Corrigibility. How can we build an advanced agent that cooperates with what its creators regard as a corrective intervention in its design, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences? (A toy illustration of this default incentive appears after this list.)

(2) Value learning. Direct specification of broad human preferences in an advanced agent’s reward/value function is impractical. How can we build an advanced AI that will safely learn to act as intended? (A second sketch below illustrates one simplified approach.)
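
To make the default incentive in (1) concrete, here is a minimal sketch in Python. It is an illustrative toy with made-up numbers, not MIRI’s formalism: a naive expected-reward maximizer compares leaving its off-switch alone with paying a small cost to disable it, and prefers to disable it, because being shut down forfeits all future reward.

    # Toy model of the shutdown problem: a naive expected-reward maximizer
    # has a default incentive to disable its own off-switch.
    # All quantities below are illustrative, not drawn from MIRI's agenda.

    def expected_reward(policy, p_shutdown=0.5, future_reward=10.0, disable_cost=1.0):
        if policy == "comply":
            # Leave the switch alone: with probability p_shutdown the agent
            # is switched off and earns nothing further.
            return (1 - p_shutdown) * future_reward
        if policy == "resist":
            # Disable the switch at a small cost and keep collecting reward.
            return future_reward - disable_cost
        raise ValueError(f"unknown policy: {policy}")

    for policy in ("comply", "resist"):
        print(policy, expected_reward(policy))
    # comply 5.0, resist 9.0 -- the maximizer prefers to resist shutdown,
    # which is exactly the incentive corrigibility research aims to remove.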

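One way to picture what “learning to act as intended” could look like in miniature is to fit a reward function from human preference comparisons instead of hand-coding it. The sketch below is hypothetical and makes strong simplifying assumptions (a hidden reward that is linear in known features, and noiseless preferences); it is not MIRI’s proposed method, just an illustration of the general idea using a Bradley-Terry preference model.

    # Hypothetical sketch of preference-based value learning: recover the
    # direction of a hidden reward vector from pairwise comparisons
    # ("option a preferred to option b"), assuming reward is linear in features.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0, 0.5])      # hidden "human values" (illustrative)

    def features():
        return rng.normal(size=3)             # feature vector of a random option

    # Simulated preference data: the human always prefers the higher-reward option.
    comparisons = []
    for _ in range(500):
        a, b = features(), features()
        comparisons.append((a, b) if true_w @ a > true_w @ b else (b, a))

    # Fit w by gradient ascent on the Bradley-Terry log-likelihood,
    # P(a preferred to b) = sigmoid(w . (f(a) - f(b))).
    w = np.zeros(3)
    for _ in range(100):
        grad = np.zeros(3)
        for a, b in comparisons:
            d = a - b
            s = 1.0 / (1.0 + np.exp(-np.clip(w @ d, -60.0, 60.0)))
            grad += d * (1.0 - s)
        w += 0.01 * grad

    # The learned direction approximates the hidden one.
    print(np.round(w / np.linalg.norm(w), 2))
    print(np.round(true_w / np.linalg.norm(true_w), 2))
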
Assorted Sunday Links #1

1. Robert de Neufville of the Global Catastrophic Risk Institute summarizes news from January in the world of Global Catastrophic Risks.

2. The Union of Concerned Scientists posts their nuclear threat-themed Cartoon of the Month.

3. The World Economic Forum releases their comprehensive Global Risks 2015 report.

4. Physics Today reports that ‘The US could save $70 billion over the next 10 years by taking “common sense” measures to trim its nuclear forces, yet still deploy the maximum number of warheads permitted under the New START Treaty, according to a new report by the Arms Control Association. Those steps include cutting the number of proposed new ballistic missile submarines to eight from 12, delaying plans to build new nuclear-capable bombers, scaling back the upgrade of a nuclear bomb, and forgoing development of a new intercontinental ballistic missile system.’

January 2015 Newsletter

In the News

* Top AI researchers from industry and academia have signed an FLI-organized open letter arguing for timely research to make AI more robust and beneficial. Check out our research priorities and supporters on our website.

+ The open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

* We are delighted to report that Elon Musk has donated $10 million to FLI to create a global research program aimed at keeping AI beneficial to humanity. Read more about the program on our website.

+ You can find more media coverage of the donation at Fast Company, Tech Crunch, WIRED, Mashable, Slash Gear, and BostInno.

—————

Projects and Events

* FLI recently organized its first-ever conference, entitled “The Future of AI: Opportunities and Challenges.” The conference took place on January 2-5 in Puerto Rico, and brought together top AI researchers, industry leaders, and experts in economics, law, and ethics to discuss the future of AI. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Many of the speakers have posted their talks, which can be found on our website.

* The application for research funds opens Thursday, January 22. Grants are available to AI researchers and to AI-related research involving other fields such as economics, law, ethics and policy. You can find the application on our website.

—————

Other Updates

* We are happy to announce that Francesca Rossi has joined our scientific advisory board! Francesca Rossi is a professor of computer science, with research interests in artificial intelligence. She is the president of the International Joint Conference on Artificial Intelligence (IJCAI), as well as the associate editor in chief for the Journal of AI Research (JAIR). You can find our entire advisory board on our website.

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.

The Future of Artificial Intelligence

Seán Ó hÉigeartaigh is the Executive Director of the Centre for the Study of Existential Risk, based at the University of Cambridge.

Artificial intelligence leaders in academia and industry, and legal, economic and risk experts worldwide recently signed an open letter calling for the robust and beneficial development of artificial intelligence. The letter follows a recent private conference organised by the Future of Life Institute and funded by FLI and CSER co-founder Jaan Tallinn, in which the future opportunities and societal challenges posed by artificial intelligence were explored by AI leaders and interdisciplinary researchers.

The conference resulted in a set of research priorities aimed at making progress on the technical, legal, and economic challenges posed by this rapidly developing field.

This conference, the research preceding it, and the support for the concerns raised in the letter, may make this a pivotal moment in the development of this transformative field. But why is this happening now?

Why now?

An exciting new wave of progress in artificial intelligence is happening due to the success of a set of new approaches – “hot” areas include deep learning and other statistical learning methods. Advances in related fields like probability, decision theory, neuroscience and control theory are also contributing. These have kick-started rapid improvements on problems where progress has been very slow until now: image and speech recognition, perception and movement in robotics, and performance of autonomous vehicles are just a few examples. As a result, impacts on society that seemed far away now suddenly seem pressing.

Is society ready for the opportunities – and challenges – of AI?

Artificial intelligence is a general purpose technology – one that will affect the development of a lot of different technologies. As a result, it will affect society deeply and in a lot of different ways. The near- and long-term benefits will be great – it will increase the world’s economic prosperity, and enhance our ability to make progress on many important problems. In particular, any area where progress depends on analyzing and using huge amounts of data – climate change, health research, biotechnology – could be accelerated.

However, even impacts that are positive in the long-run can cause a lot of near-term challenges. What happens when swathes of the labour market become automated?  Can our legal systems assign blame when there is an accident involving a self-driving car? Does the use of autonomous weapons in war conflict with basic human rights?

It’s no longer enough to ask “can we build it?” Now that it looks like we can, we have to ask: “How can we build it to provide the most benefit? And how must we update our own systems – legal, economic, ethical – so that the transition is smooth, and we make the most of the positives while minimizing the negatives?” These questions need careful analysis, with technical AI experts, legal experts, economists, policymakers, and philosophers working together. And as this affects society at large, the public also needs to be represented in the discussions and decisions that are made.

Safe, predictable design of powerful systems

There are also deep technical challenges as these systems get more powerful and more complex. We have already seen unexpected behaviour from systems that weren’t carefully enough thought through – for example, the role of algorithms in the 2010 financial flash crash. It is essential that powerful AI systems don’t become black boxes operating in ways that we can’t entirely understand or predict. This will require better ways to make systems transparent and easier to verify, better security so that systems can’t be hacked, and a deeper understanding of logic and decision theory so that we predict the behaviour of our systems in the different situations they will act in. There are open questions to be answered: can we design these powerful systems with perfect confidence that they will always do exactly what we want them to do? And if not, how do we design them with limits that guarantee only safe actions?

Shaping the development of a transformative technology

The societal and technical challenges posed by AI are hard, and will become harder the longer we wait. They will need insights and cooperation from the best minds in computer science, but also from experts in all the domains that AI will impact. But by making progress now, we will lay the foundations we need for the bigger changes that lie ahead.

Some commentators have raised the prospect of human-level general artificial intelligence. As Stephen Hawking and others have said, this would be the most transformative and potentially risky invention in human history, and will need to be approached very carefully. Luckily, we’re decades away at least, according to most experts and surveys, and possibly even centuries. But we need that time. We need to start work on today’s challenges – how to design AI so that we can understand it and control it, and how to change our societal systems so we gain the great benefits AI offers – if we’re to be remotely ready for that. We can’t assume we’ll get it right by default.

The benefits of this technology cannot be overstated. Developed correctly, AI will allow us to make better progress on the hard scientific problems we will face in coming decades, and might prove crucial to a more sustainable life for our world’s 7 billion inhabitants. It will change the world for the better – if we take the time to think and plan carefully. This is the motivation that has brought AI researchers – and experts from all the disciplines it impacts – together to sign this letter.

Feeding Everyone No Matter What

Dr David Denkenberger is a research associate at the Global Catastrophic Risk Institute, and is the co-author of Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, published this year by Academic Press. In a guest post for the FLI blog, he summarizes the motivation for, and results behind, his work.

Mass human starvation is currently likely if global agricultural production is dramatically reduced for several years following a global catastrophe: e.g. super volcanic eruption, asteroid or comet impact, nuclear winter, abrupt climate change, super weed, super crop pathogen, super bacterium, or super crop pest. Even worse, such a catastrophe may cause the collapse of civilization, and recovery is not guaranteed. Therefore, this could affect many future generations.

The primary historic solution developed over the last several decades is increased food storage. However, storing up enough food to feed everyone would take a significant amount of time and would increase the price of food, killing additional people due to inadequate global access to affordable food. Humanity is far from doomed, however, in these situations – there are solutions.

In our new book Feeding Everyone No Matter What, we present a scientific approach to the practicalities of planning for long-term interruption to food production. The book provides an order of magnitude technical analysis comparing food requirements of all humans for five years with conversion of existing vegetation and fossil fuels to edible food. It presents mechanisms for global-scale conversion including: natural gas-digesting bacteria, extracting food from leaves, and conversion of fiber by enzymes, mushroom or bacteria growth, or a two-step process involving partial decomposition of fiber by fungi and/or bacteria and feeding them to animals such as beetles, ruminants (cows, deer, etc), rats and chickens. It includes an analysis to determine the ramp rates for each option and the results show that careful planning and global cooperation could feed everyone and preserve the bulk of biodiversity even in the most extreme circumstances.
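
To give a sense of the scale such an order of magnitude analysis must work at, here is a back-of-the-envelope estimate of the caloric requirement alone. The sketch below uses illustrative assumptions (roughly 7 billion people and about 2100 kcal per person per day), not figures taken from the book.

    # Back-of-the-envelope estimate of total human food-energy needs over five
    # years, the kind of quantity the book compares against alternative food
    # sources. Population and per-capita figures are illustrative assumptions.
    population = 7e9                   # people
    kcal_per_person_per_day = 2100     # rough per-capita requirement
    years = 5

    total_kcal = population * kcal_per_person_per_day * 365 * years
    total_joules = total_kcal * 4184   # 1 kcal = 4184 J

    print(f"{total_kcal:.1e} kcal over {years} years (~{total_joules:.1e} J)")
    # Roughly 2.7e16 kcal, on the order of 1e20 joules.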

The book also discusses options that may work on the household level. It encourages scientists and laypeople to perform alternate food growing and eating experiments, and to allow everyone to learn from them on http://www.appropedia.org/Feeding_Everyone_No_Matter_What.

Elon Musk donates $10M to our research program

We are delighted to report that Elon Musk has decided to donate $10M to FLI to run a global research program aimed at keeping AI beneficial to humanity.

You can read more about the pledge here.

A sampling of the media coverage: Fast Company, Tech Crunch, Wired (also here), Mashable, Slash Gear, BostInno, Engineering & Technology, Christian Science Monitor.

AI Conference

We organized our first conference, The Future of AI: Opportunities and Challenges, Jan 2-5 in Puerto Rico. This conference brought together many of the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Most of the speakers have posted their talks.

AI Leaders Sign Open Letter

Top AI researchers from industry and academia have signed an open letter arguing that rapid progress in AI is making it timely to research not only how to make AI more capable, but also how to make it robust and beneficial.

You can read more about the open letter here.

Sample media coverage: Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, Live Science.

November 2014 Newsletter

In the News

* The winners of the essay contest we ran in partnership with the Foundational Questions Institute have been announced! Check out the awesome winning essays on the FQXi website.

* Financial Times ran a great article about artificial intelligence and the work of organizations like FLI, with thoughts from Elon Musk and Nick Bostrom.

* Stuart Russell offered a response in a featured conversation on Edge about “The Myth of AI”. Read the conversation here.

* Check out the piece in Computer World on Elon Musk and his comments on artificial intelligence.

* The New York Times featured a fantastic article about broadening perspectives on AI, featuring Nick Bostrom, Stephen Hawking, Elon Musk, and more.

* Our colleagues at the Future of Humanity Institute attended the “Biosecurity 2030” meeting in London and had this to report:

+ About 12 projects have been stopped in the U.S. following the White House moratorium on gain-of-function research.

+ One of the major H5N1 (bird flu) research groups still has not vaccinated its researchers against H5N1, even though this seems like an obvious safety protocol.

+ The bioweapons convention has no enforcement mechanism at all, and nothing comprehensive on dual-use issues.

—————

Projects and Events

* FLI advisory board member Martin Rees gave a great talk at the Harvard Kennedy School about existential risk. Check out the profile of the event in The Harvard Crimson newspaper.

—————

Other Updates

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.

Martin Rees: Catastrophic Risks: The Downsides of Advancing Technology

This event was held Thursday, November 6, 2014 in Harvard auditorium Jefferson Hall 250.

Our Earth is 45 million centuries old. But this century is the first when one species – ours – can determine the biosphere’s fate. Threats from the collective “footprint” of 9 billion people seeking food, resources and energy are widely discussed. But less well studied is the potential vulnerability of our globally-linked society to the unintended consequences of powerful technologies – not only nuclear, but (even more) biotech, advanced AI, geo-engineering and so forth. More information here.

Nick Bostrom: Superintelligence — Paths, Dangers, Strategies

This event was held Thursday, September 4th, 2014 in Harvard auditorium Emerson 105.

What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? In his new book – Superintelligence: Paths, Dangers, Strategies – Professor Bostrom explores these questions, laying the foundation for understanding the future of humanity and intelligent life.

Photos from the talk

Max Tegmark: “Ask Max Anything” on Reddit

 This event was held Wednesday, August 20th, 2014, in the “IAmA” subreddit on reddit.com. Read it here!

Max Tegmark answers the questions of reddit.com’s user base! Questions are on the subject of his book “Our Mathematical Universe”, physics, x-risks, AI safety, and AI research.

FLI launch event @ MIT

The Future of Technology: Benefits and Risks

FLI was officially launched Saturday May 24, 2014 at 7pm in MIT auditorium 10-250 – see the video, transcript and photos below.

The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about what we can do now to improve the chances of reaping the benefits and avoiding the risks, moderated by Alan Alda and featuring George Church (synthetic biology), Ting Wu (personal genetics), Andrew McAfee (second machine age, economic bounty and disparity), Frank Wilczek (near-term AI and autonomous weapons) and Jaan Tallinn (long-term AI and singularity scenarios).

  • Alan Alda is an Oscar-nominated actor, writer, director, and science communicator, whose contributions range from M*A*S*H to Scientific American Frontiers.
  • George Church is a professor of genetics at Harvard Medical School, initiated the Personal Genome Project, and invented DNA array synthesizers.
  • Andrew McAfee is Associate Director of the MIT Center for Digital Business and author of the New York Times bestseller The Second Machine Age.
  • Jaan Tallinn is a founding engineer of Skype and philanthropically supports numerous research organizations aimed at reducing existential risk.
  • Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.
  • Ting Wu is a professor of Genetics at Harvard Medical School and Director of the Personal Genetics Education project.

 

Photos from the talk