
2015: An Amazing Year in Review

Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…

 

In the beginning


Participants and attendees of the inaugural Puerto Rico conference.

2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.

At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.

 

Forging ahead

On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.

On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.


All five FLI founders flanked by other beneficial-AI enthusiasts. From left to right, top to bottom: Stuart Russell, Jaan Tallinn, Janos Kramar, Anthony Aguirre, Max Tegmark, Nick Bostrom, Murray Shanahan, Jesse Galef, Michael Vassar, Nate Soares, Viktoriya Krakovna, Meia Chita-Tegmark and Katja Grace.

Less than a month later, we published another open letter, this time advocating for a global ban on offensive autonomous weapons development. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah garnered more support and signatories by engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute), and six past presidents of the AAAI, as well as by over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.

This was followed by an open letter about the economic impacts of AI, spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy, including specific policy suggestions to ensure positive economic impact.

By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.

Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.

 

Other major events

Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.

At the start of March, Max spoke at the Helen Caldicott Nuclear Weapons Conference about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled, “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, at MIT Effective Altruism, at the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.

In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director Nate Soares participated in a panel discussion about the risks and policy implications of AI (video here).


Concerns about autonomous weapons led to an open letter calling for a ban.

Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.

Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay Area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings, which aimed to bring researchers and FLI grant awardees together to help strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.

September saw FLI and CSER co-organize an event at the Policy Exchange in London, where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety with the scientifically minded in Westminster, including many British members of parliament.

Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member, Stephen Hawking, released his answers to the Reddit “Ask Me Anything” (AMA) about artificial intelligence.

Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research to determine which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.

Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events.  The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.

 


Richard Mallah speaking at the third panel discussion of the NIPS symposium.

In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.

 

In the Press

We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:

The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

You can find more media coverage of Elon Musk’s donation at Fast Company, TechCrunch, WIRED, Mashable, Slash Gear, and BostInno.

Max, along with our Scientific Advisory Board member Stuart Russell and Eric Horvitz from Microsoft, was interviewed on NPR’s Science Friday about AI safety.

Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.

Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.

Throughout the year, Victoria was interviewed by Popular Science, Engineering and Technology Magazine, Boston Magazine and Blog Talk Radio.

Meia Chita-Tegmark wrote five articles for the Huffington Post about artificial intelligence, including a Halloween story about nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two about artificial intelligence.

In addition, we had a few extra special articles on our new website:

Nobel Prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.

On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.

 

A New Beginning

2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!

Happy New Year!


What’s so exciting about AI? Conversations at the Nobel Week Dialogue

Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence.” The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the World?

Although challenges in developing AI and concerns about human-computer interaction were both expressed, in the celebratory spirit of the Nobel Prize, let’s focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, the executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter-ego, with which humans could comfortably share data and preferences, and which would intelligently use this to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, and through it enhance people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a schizophrenic healthcare system where experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information by (intelligently) deciding what information to share with whom and when for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more-goal directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind.”

FLI November Newsletter

News Site Launch
We are excited to present our new xrisk news site! With improved layout and design, it aims to provide you with daily technology news relevant to the long-term future of civilization, covering both opportunities and risks. This will, of course, include news about the projects we and our partner organizations are involved in to help prevent these risks. We’re also developing a section of the site that will provide more background information about the major risks, as well as what people can do to help reduce them and keep society flourishing.
Reducing Risk of Nuclear War

Some investments in nuclear weapons systems might increase the risk of accidental nuclear war and are arguably done primarily for profit rather than national security. Illuminating these financial drivers provides another opportunity to reduce the risk of nuclear war. FLI is pleased to support financial research about who invests in and profits from the production of new nuclear weapons systems, with the aim of drawing attention to and stigmatizing such productions.

On November 12, Don’t Bank on the Bomb released its 2015 report on European financial institutions that have committed to divesting from companies involved in the manufacture of nuclear weapons. The report also highlights financial groups that have made positive steps toward divestment, and it provides a detailed list of companies that are still heavily invested in nuclear weapons. With the Cold War long over, many people don’t realize that the risk of nuclear war still persists and that many experts believe it to be increasing. Here is FLI’s assessment of and position on the nuclear weapons situation.

In case you missed it…
Here are some other interesting things we and our partners have done in the last few months:
  • On September 1, FLI and CSER co-organized an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety in front of a veritable who’s who of the scientifically minded in Westminster, including many British members of parliament.
  • Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety.
  • Stephen Hawking answered the AMA questions about artificial intelligence.
  • Our co-founder Meia Chita-Tegmark wrote a spooky Halloween op-ed, featured on the Huffington Post, about the man who saved the world from nuclear apocalypse in 1962.
  • Nobel Prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars.
  • FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists.
  • And two of our partner organizations have published their newsletters. The Machine Intelligence Research Institute (MIRI) published October and November newsletters, and the Global Catastrophic Risk Institute released newsletters in September and October.

AI safety conference in Puerto Rico

The Future of AI: Opportunities and Challenges 

This conference brought together the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls (see this open letter and this list of research priorities). To facilitate candid and constructive discussions, there was no media present, and the meeting was held under Chatham House Rules: nobody’s talks or statements will be shared without their permission.
Where? San Juan, Puerto Rico
When? Arrive by evening of Friday January 2, depart after lunch on Monday January 5 (see program below)

 


Scientific organizing committee:

  • Erik Brynjolfsson, MIT, Professor at the MIT Sloan School of Management, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
  • Demis Hassabis, Founder, DeepMind
  • Eric Horvitz, Microsoft, co-chair of the AAAI presidential panel on long-term AI futures
  • Shane Legg, Founder, DeepMind
  • Peter Norvig, Google, Director of Research, co-author of the standard textbook Artificial Intelligence: A Modern Approach.
  • Francesca Rossi, Univ. Padova, Professor of Computer Science, President of the International Joint Conference on Artificial Intelligence
  • Stuart Russell, UC Berkeley, Professor of Computer Science, Director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: A Modern Approach.
  • Bart Selman, Cornell University, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures
  • Murray Shanahan, Imperial College, Professor of Cognitive Robotics
  • Mustafa Suleyman, Founder, DeepMind
  • Max Tegmark, MIT, Professor of physics, author of Our Mathematical Universe

Local Organizers:
Anthony Aguirre, Meia Chita-Tegmark, Viktoriya Krakovna, Janos Kramar, Richard Mallah, Max Tegmark, Susan Young

Support: Funding and organizational support for this conference is provided by Skype founder Jaan Tallinn, the Future of Life Institute, and the Center for the Study of Existential Risk.

PROGRAM

Friday January 2:
1600-late: Registration open
1930-2130: Welcome reception (Las Olas Terrace)

Saturday January 3:
0800-0900: Breakfast
0900-1200: Overview (one review talk on each of the four conference themes)
• Welcome
• Ryan Calo (Univ. Washington): AI and the law
• Erik Brynjolfsson (MIT): AI and economics (pdf)
• Richard Sutton (Alberta): Creating human-level AI: how and when? (pdf)
• Stuart Russell (Berkeley): The long-term future of (artificial) intelligence (pdf)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Optimizing the economic impact of AI
(A typical 3-hour session consists of a few 20-minute talks followed by a discussion panel where the panelists who haven’t already given talks get to give brief introductory remarks before the general discussion ensues.)
What can we do now to maximize the chances of reaping the economic bounty from AI while minimizing unwanted side-effects on the labor market?
Speakers:
• Andrew McAfee, MIT (pdf)
• James Manyika, McKinsey (pdf)
• Michael Osborne, Oxford (pdf)
Panelists include Ajay Agrawal (Toronto), Erik Brynjolfsson (MIT), Robin Hanson (GMU), Scott Phoenix (Vicarious)
1900: dinner

Sunday January 4:
0800-0900: Breakfast
0900-1200: Creating human-level AI: how and when?
Short talks followed by panel discussion: will it happen, and if so, when? Via engineered solution, whole brain emulation, or other means? (We defer until the 4pm session questions regarding what will happen, about whether machines will have goals, about ethics, etc.)
Speakers:
• Demis Hassabis, Google/DeepMind
• Dileep George, Vicarious (pdf)
• Tom Mitchell, CMU (pdf)
Panelists include Joscha Bach (MIT), Francesca Rossi (Padova), Richard Mallah (Cambridge Semantics), Richard Sutton (Alberta)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Intelligence explosion: science or fiction?
If an intelligence explosion happens, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? Containment problem? Is “friendly AI” possible? Feasible? Likely to happen?
Speakers:
• Nick Bostrom, Oxford (pdf)
• Bart Selman, Cornell (pdf)
• Jaan Tallinn, Skype founder (pdf)
• Elon Musk, SpaceX, Tesla Motors
Panelists include Shane Legg (Google/DeepMind), Murray Shanahan (Imperial), Vernor Vinge (San Diego), Eliezer Yudkowsky (MIRI)
1930: banquet (outside by beach)

Monday January 5:
0800-0900: Breakfast
0900-1200: Law & ethics: Improving the legal framework for autonomous systems
How should legislation be improved to best protect the AI industry and consumers? If self-driving cars cut the 32,000 annual US traffic fatalities in half, the car makers won’t get 16,000 thank-you notes, but 16,000 lawsuits. How can we ensure that autonomous systems do what we want? And who should be held liable if things go wrong? How should we tackle criminal AI? What about AI ethics, and the ethical and legal framework for AI in military and financial systems?
Speakers:
• Joshua Greene, Harvard (pdf)
• Heather Roff Perkins, Univ. Denver (pdf)
• David Vladeck, Georgetown
Panelists include Ryan Calo (Univ. Washington), Tom Dietterich (Oregon State, AAAI president), Kent Walker (General Counsel, Google)
1200: Lunch, depart

PARTICIPANTS
You’ll find a list of participants and their bios here.


Back row, from left to right: Tom Mitchell, Seán Ó hÉigeartaigh, Huw Price, Shamil Chandaria, Jaan Tallinn, Stuart Russell, Bill Hibbard, Blaise Agüera y Arcas, Anders Sandberg, Daniel Dewey, Stuart Armstrong, Luke Muehlhauser, Tom Dietterich, Michael Osborne, James Manyika, Ajay Agrawal, Richard Mallah, Nancy Chang, Matthew Putman
Other standing, left to right: Marilyn Thompson, Rich Sutton, Alex Wissner-Gross, Sam Teller, Toby Ord, Joscha Bach, Katja Grace, Adrian Weller, Heather Roff-Perkins, Dileep George, Shane Legg, Demis Hassabis, Wendell Wallach, Charina Choi, Ilya Sutskever, Kent Walker, Cecilia Tilli, Nick Bostrom, Erik Brynjolfsson, Steve Crossan, Mustafa Suleyman, Scott Phoenix, Neil Jacobstein, Murray Shanahan, Robin Hanson, Francesca Rossi, Nate Soares, Elon Musk, Andrew McAfee, Bart Selman, Michele Reilly, Aaron VanDevender, Max Tegmark, Margaret Boden, Joshua Greene, Paul Christiano, Eliezer Yudkowsky, David Parkes, Laurent Orseau, JB Straubel, James Moor, Sean Legassick, Mason Hartman, Howie Lempel, David Vladeck, Jacob Steinhardt, Michael Vassar, Ryan Calo, Susan Young, Owain Evans, Riva-Melissa Tez, János Kramár, Geoff Anders, Vernor Vinge, Anthony Aguirre
Seated: Sam Harris, Tomaso Poggio, Marin Soljačić, Viktoriya Krakovna, Meia Chita-Tegmark
Behind the camera: Anthony Aguirre (and also photoshopped in by the human-level intelligence on his left)
Click here for a full-resolution version.

Hawking Reddit AMA on AI

Our Scientific Advisory Board member Stephen Hawking’s long-awaited Reddit AMA answers on Artificial Intelligence just came out, and they were all over today’s world news, including MSNBC, Huffington Post, The Independent and Time.

Read the Q&A below and visit the official Reddit page for the full discussion:

Question 1:

Professor Hawking- Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Answer 1:

You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

Question 2:

Hello Doctor Hawking, thank you for doing this AMA. I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?

Answer 2:

The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

Question 3:

Hello, Prof. Hawking. Thanks for doing this AMA! Earlier this year you, Elon Musk, and many other prominent science figures signed an open letter warning the society about the potential pitfalls of Artificial Intelligence. The letter stated: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” While being a seemingly reasonable expectation, this statement serves as a start point for the debate around the possibility of Artificial Intelligence ever surpassing the human race in intelligence.
My questions: 1. One might think it impossible for a creature to ever acquire a higher intelligence than its creator. Do you agree? If yes, then how do you think artificial intelligence can ever pose a threat to the human race (their creators)? 2. If it was possible for artificial intelligence to surpass humans in intelligence, where would you define the line of “It’s enough”? In other words, how smart do you think the human race can make AI, while ensuring that it doesn’t surpass them in intelligence?

Answer 3:

It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

Question 4:

I’m rather late to the question-asking party, but I’ll ask anyway and hope. Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done? Thank you for your time and your contributions. I’ve found research to be a largely social endeavor, and you’ve been an inspiration to so many.

Answer 4:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Question 5:

Hello Professor Hawking, thank you for doing this AMA! I’ve thought lately about biological organisms’ will to survive and reproduce, and how that drive evolved over millions of generations. Would an AI have these basic drives, and if not, would it be a threat to humankind? Also, what are two books you think every person should read?

Answer 5:

An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

Question 6:

Thanks for doing this AMA. I am a biologist. Your fear of AI appears to stem from the assumption that AI will act like a new biological species competing for the same resources or otherwise transforming the planet in ways incompatible with human (or other) life. But the reason that biological species compete like this is because they have undergone billions of years of selection for high reproduction. Essentially, biological organisms are optimized to ‘take over’ as much as they can. It’s basically their ‘purpose’. But I don’t think this is necessarily true of an AI. There is no reason to surmise that AI creatures would be ‘interested’ in reproducing at all. I don’t know what they’d be ‘interested’ in doing. I am interested in what you think an AI would be ‘interested’ in doing, and why that is necessarily a threat to humankind that outweighs the benefits of creating a sort of benevolent God.

Answer 6:

You’re right that we need to avoid the temptation to anthropomorphize and assume that AI’s will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

Future of Life Institute Summer 2015 Newsletter

TOP DEVELOPMENTS

* $7M in AI research grants announced: We were delighted to announce the selection of 37 AI safety research teams, to which we plan to award a total of $7 million in funding. The grant program is funded by Elon Musk and the Open Philanthropy Project.

Max Tegmark, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, was interviewed on NPR’s On Point Radio for a lively discussion about our new AI safety research program.

* Open letter about autonomous weapons: FLI recently published an open letter advocating a global ban on offensive autonomous weapons development. Thousands of prominent scientists and concerned individuals are signatories, including Stephen Hawking, Elon Musk, the team at DeepMind, Yann LeCun (Director of AI Research, Facebook), Eric Horvitz (Managing Director, Microsoft Research), Noam Chomsky and Steve Wozniak.

Stuart Russell was interviewed about the letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video).

* Open letter about economic impacts of AI: Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders have launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

 

EVENTS

* ITIF AI policy panel: Stuart Russell and MIRI Executive Director Nate Soares participated in a panel discussion about the risks and policy implications of AI (video here). The panel was hosted by the Information Technology & Innovation Foundation (ITIF), a Washington-based think tank focusing on the intersection of public policy & emerging technology.

* IJCAI 15: Stuart Russell presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

* EA Global conferences: FLI co-founders Viktoriya Krakovna and Anthony Aguirre spoke at the Effective Altruism Global (EA Global) conference at Google headquarters in Mountain View, California. FLI co-founder Jaan Tallinn spoke at the EA Global Oxford conference on August 28-30.

* Stephen Hawking AMA: Professor Hawking is hosting an “Ask Me Anything” (AMA) conversation on Reddit. Users recently submitted questions here; his answers will follow in the near future.

 

OTHER UPDATES

* FLI anniversary video: FLI co-founder Meia Chita-Tegmark created an anniversary video highlighting our accomplishments from our first year.

* Future of AI FAQ: We’ve created a FAQ about the future of AI, which elaborates on the position expressed in our first open letter about AI development from January.

AI safety research on NPR

I just had the pleasure of discussing our new AI safety research program on National Public Radio. I was fortunate to be joined by two of the winners of our grants competition: CMU roboticist Manuela Veloso and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.

Happy Birthday, FLI!

Today we are celebrating one year since our launch event. It’s been an amazing year, full of wonderful accomplishments, and we would like to express our gratitude to all those who supported us with their advice, hard work and resources. Thank you – and let’s make this year even better!

Here’s a video with some of the highlights of our first year. You’ll find many familiar faces here, perhaps including your own!

Jaan Tallinn on existential risks

An excellent piece about existential risks by FLI co-founder Jaan Tallinn on Edge.org:

“The reasons why I’m engaged in trying to lower the existential risks has to do with the fact that I’m a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about — in the pallet of actions that you have — what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn’t make a significant difference in these areas.”

From the introduction by Max Tegmark:

“Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning.”

AI grant results

We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the response was overwhelming: about 300 applications for a total of about $100M, including a great diversity of awesome teams and projects from around the world. Thanks to hard work by a team of expert reviewers, we’ve now invited roughly the strongest quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.

January 2015 Newsletter

In the News

* Top AI researchers from industry and academia have signed an FLI-organized open letter arguing for timely research to make AI more robust and beneficial. Check out our research priorities and supporters on our website.

+ The open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

* We are delighted to report that Elon Musk has donated $10 million to FLI to create a global research program aimed at keeping AI beneficial to humanity. Read more about the program on our website.

+ You can find more media coverage of the donation at Fast Company, TechCrunch, WIRED, Mashable, Slash Gear, and BostInno.

—————————————————

Projects and Events

* FLI recently organized its first-ever conference, entitled “The Future of AI: Opportunities and Challenges.” The conference took place on January 2-5 in Puerto Rico, and brought together top AI researchers, industry leaders, and experts in economics, law, and ethics to discuss the future of AI. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Many of the speakers have posted their talks, which can be found on our website.

* The application for research funds opens Thursday, January 22. Grants are available to AI researchers and to AI-related research involving other fields such as economics, law, ethics and policy. You can find the application on our website.

—————————————————

Other Updates

* We are happy to announce that Francesca Rossi has joined our Scientific Advisory Board! Francesca Rossi is a professor of computer science, with research interests in artificial intelligence. She is the president of the International Joint Conference on Artificial Intelligence (IJCAI), as well as the associate editor in chief of the Journal of Artificial Intelligence Research (JAIR). You can find our entire advisory board on our website.

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.

Elon Musk donates $10M to our research program

We are delighted to report that Elon Musk has decided to donate $10M to FLI to run a global research program aimed at keeping AI beneficial to humanity.

You can read more about the pledge here.

A sampling of the media coverage: Fast Company, Tech Crunch, Wired (also here), Mashable, Slash Gear, BostInno, Engineering & Technology, Christian Science Monitor.

AI Conference

We organized our first conference, The Future of AI: Opportunities and Challenges, Jan 2-5 in Puerto Rico. This conference brought together many of the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Most of the speakers have posted their talks.

November 2014 Newsletter

In the News

* The winners of the essay contest we ran in partnership with the Foundational Questions Institute have been announced! Check out the awesome winning essays on the FQXi website.

* Financial Times ran a great article about artificial intelligence and the work of organizations like FLI, with thoughts from Elon Musk and Nick Bostrom.

* Stuart Russell offered a response in a featured conversation on Edge about “The Myth of AI”. Read the conversation here.

* Check out the piece in Computer World on Elon Musk and his comments on artificial intelligence.

* The New York Times featured a fantastic article about broadening perspectives on AI, featuring Nick Bostrom, Stephen Hawking, Elon Musk, and more.

* Our colleagues at the Future of Humanity Institute attended the “Biosecurity 2030” meeting in London and had this to report:

+ About 12 projects have been stopped in the U.S. following the White House moratorium on gain-of-function research.

+ One of the major H5N1 (bird flu) research groups still has not vaccinated its researchers against H5N1, even though this seems like an obvious safety protocol.

+ The bioweapons convention has no enforcement mechanism at all, and nothing comprehensive on dual-use issues.

—————

Projects and Events

* FLI advisory board member Martin Rees gave a great talk at the Harvard Kennedy School about existential risk. Check out the profile of the event in The Harvard Crimson newspaper.

—————

Other Updates

* Follow and like our social media accounts and ask us questions! We are “Future of Life Institute” on Facebook and @FLIxrisk on Twitter.

FLI launch event @ MIT

The Future of Technology: Benefits and Risks

FLI was officially launched Saturday May 24, 2014 at 7pm in MIT auditorium 10-250 – see the video, transcript and photos below.

The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about what we can do now to improve the chances of reaping the benefits and avoiding the risks, moderated by Alan Alda and featuring George Church (synthetic biology), Ting Wu (personal genetics), Andrew McAfee (second machine age, economic bounty and disparity), Frank Wilczek (near-term AI and autonomous weapons) and Jaan Tallinn (long-term AI and singularity scenarios).

  • Alan Alda is an Oscar-nominated actor, writer, director, and science communicator, whose contributions range from M*A*S*H to Scientific American Frontiers.
  • George Church is a professor of genetics at Harvard Medical School, initiated the Personal Genome Project, and invented DNA array synthesizers.
  • Andrew McAfee is Associate Director of the MIT Center for Digital Business and author of the New York Times bestseller The Second Machine Age.
  • Jaan Tallinn is a founding engineer of Skype and philanthropically supports numerous research organizations aimed at reducing existential risk.
  • Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.
  • Ting Wu is a professor of Genetics at Harvard Medical School and Director of the Personal Genetics Education project.

 

Photos from the talk