Project: Evaluation of Safe Development Pathways for Artificial Superintelligence
Amount Recommended: $100,000
Some experts believe that computers could eventually become much smarter than humans. They call this artificial superintelligence, or ASI. If people build ASI, it could be either very good or very bad for humanity. However, ASI is not well understood, which makes it difficult for people to act to enable good ASI and avoid bad ASI. Our project studies the ways that people could build ASI, in order to help people act more effectively. We will model the different steps that need to occur for people to build ASI. We will estimate how likely it is that these steps will occur, and when they might occur. We will also model the actions people can take, and we will calculate how much these actions would help. For example, governments may be able to require that ASI researchers build in safety measures. Our models will include both the government action and the ASI safety measures, to learn how well each of them works. This project is an important step towards making sure that humanity avoids bad ASI and, if it wishes, creates good ASI.
Artificial superintelligence (ASI) has been proposed to be a major transformative future technology, potentially resulting in either massive improvement in the human condition or existential catastrophe. However, the opportunities and risks remain poorly characterized and quantified. This reduces the effectiveness of efforts to steer ASI development towards beneficial outcomes and away from harmful outcomes. While deep uncertainty inevitably surrounds such a breakthrough future technology, significant progress can be made now using available information and methods. We propose to model the human process of developing ASI. ASI would ultimately be a human creation; modeling this process indicates the probability of various ASI outcomes and illuminates a range of ways to improve outcomes. We will characterize the development pathways that can result in beneficial or dangerous ASI outcomes. We will apply risk analysis and decision analysis methods to quantify opportunities and risks, and to evaluate opportunities to make ASI less risky and more beneficial. Specifically, we will use fault trees and influence diagrams to map out ASI development pathways and the influence that various actions have on these pathways. Our proposed project will produce the first-ever analysis of ASI development using rigorous risk and decision analysis methodology.
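As a rough illustration of the decision-analysis framing described above, the sketch below compares the probability of a bad outcome with and without a hypothetical policy action (such as a government requirement that developers build in safety measures). The function and every number in it are invented for the example; they are not estimates from the proposal.

```python
# Hypothetical decision-analysis sketch: how much does a policy action
# reduce the probability of a bad ASI outcome? All probabilities are
# illustrative placeholders, not project estimates.

def p_bad_outcome(p_asi_built, p_unsafe_given_built):
    # A bad outcome requires that ASI is built AND that it is unsafe.
    return p_asi_built * p_unsafe_given_built

# Without the policy, developers are assumed (purely for illustration)
# to produce an unsafe system half the time.
baseline = p_bad_outcome(p_asi_built=0.4, p_unsafe_given_built=0.5)

# With the policy, the assumed chance of an unsafe system drops.
with_policy = p_bad_outcome(p_asi_built=0.4, p_unsafe_given_built=0.3)

risk_reduction = baseline - with_policy
print(f"baseline={baseline:.2f}, with_policy={with_policy:.2f}, "
      f"reduction={risk_reduction:.2f}")
```

An influence diagram adds structure on top of a calculation like this one: it records which actions influence which uncertain events, so that many candidate interventions can be compared within a single model.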
Cars drive themselves down our streets. Planes fly themselves through our skies. Medical technologies diagnose illnesses, recommend treatment plans, and save lives.
Artificially intelligent systems are already among us, and they have been for some time now. However, the world has yet to see an artificial superintelligence (ASI) — a synthetic system with cognitive abilities that surpass our own across every relevant metric. But technology is progressing rapidly, and many AI researchers believe the era of artificial superintelligence may be fast approaching. If it arrives, researchers and politicians alike will have no way of predicting what happens next.
Fortunately, a number of individuals are already working to ensure that the rise of this artificial superintelligence doesn’t precipitate the fall of humanity.
When Baum discusses his work, he outlines GCRI’s mission with a matter-of-fact tone that, considering the monumental nature of the project, is more than a little jarring. “All of our work is about keeping the world safe,” Baum notes, and he continues by explaining that GCRI focuses on a host of threats that put the survival of our species in peril. From climate change to nuclear war, from extraterrestrial intelligence to artificial intelligence — GCRI covers it all.
When it comes to artificial intelligence, GCRI has several initiatives. However, their main AI project, which received funding from the Future of Life Institute, centers on the risks associated with artificial superintelligences. Or, as Baum puts it, they do “risk analysis for computers taking over the world and killing everyone.” Specifically, Baum stated that GCRI is working on “developing structured risk models to help people understand what the risks might be and, also, where some of the best opportunities to reduce this risk are located.”
Unsurprisingly, the task is not an easy one.
The fundamental problem stems from the fact that, unlike more common threats, such as the risk of dying in a car accident or the risk of getting cancer, researchers working on ASI risk analysis don’t have solid case studies to use when making their models and predictions. As Baum states, “Computers have never taken over the world and killed everyone before. That means we can’t just look at the data, which is what we do for a lot of other risks. And not only has this never happened before, the technology doesn’t even exist yet. And if it is built, we’re not sure how it would be built.”
So, how can researchers determine the risks posed by an artificial superintelligence if they don’t know exactly what that intelligence will look like and they have no real data to work with?
Luckily, when it comes to artificial superintelligences, AI experts aren’t totally in the realm of the unknown. Baum asserts that there are some ideas and a bit of relevant evidence, but these things are scattered. To address this issue, Baum and his team create models. They take what information is available, structure it, and then distribute the result in an organized fashion so that researchers can better understand the topic, the factors that may influence the outcome of the issue at hand, and, ultimately, the various risks associated with ASI.
For example, when attempting to figure out how easy it is to design an AI so that it acts safely, one of the sub-questions that needs to be modeled is whether humans will be able to observe and test the AI before it gets out of control. In other words, can AI researchers recognize that an AI has a dangerous design and shut it down? To model this scenario and determine the risks and most likely outcomes, Baum and his team take the available information (the perspectives and opinions of AI researchers, what is already known about AI technology and how it functions, and so on) and structure it, along with any uncertainty in the arguments or data sets.
This kind of modeling and risk analysis ultimately allows the team to better understand the scope of the issue and, by structuring the information in a clear way, advance an ongoing conversation in the superintelligence research community. The modeling doesn’t give us a complete picture of what will happen, but it does allow us to better understand the risks that we’re facing when it comes to the rise of ASI, what events and outcomes are likely, as well as the specific steps that policy makers and AI researchers should take to ensure that ASI benefits humanity.
Of course, when it comes to the risks of artificial superintelligences, whether or not we will be able to observe and test our AI is just one small part of a much larger model.
Modeling a Catastrophe
In order to understand what it would take to bring about the ASI apocalypse, and how we could possibly prevent it, Baum and his team have created a model that investigates the following questions from a number of vantage points:
Step 1: Is it possible to build an artificial superintelligence?
Step 2: Will humans build the superintelligence?
Step 3: Will humans lose control of the superintelligence?
The first half of the model centers on the nuts and bolts of how an ASI could be built. The second half covers the risk analysis related to the creation of a harmful ASI and looks at the following:
Step 1: Will humans design an artificial superintelligence that is harmful?
Step 2: Will the superintelligence develop harmful behavior on its own?
Step 3: Is there something deterring the superintelligence from acting in a way that is harmful (such as another AI or some human action)?
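The six steps above compose like the gates of a fault tree: a catastrophe requires every stage of the development pathway to occur, together with at least one source of harm and a failure of deterrence. A minimal sketch of that composition follows, assuming (purely for illustration) that the events are independent; every probability below is a placeholder, not one of GCRI's estimates.

```python
# Hypothetical fault-tree sketch for the six-step ASI catastrophe model.
# Event probabilities are illustrative placeholders only.

def and_gate(*probs):
    """Probability that all independent events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Probability that at least one independent event occurs."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# First half: the development pathway (all must occur).
p_buildable    = 0.5   # placeholder: ASI is possible to build
p_built        = 0.6   # placeholder: humans build it
p_uncontrolled = 0.3   # placeholder: humans lose control of it

# Second half: sources of harm (either suffices) and failed deterrence.
p_bad_design   = 0.2   # placeholder: harmful by design
p_bad_emergent = 0.25  # placeholder: harmful behavior develops on its own
p_undeterred   = 0.7   # placeholder: nothing (another AI, human action) deters it

p_catastrophe = and_gate(
    p_buildable, p_built, p_uncontrolled,
    or_gate(p_bad_design, p_bad_emergent),
    p_undeterred,
)
print(f"{p_catastrophe:.4f}")
```

In the real model the events are neither independent nor single point estimates; the value of the structure is that it factors an otherwise intractable question into steps that can each be argued about and estimated separately.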
Each step in this series models a number of different possibilities to reveal the various risks that we face and how significant, and probable, these threats are. Although the model is still being refined, Baum says that substantial progress has already been made. “The risk is starting to make sense. I’m starting to see exactly what it would take to see this type of catastrophe,” Baum said. Yet he is quick to clarify that the research is still too young to say much definitively: “Those of us who study superintelligence and all the risks and policy aspects of it, we’re not exactly sure what policy we would want right now. What’s happening right now is more of a general-purpose conversation on AI. It’s one that recognizes the fact that AI is more than just a technological and economic opportunity and that there are risks involved and difficult ethical issues.”
Ultimately, Baum hopes that these conversations, when coupled with the understanding that comes from the models that he is currently developing alongside his team, will allow GCRI to better prepare policy makers and scientists alike for the rise of a new kind of (super)intelligence.
This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.
Pistono, F. and Yampolskiy, R.V. Unethical research: How to create a malevolent artificial intelligence. 25th International Joint Conference on Artificial Intelligence (IJCAI-16), Ethics for Artificial Intelligence Workshop (AI-Ethics-2016).
Yampolskiy, R.V. Taxonomy of pathways to dangerous AI. 30th AAAI Conference on Artificial Intelligence (AAAI-2016), 2nd International Workshop on AI, Ethics and Society (AI Ethics Society 2016).
Seth Baum, Anthony Barrett, and Roman Yampolskiy presented their research at the 2015 Society for Risk Analysis Annual Meeting.
Seth Baum organized several informal meetings on AI safety with attendees from (among other places) CSER, FHI, MIRI, Yale, and the United Nations at the International Joint Conference on Artificial Intelligence.
Ongoing Projects/Recent Progress
Seth Baum, Anthony Barrett, and Roman Yampolskiy – Goal: to apply their model to analyze a debate between Ben Goertzel and Nick Bostrom, two longstanding superintelligence thought leaders. These researchers take the perspective of an observer who is uncertain whose argument is correct and model the terms of the debate and the uncertainty about the arguments. This project advances our overall research agenda while contributing to an important current debate in the superintelligence research community.
Seth Baum, Anthony Barrett, and Roman Yampolskiy – Goal: to expand our core model and the surrounding discussion. The document containing this work is currently approximately 100 pages. These researchers plan to expand and refine this document over the duration of the project for eventual publication as a book. The book would be the first of its kind on superintelligence risk analysis, laying out their vision for the subject and their foundational research on it.