
Podcast: Balancing the Risks of Future Technologies with Andrew Maynard and Jack Stilgoe

Published
November 29, 2017

What does it mean for technology to “get it right,” and why do tech companies ignore long-term risks in their research? How can we balance near-term and long-term AI risks? And as tech companies become increasingly powerful, how can we ensure that the public has a say in determining our collective future?

To discuss how we can best prepare for societal risks, Ariel spoke with Andrew Maynard and Jack Stilgoe on this month’s podcast. Andrew directs the Risk Innovation Lab in the Arizona State University School for the Future of Innovation in Society, where his work focuses on exploring how emerging and converging technologies can be developed and used responsibly within an increasingly complex world. Jack is a senior lecturer in science and technology studies at University College London where he works on science and innovation policy with a particular interest in emerging technologies.

The following transcript has been edited for brevity.

Transcript

Ariel: I'm Ariel Conn with the Future of Life Institute. Whenever people ask me what FLI does, I explain that we try to mitigate existential risks. That is, we're basically trying to make sure that society doesn't accidentally kill itself with technology. Almost to a person, the response is, "Oh, I'm glad someone is working on that." But that seems to be about where the agreement on risk ends. Once we start getting into the details of what efforts should be made, who should do the work, how much money should be spent, suddenly people begin to develop very different opinions about how risky something is and what we should do about it. Some of the most intense debates can even come from people who agree on the ultimate risks, but not on the means of alleviating the threat.

To talk about why this happens and what, if anything, we can do to get people in more agreement, I have with me Andrew Maynard and Jack Stilgoe. Andrew directs the Risk Innovation Lab in the Arizona State University School for the Future of Innovation in Society. He's a physicist by training, and, in his words, spent more years than he cares to remember studying airborne particles. These days his work focuses on exploring how emerging and converging technologies can be developed and used responsibly within an increasingly complex world.

Jack is a senior lecturer in science and technology studies at University College London where he works on science and innovation policy with a particular interest in emerging technologies.

Jack and Andrew, thank you so much for joining us today.

Andrew: Great to be here.

Jack: Great to be here.

Ariel: Before we get into anything else, I was hoping you both could just first talk about how you define what risk is. I think the word means something very different for a first time parent versus a scientist who's developing some life-saving medical breakthrough that could have some negative side effects, versus a rock climber. So if you could just explain how you think of risk.

Andrew: Let me dive in first, seeing that not only have I studied airborne particles for more years than I care to remember, but I've also taught graduate and undergraduate students about risk for more years than I care to remember.

So the official definition of risk looks at the potential of something to cause harm, but it also looks at the probability. So typically, say you're looking at exposure to a chemical: risk is all about the hazardous nature of that chemical, its potential to cause some sort of damage to the environment or the human body, but then the exposure translates that potential into some sort of probability. That is typically how we think about risk when we're looking at regulating things.

I actually think about risk slightly differently, because that concept of risk runs out of steam really fast, especially when you're dealing with uncertainties, existential risk, and perceptions about risk when people are trying to make hard decisions and they can't work out how to make sense of the information they're getting. So I tend to think of risk as a threat to something that's important or a threat to something of value. That thing of value might be your health, it might be the environment; but it might be your job, it might be your sense of purpose or your sense of identity or your beliefs or your religion or your politics or your worldview.

As soon as we start thinking about risk in that sense, it becomes much broader, much more complex, but it also allows us to explore that intersection between different communities and their different ideas about what's important and what's worth protecting.

Ariel: Jack, did you want to add anything to that?

Jack: I have very little to add to what Andrew just said, which was a beautiful discussion of the conventional definition of risk. I would draw attention to all of those things that are incalculable. When we are dealing with new technologies, they are often things to which we cannot assign probabilities and we don't know very much about what the likely outcomes are going to be.

I think there is also a question of what isn't captured when we talk about risk. It's clear to me that when we talk about what technology does in the world, not all of the impacts of technology can be considered risks. And beyond the risks that are impossible for us to calculate, when we have new technologies we typically know almost nothing about either the probabilities of things happening or the range of possible outcomes.

I'd say that we should also pay attention to all the things that are not to do with technology going wrong, but are also to do with technology going right. So technologies don't just create new risks, they also benefit some people more than others. And they can create huge inequalities. If they're governed well, they can also help close inequalities. But if we just focus on risk, then we lose some of those other concerns as well.

Andrew: Jack, so this obviously really interests me and my work because to me an inequality is a threat to something that's important to someone. Do you have any specific examples of what you think about when you think about inequalities or equality gaps?

Jack: Well, I think before we get into examples, the important thing is to bear in mind a trend with technology, which is that technology tends to benefit the powerful. That's an overall trend before we talk about any specifics, which quite often goes against the rhetoric of technological change, because, often, technologies are sold as being emancipatory and helping the worst off in society – which they do, but typically they also help the better off even more. So there's that general question.

I think in the specific, we can talk about what sorts of technologies do close inequities and which tend to exacerbate inequities. But it seems to me that just defining that as a social risk isn't quite getting there.

Ariel: This moves into my next question, because I would consider increasing inequality to be a risk. So can you guys talk about why it's so hard to get agreement on what we actually define as a risk?

Andrew: One of the things that I find is people very quickly slip into defining risk in very convenient ways. So if you have a company or an organization that really wants to do something – and that doing something may be all the way from making a bucket load of money to changing the world in the ways they think are good – there's a tendency for them to define risk in ways that benefit them.

So, for instance, I'm going to use a hypothetical, but if you are the maker of an incredibly expensive drug, and you work out that that drug is going to be beneficial in certain ways with minimal side effects, but it's only going to be available to a very small number of very rich people, you will easily define risk in terms of the things that your drug does not do, so you can claim with confidence that this is a risk-free or a low-risk product. But that's an approach where you work out where the big risks are with your product and you bury them, and you focus on the things where you think there is not a risk with your product.

That sort of extends across many, many different areas – this tendency to bury the big risks associated with a new technology and highlight the low risks to make your tech look much better than it is so you can reach the aims that you're trying to achieve.

Jack: I quite agree, Andrew. I think what tends to happen is that the definition of risk, if you like, gets socialized as being that stuff that society's allowed to think about whereas the benefits are sort of privatized. The innovators are there to define who benefits and in what ways.

Andrew: I would agree. Though it also gets quite complex in terms of the social dialogue around that and who actually is part of those conversations and who has a say in those conversations.

So to get back to your point, Ariel, I think there are a lot of organizations and individuals that want to do what they think is the right thing. But they also want the ability to decide for themselves what the right thing is rather than listening to other people.

Ariel: How do we address that?

Andrew: It's a knotty problem, and it has its roots in how we are as people and as a society, how we've evolved. I think there are a number of ways forwards towards beginning to sort of pick apart the problem. A lot of those are associated with work that is carried out in the social sciences and even the humanities around how do you make these processes more inclusive, how do you bring more people to the table, how do you begin listening to different perspectives, different sets of values and incorporating them into decisions rather than marginalizing groups that are inconvenient.

Jack: I think that's right. I mean, it's ultimately if you regard these things as legitimately political discussions rather than just technical discussions, then the solution is to democratize them and to try to wrest control over the direction of technology away from just the innovators and to see that as the subject of proper democratic conversation.

Andrew: And there are some very practical things here. This is where Jack and I might actually diverge in our perspectives. But from a purely business sense, if you're trying to develop a new product or a new technology and get it to market, the last thing you can afford to do is ignore the nature of the population, the society that you're trying to put that technology into. Because if you do, you're going to run up against roadblocks where people decide they either don't like the tech or they don't like the way that you've made decisions around the technology or they don't like the way that you've implemented it.

So from a business perspective, taking a long-term strategy, it makes far more sense to engage with these different communities and develop a dialogue around them so you understand the nature of the landscape that you're developing a technology into. You can see ways of partnering with communities to make sure that that technology really does have a broad beneficial impact.

Ariel: Why do you think companies resist doing that? Is it just effort or are there other reasons that they would resist?

Andrew: I think we've had decades, centuries of training that says you don't ask awkward questions because they potentially lead to you not being able to do what you want to do. So it's partly the mindset or the mentality around innovation. But, also, it's hard work. It takes a lot of effort, and it actually takes quite a lot of humility as well.

Jack: There's also a dynamic, which is that there's a sort of well-defined law in technological change, which is that we overestimate the effect of technology in the short term and underestimate the effect of technology in the long term. Given that companies and innovators have to make short time horizon decisions, often they don't have the capacity to take on board these big world-changing implications of technology.

So if you look at something like the motorcar, right, it would have been inconceivable for Henry Ford to have imagined the world in which his technology would exist in 50 years' time, even though we know that the motorcar has led to the reshaping of large parts of America. It's led to an absolutely catastrophic level of public health risk while also bringing about clear benefits of mobility. But those are big long-term changes that evolve very slowly, far slower than any company could appreciate.

Andrew: So can I play devil's advocate here, Jack, and ask you a question which I'm sure you must have been asked before. With hindsight, should Henry Ford have developed his production line process differently to avoid some of the risks, or some of the impacts, that we now see from motor vehicles?

Jack: Well, I think you're right to say that with hindsight it's really hard to see what he might have done differently, because the point is the changes I was talking about are systemic ones, with responsibility shared across large parts of the system. Now, could we have done better at anticipating some of those things? Yes, I think we could have done, and I think had motorcar manufacturers talked to regulators and civil society at the time, they could have anticipated some of those things. But there are also barriers that stop innovators from anticipating; there are things that force innovators' time horizons to narrow.

Andrew: Yeah. So, actually, that's one of the points that really interests me. It's not a case of “do we, don't we” with a certain technology, but could we do things better so we see more of the longer-term benefits and fewer of the hurdles that maybe we could have avoided if we had been a little smarter from the get-go.

Ariel: How well do you think we can really anticipate that, though? When you say be a little smarter from the get-go, I'm sure there are always things we could do smarter. But how much do you think we can actually anticipate?

Andrew: Well, so the basic answer is very, very little indeed. The one thing that we know about anticipating the future is that we're always going to get it wrong. But I think that we can put plausible bounds around likely things that are going to happen. So simply from what we know about how people make decisions and the evidence around that, we know that if you ignore certain pieces of information, certain evidence, you're going to make worse decisions in terms of projecting or predicting future pathways than if you're actually open to evaluating different types of evidence.

By evidence, I'm not just meaning the scientific evidence, but I'm also thinking about what people believe or hold as valuable within society and what motivates them to do certain things and react in certain ways. All of that is important evidence in terms of getting a sense of what the boundaries are of a future trajectory.

Jack: We should remember, Andrew, that the job of anticipation is not to try to get things right or wrong. So, yes, we will always get our predictions wrong, but if anticipation is about preparing us for the future rather than predicting the future, then rightness or wrongness isn't really the target. Instead, I would draw attention to the history of cases in which there has been willful ignorance of particular perspectives or particular evidence that was only recognized later – cases, as you know better than anybody, where evidence of public health risk has been swept under the carpet. We have to look first at the sorts of incentives that prompt innovators to overlook that evidence.

Andrew: Yeah. I think that's so important. So it's worth bringing up the Late Lessons from Early Warnings reports that came out of Europe a few years ago, which were a series of case studies of technological innovations over the last 100 years or so, looking at where innovators, companies, and even regulators either missed important early warnings or, as you said, willfully ignored them, and that led to far greater adverse impacts than there should have been. I think there are a lot of lessons to be learned from those in terms of how we avoid repeating those earlier mistakes.

Ariel: So I'd like to take that and move into some more specific examples now. Jack, I know you're interested in self-driving vehicles. That was a topic that came up on the last podcast. We had a couple of psychologists talking about things like the trolley problem, and I know that's a touchy subject in the auto industry. I was curious: how do we start applying that to these new technologies that will probably be, literally, on the road soon?

Jack: Well, my own sense is that when it comes to self-driving cars, it is, as Andrew was saying earlier, extremely convenient for innovators to define risks in particular ways that suit their own ambitions. I think you see this in the way the self-driving car debate is playing out. In part, that's because the debate is a largely American one and it emanates from an American car culture.

Here in Europe, we see a very different approach to transport with a very different emerging debate. The trolley problem is the classic example of a risk issue that engineers very conveniently are able to treat as an algorithmic challenge: how do we maximize public benefits and reduce public risk? Here in Europe, where our transport systems are complicated and multimodal, and where our cities are complicated, messy things, the risks of self-driving cars start to expand pretty substantially in all sorts of dimensions.

So the sorts of concerns that I would see for the future of self-driving cars relate more to what are sometimes called second order consequences. What sorts of worlds are these technologies likely to enable? What sorts of opportunities are they likely to constrain? I think that's a far more important debate than the debate about how many lives a self-driving car will either save or take in its algorithmic decision-making.

Andrew: So I think, Jack, you have referred to the trolley problem as “trolleys and follies.” One of the things I really grapple with, and I think it's very similar to what you were saying, is that the trolley problem seems to be a false or misleading articulation of risk. It's something philosophical and hypothetical, but it doesn't seem to bear much relation to the very real challenges and opportunities we're grappling with around these technologies.

Jack: Yeah. I think that's absolutely right. It's an extremely convenient issue for engineers and philosophers to talk about amongst themselves. But what it doesn't get us is any form of democratization of a self-driving future, which I guess is my interest.

Andrew: Yes. Now, of course, the really interesting thing here is, and we've talked about this, that I get really excited about self-driving vehicle technologies, partly living here in Tempe, where Google and Uber and various other companies are testing them on the road now. But you have quite a different perspective in terms of how fast we're going with the technology and how little thought is going into the longer-term social dynamics and consequences. To put my cards fully on the table, I can't wait for better technologies in this area.

Jack: Well, without wishing to be too congenial, I am also excited about the potential of the technology. But what I know about past technology suggests that it may well end up gloriously suboptimal. Right? I'm interested in a future involving self-driving cars that might actually realize some of the enormous benefits here: the enormous benefits of, for example, bringing accessibility to people who currently can't drive, and the enormous benefits to public safety and to congestion. But making that work will not just involve a repetition of the current dynamics of technological change. I think current ownership models in the US and current modes of transport in the US just are not conducive to making that happen. So I would love to see governments taking control of this and actually making it work, in the same way that, in the past, governments have taken control of transport and built public-value transport systems out of it.

Ariel: Yeah. If governments take control of this and it's done right, what would it mean to develop this the right way, in a way we're not seeing right now from the manufacturers?

Jack: The first thing that I don't see any of within the self-driving car debate, because I just think we're at too early a stage, is an articulation of what we want from self-driving cars. We have the Google vision, the Waymo vision of the benefits of self-driving cars, which is largely about public safety. Fine. But no consideration of what it would take to get that right. I think that's going to look very different. I think to an extent Tempe is an easy case, because the roads in Arizona are extremely well organized. It's sunny, pedestrians behave themselves. But what you're not going to be able to do is take that technology and transport it to central London and expect it to do the same job.

So some understanding of desirable systems across different places is really important. That, I'm afraid, does mean sharing control between the innovators and the people who have responsibility for public safety and for public transport and for public space.

Andrew: So, to me, this is really important, because even though most people in this field and other similar fields are doing it for what they claim are future benefits and the public good, there's a huge gap between good intentions of doing the right thing and actually being able to achieve something positive for society. I think the danger is that good intentions go bad very fast if you don't have the right processes and structures in place to translate them into something that benefits society. To do that, you've got to have partnerships and engagement with the agencies and authorities that have oversight over these technologies, but also with the communities and the people that are either going to be impacted by them or benefit from them.

Jack: I think that's right. I think just letting the benefits as stated by the innovators speak for themselves hasn't worked in the past, and it won't work here. Right? We have to allow some sort of democratic discussion about that.

Ariel: All right. So we've been talking about technology that, I think, most people expect is probably coming pretty soon. Certainly, we're already starting to see testing of autonomous vehicles on the road and whatnot. I want to move further into the future to more advanced technology, looking at more advanced artificial intelligence, maybe even superintelligence. How do we address the risks associated with that when a large number of researchers don't even think this technology can be developed, or think that if it is developed, it's still hundreds of years away? How do you address these really, really big unknowns and uncertainties?

Andrew: That's a huge question. So I'm speaking here as something of a cynic of some of the projections of superintelligence. I think you've got to develop a balance between near and mid-term risks, but at the same time, work out how you take early action on trajectories so you're less likely to see the emergence of those longer-term existential risks. One of the things that actually really concerns me here is if you become too focused on some of the highly speculative existential risks, you end up missing things which could be catastrophic in a smaller sense in the near to mid-term.

So, for instance, pouring millions upon millions of dollars into solving a hypothetical problem around superintelligence and the threat to humanity sometime in the future, at the expense of looking at nearer-term things such as algorithmic bias, such as autonomous decision-making that cuts people out of the loop and a whole number of other things, is a risk balance that doesn't make sense to me. Somehow, you've got to deal with these emerging issues, but in a way which is sophisticated enough that you're not setting yourself up for problems in the future.

Jack: I completely agree, Andrew. I think getting that balance right is crucial. I agree with your assessment that that balance is far too much, at the moment, in the direction of the speculative and long-term. One of the reasons why it is, is because that's an extremely interesting set of engineering challenges. So I think the question would be on whose shoulders does the responsibility lie for acting once you recognize threats or risks like that? Typically, what you find when a community of scientists gathers to assess risks is that they frame the issue in ways that lead to scientific or technical solutions. It's telling, I think, that in the discussion about superintelligence, the answer, either in the foreground or in the background, is normally more AI not less AI. And the answer is normally to be delivered by engineers rather than to be governed by politicians.

That said, I think there's some cause for optimism if you look at the recent campaign around autonomous weapons, in that it would seem to be a clear recognition of a technologically mediated issue where the necessary action lies not with the innovators themselves but with the people who are in control of our armed forces.

Andrew: So one of the challenges here, I think, is one of control. I think you're exactly right, Jack. I should clarify that even though there is a lot of discussion around speculative existential risks, there is also a lot of action on nearer-term issues such as lethal autonomous weapons. But one of the things that has particularly struck me in conversations is the fear amongst technologists in particular of losing control over the technology and the narrative. So I've had conversations where people have said that they're really worried about the potential downsides, the potential risks of where artificial intelligence is going. But they're convinced that they can solve those problems without telling anybody else about them, and they're scared that if they tell the broader public about those risks, they'll be inhibited in doing the research and development that they really want to do.

That really comes down to control, not wanting to relinquish control over what you want to do with technology. But I think that there has got to be some relinquishment there if we're going to have responsible development of these technologies that really focuses on how they could impact people in both the short and the long term, and how, as a society, we find pathways forwards.

Ariel: Andrew, I'm really glad you brought that up. That's one that I'm not convinced by, this idea that if we tell the public what the risks are then suddenly the researchers won't be able to do the research they want. Do you see that as a real risk for researchers or do you think that's a little…

Andrew: So I think there is a risk there, but it's rather complex. Most of the time, the public actually don't care about these things. There are one or two examples – genetically modified organisms is the one that always comes up – but that is a very unique and very distinct example. Most of the time, if you talk broadly about what's happening with a new technology, people will say, “that's interesting,” and get on with their lives. So there's much less risk in talking about it than I think people realize.

The other thing, though, is even if there is a risk of people saying “hold on a minute, we don't like what's happening here,” it's better to have that feedback sooner rather than later, because the reality is people are going to find out what's happening. If they discover that you, as a company or a research agency or a scientific group, have been doing things that are dangerous and haven't been telling them about it, when they find out after the fact, people get mad. That's where things get really messy. So it's far better to engage early and often. Sometimes that does mean you're going to have to take advice and maybe change the direction you go in, but it's far better to do that earlier in the process.

Ariel: Jack, did you have anything to add there?

Jack: Nope. I fear Andrew and I are agreeing too much.

Andrew: Let me try and find something really controversial to say that you're going to scream at me about.

Jack: I think you're probably the wrong person to do that, Andrew. I think maybe we could get Elon Musk on the phone…

Andrew: Yeah, although that's interesting. So not just thinking about Elon, but you've got a whole group of people in the technology sphere here who are very clearly trying to do what they think is the right thing. They're not in it primarily for fame and money, but they're in it because they believe that something has to change to build a beneficial future.

The challenge is, if these technologists don't realize the messiness of working with people and society and they think just in terms of technological solutions, they're going to hit roadblocks that they can't get over. So this, to me, is why it's really important to have the conversations. You've got to take the risk of talking about where things are going with the broader population. And by risk, I mean you've got to risk your vision having to be pulled back a little bit so it's more successful in the long term.

Ariel: So, actually, you mentioned Elon Musk. He says a lot of things that get picked up by the media, and it's perceived as fear mongering. But I've found a lot of times – and full disclosure, he supports us – that when I go back and look at what he actually said in its complete, unedited form and taken in context, it's not usually as extreme and it seems a lot more reasonable. So I was hoping you could both touch on the impact of the media as well and how that's driving the discussion.

Jack: Well, I think it's actually less about the media, because blaming the media is always the convenient thing to do. They're the convenient target. I think the question is really about the culture in which Elon Musk sits and in which his views are received, which is extremely technologically utopian and wants to believe that there are simple technological solutions to some of our most pressing problems. In that culture, it is understandable if seemingly seductive ideas, whether about artificial intelligence or about new transport systems, are taken up. I would love there to be a more skeptical attitude, so that when those sorts of claims are made, just as when any sort of political claim is made, they are scrutinized and become the starting point for a vigorous debate about the world in which we want to live. Because I think that is exactly what is missing from our current technological discourse.

Andrew: I would also say with the media, the media is, obviously, a product of society. We are titillated by extreme, scary scenarios. The media is a medium through which that actually happens. So I work a lot with journalists, and I would say I've had very few experiences with being misrepresented or misquoted where it wasn't my fault in the first place. So I think we've got to think of two things when we think of media coverage. First of all, we've got to get smarter in how we actually communicate, and by we I mean the people that feel we've got something to say here. We've got to work out how to communicate in a way that makes sense with the journalists and the media that we're communicating through. We've also got to realize that even though we might be outraged by something we see where we think it's a misrepresentation, that usually doesn't get as much traction in society as we think it does. So we've got to be a little bit more laid back with how uptight we get about how we see things reported.

Jack: Sorry, I was just going to say I have to head off in two minutes. But if there's anything else you want me to contribute, then I should do it now.

Ariel: We can end here then. Is there anything else that you think is important to add that we haven't had a chance to discuss as much?

Jack: I don't think so. No, I think that was quite nice coverage. I'm just sorry that Andrew and I agree on so much.

Andrew: Yeah. I would actually just sort of wrap things up. So, yes, there has been a lot of agreement. But, actually, and this is an important thing, it's because most people, including people that are often portrayed as just being naysayers, are trying to ask difficult questions so we can actually build a better future through technology and through innovation in all its forms. I think it's really important to realize that just because somebody asks difficult questions doesn't mean they're trying to stop progress, but they're trying to make sure that that progress is better for everybody.

Jack: Hear, hear.

Ariel: Well, I think that sounds like a nice note to end on. Thank you both so much for joining us today.

Andrew: Thanks very much.

Jack: Thanks, Ariel.
