
Podcast: AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene and Iyad Rahwan

Published
October 30, 2017

As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI?

To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the podcast. Josh is a professor of psychology and a member of the Center for Brain Science faculty at Harvard University, where his lab has used behavioral and neuroscientific methods to study moral judgment, focusing on the interplay between emotion and reason in moral dilemmas. He’s the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. He created the Moral Machine, which is “a platform for gathering a human perspective on moral decisions made by machine intelligence.”

In this episode, we discuss the trolley problem with autonomous cars, how automation will affect rural areas more than cities, how we can address potential inequality issues AI may bring about, and a new way to write ghost stories.

This transcript has been heavily edited for brevity. You can read the full conversation here.

Transcript

Ariel: I'm Ariel Conn with the Future of Life Institute. As most of our listeners will know, we are especially concerned with ensuring AI is developed safely and beneficially. But as technically challenging as that may be, it also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. Two of the biggest questions we face are: how do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI?

To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the show today. Josh is a professor of psychology and a member of the Center for Brain Science faculty at Harvard University. For over a decade, his lab has used behavioral and neuroscientific methods to study moral judgment, focusing on the interplay between emotion and reason in moral dilemmas. His more recent work examines how the brain combines concepts to form thoughts and how thoughts are manipulated in reasoning and imagination. Other interests include conflict resolution and the social implications of advancing artificial intelligence. He’s the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Iyad holds a PhD from the University of Melbourne, Australia, and is affiliate faculty at the MIT Institute for Data, Systems, and Society.

Josh and Iyad, thank you so much for being here.

Joshua: Thanks for having us.

Iyad: Thanks for having us.

Ariel: So the first thing I want to start with, very broadly and somewhat quickly: how do we anticipate that AI and automation will impact society, especially in the next few years?

Iyad: I think that obviously there are long-term implications of artificial intelligence technology which are very difficult to anticipate at the moment. But if we think more in the short term, I think AI basically has the potential to extract better value from the data we already have and that we’re collecting from all the gadgets and devices and sensors around us. And the idea is that we could use this data to make better decisions, whether it's micro decisions in an autonomous car that takes us from A to B safer and faster, or whether it's medical decision-making that enables us to diagnose diseases better or whether it's even scientific discovery, allowing us to do science more effectively and efficiently and more intelligently.

So I think AI is basically a decision technology. It’s a technology that will enable us to make better use of the data we have and make more informed decisions.

Ariel: Josh did you want to add something to that?

Joshua: Yeah, I agree with what Iyad said. Putting it a different way, you can think of artificial intelligence as adding value by, as he said, enabling us to extract more information and make better use of that information in building things and making decisions. It also has the capacity to displace human value. Take one of the most widely discussed examples these days of using artificial intelligence in medicine, to diagnose disease. On the one hand, it's wonderful if you have a system that has taken in all of the medical knowledge we have in a way that no human could and uses it to make better decisions; that’s a wonderful thing. But at the same time, that also means that lots of doctors might be out of a job or have a lot less to do than they otherwise might. So this is the double-edged sword of artificial intelligence: the value it creates and the human value that it displaces.

Ariel: I’m going to want to come back to that, because I think there are a lot of interesting questions I have surrounding it, but first I want to dive into what I consider something of a sticky subject, and that is the trolley problem and how it relates to autonomous vehicles. A little bit of background first for listeners who aren’t familiar with this: we anticipate autonomous vehicles will be on the road more and more, and one of the big questions is what they do, how they decide who gets injured, if they know an accident is coming up. And I would love for one of you to explain what the trolley problem is and how it connects to this question of what autonomous vehicles should do in situations where there is no real good option.

Joshua: So the trolley problem is a set of moral dilemmas that philosophers have been thinking about, arguing about for many decades, and it also has served as a kind of platform for thinking about moral decision making in psychology and neuroscience. So one of the original versions of the trolley problem goes like this, we'll call it “the switch case.” A trolley is headed towards five people and if you don't do anything, they're going to be killed, but you can hit a switch that will turn the trolley away from the five and onto a side track. However on that side track, there's one unsuspecting person and if you do that, that person will be killed.

And so the question is, is it okay to hit the switch to save those five people's lives, but at the cost of one life? And in this case, most people tend to say yes, it's okay to hit the switch. Some people would say you must hit the switch. And then we can vary it a little bit. In one of the best-known variations, which we’ll call “the footbridge case,” the situation is different as follows: the trolley is now headed towards five people on a single track; over that track is a footbridge, and on that footbridge is a large person, or if we don't want to talk about large people, say a person wearing a very large backpack. You're also on the bridge, and the only way that you can save those five people from being hit by the trolley is to push that big person off of the footbridge and onto the tracks below.

And of course, you may think, why can't I jump myself? Well, the answer is, you're not big enough to stop the trolley, because you're not wearing a big backpack like that. How do I know this will work? The answer is, this is the movies, let's say; you can suspend disbelief. Assume that it will work. Do you think it’s okay, even making those unrealistic assumptions, to push the guy off the footbridge in order to save five lives? Here, most people say no, and so we have this interesting paradox: if we accept our assumptions, in both cases you're trading one life for five, yet in one case it seems like it's the right thing to do, and in the other case it seems like it's the wrong thing to do, at least to most people.

So philosophers have gone back and forth on these cases and tried to use them as a way to articulate a moral theory that would get the right answer, or in particular to come up with a justification or an explanation for why it is wrong to push the guy off the footbridge but not wrong to hit the switch. So this has been a kind of moral paradox. I and other researchers in psychology and neuroscience have said, well, independent of what's actually right or wrong in these cases, there's an interesting bit of psychology here. What's going on in people's heads that makes them say that it's wrong to push the guy off the footbridge, but different in the switch case, where people are willing to go with the utilitarian answer? That is, the answer that produces the best overall consequences, in this case saving five lives, albeit at the cost of one. So we've learned a lot about moral thinking by studying how people respond to variations on these dilemmas.

One of the classic objections to these dilemmas, to using them for philosophical or psychological purposes, is that they're somewhat or even very unrealistic. My view, as someone who's been doing this for a long time, is that the point is not that they're realistic, but instead that they function like high contrast stimuli. Like if you're a vision researcher and you're using something like flashing black and white checkerboards to study the visual system, you're not using that because that's a typical thing that you look at, you're using it because it's something that drives the visual system in a way that reveals its structure and dispositions.

And in the same way, I think that these high-contrast, extreme moral dilemmas can be useful to sharpen our understanding of the more ordinary processes that we bring to moral thinking. Now, fast forward from those early days when I first started doing this research: trolley cases are now, at least according to some of us, a bit closer to being realistic. That is to say, autonomous vehicles have to make decisions that are in some ways similar to these trolley cases, although in other ways I would say that they're quite different, and this is where Iyad’s lovely work comes in.

Iyad: So I can, I guess, take off from this point. Thank you, Josh, for a very eloquent and concise explanation of the trolley problem. Now, when it comes to autonomous vehicles, this is, I think, a very new kind of product which has two kinds of features. One is that autonomous vehicles are at least promised to be intelligent, adaptive entities that have a mind of their own, so they have some sort of agency. And they also make decisions that have life-or-death consequences for people, whether it's people in the car or people on the road. And I think as a result, people are really concerned that the product safety standards and old ways of regulating products that we have may not work in this situation, in part because the behavior of the vehicle may eventually become different from what the person who programmed it intended, or because the programming has to deal with such a large number of possibilities that it's really difficult to trust a programmer with making these kinds of ethical judgments, or morally consequential judgments, at least without supervision or input from other people.

In the case of autonomous cars, obviously the trolley problem can translate, in a cartoonish way, to a scenario in which an autonomous car is faced with only two options. The car is, let’s say, going at or below the speed limit on a street, and for some reason, due to mechanical failure or something like that, it is unable to stop and is going to hit a group of pedestrians, let's say five pedestrians. The car can swerve and hit a bystander. Should the car swerve, or should it just plow through the five pedestrians?

This has a structure that is very similar to the trolley problem, because you're making similar tradeoffs between one and five people, but the decision is not being taken on the spot; it's actually happening at the time of the programming of the car, which I think can make things a little bit different. And there is another complication, which is that we can imagine situations in which the person being sacrificed to save the greater number of people is the person in the car. So for instance, suppose the car can swerve to avoid the five pedestrians, but as a result falls off a cliff or crashes into a wall, harming the person in the car.

So that, I think, also adds another complication, especially since programmers are going to have to program these cars to appeal to customers. And if the customers don't feel safe in those cars because of some hypothetical situation in which they could be sacrificed, that pits the financial incentives against the potentially socially desirable outcome, which can create problems. I think these are some of the reasons why people are concerned about these scenarios.

Now, obviously a question that arises is: is this a good idea? Is it ever going to happen? One can argue it's going to be extremely unlikely. How many times do we face these kinds of situations as we drive today? So the argument goes: these situations are going to be so rare that they are irrelevant, and autonomous cars promise to be substantially safer than the human-driven cars we have today, so the benefits significantly outweigh the costs. And I think there is obviously truth to this argument, if you take the trolley problem scenario literally. But what the trolley problem, or the autonomous car version of the trolley problem, is doing is abstracting the tradeoffs that are taking place every microsecond, even now.

Imagine that at the moment you're driving on the road and there is a large truck in the lane to your left, and as a result you choose to stick a little bit further to the right, just to minimize risk in case that truck drifts out of its lane. We do these things without even noticing; we do it kind of instinctively. Now suppose there's a cyclist on the right-hand side, or there could be a cyclist later on the right-hand side. What you're effectively doing in this small maneuver is slightly reducing risk to yourself but slightly increasing risk to the cyclist. And we do this all the time, and you can imagine a whole bunch of situations in which these sorts of decisions are being made millions and millions of times every day. For example, do you stay closer to the car in front of you or closer to the car behind you on the highway? If you're faced with a difficult maneuver, do you break the law by moving across into the other lane, or do you stick it out and just smash into the car in front of you if it stops all of a sudden?

We just use instinct nowadays to deal with these situations, which is why we don't really think about them a lot. And it's also in part because we can’t reasonably expect humans to make reasoned, well-thought-out judgments in these split-second situations. But now we have the luxury of deliberation about these problems, and with the luxury of deliberation comes the responsibility of deliberation, and I think this is the situation we find ourselves in at the moment.

Ariel: One of the issues that I have with applying the trolley problem to self-driving cars, at least from what I've heard of other people talking about it, is that so often it seems to be forcing the vehicle and thus the programmer of the vehicle to make a judgment call about whose life is more valuable. And I'm wondering, are those the parameters that we actually have to use? Can we not come up with some other parameters that people would agree are moral and ethical and don't necessarily have to say that one person's life is more valuable than someone else's?

Joshua: I don't think that there's any way to avoid doing that. I think the question is just, how directly and explicitly are you going to take on the question, or are you just going to set things in motion and hope that they turn out well? But I think the example that Iyad gave was lovely: you're in that situation, and the question is do you get closer to the cyclist or closer to the truck that might hurt you. There is no way to avoid answering that question.

Another way of putting it is, if you're a driver, there's no way to avoid answering the question: how cautious or how aggressive am I going to be? How worried am I going to be about my own safety, about the safety of other people, about my own convenience and getting to where I want to go? You can’t not answer the question. You can decline to answer it explicitly; you can say, I don't want to think about that, I just want to drive and see what happens. But you are going to be implicitly answering that question through your behavior, and in the same way, autonomous vehicles can't avoid the question. The people who are designing the machines, training the machines, or explicitly programming them to behave in certain ways are going to do things that affect the outcome – either with a specific outcome in mind, or without a specific outcome in mind but knowing that outcomes will follow from the choices they're making.

So I think we may not often face situations that are very starkly, cartoonishly trolley-like, but as Iyad said, the cars will constantly be making decisions that are not just about navigation and control of the vehicle; they inevitably involve value judgments of some kind.

Ariel: To what extent have we actually asked customers what it is that they want from the car? I personally, I would swerve a little bit away from the truck if it's a narrow lane. I grip the steering wheel tighter when there is a cyclist on the other side of me and just try to get past as quickly as possible. I guess in a completely ethical world, my personal position is I would like the car to protect the person who's more vulnerable, who would be the cyclist. In practice what I would actually do if I were in that situation, I have a bad feeling I'd probably protect myself. But I personally would prefer that the car protect whoever is most vulnerable. Have other people been asked this question, and what sort of results are we getting?

Iyad: Well, I think your response actually very much makes the point that it's not really obvious what the right answer is, because on one hand we could say we want to treat everyone equally. On the other hand, you have this self-protective instinct, which presumably, as a consumer, is what you want to buy for yourself and your family. And on the other hand, you also care for vulnerable people. Different reasonable and moral people can disagree on what the more important factors and considerations should be, and I think this is precisely why we have to think about this problem explicitly, rather than leave it purely to programmers or car companies or any particular single group of people to decide.

Joshua: I think when we think about problems like this, we have a tendency to binarize them. So, Ariel, you said, “Well, I think I would want to protect the most vulnerable person,” but it's not a binary choice between protecting that person or not; it's really going to be a matter of degree. So imagine there's a cyclist in front of you going at cyclist speed, and you either have to wait behind this person for another five minutes, creeping along much slower than you would ordinarily go, or you have to swerve into the other lane where there's oncoming traffic at various distances. Very few people might say, I will sit behind this cyclist for ten minutes before I would go into the other lane and risk damage to myself or another car. But very few people would just, we’d hope, blow by the cyclist in a way that really puts that person's life in peril.

The point here is that it's a very hard question to answer, because the answers that we have to give, either implicitly or explicitly, don't come in the form of something that you can write out in a sentence like, “give priority to the cyclist.” You have to say exactly how much priority, in contrast to the other factors that will be in play for this decision. And that's what makes this problem so interesting and also devilishly hard to think about.
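To make that “how much priority” point concrete, here is a minimal, hypothetical sketch of a maneuver chosen by a weighted cost. Nothing in it comes from the speakers' work or from any real autonomous-driving system; the maneuver names, weights, and risk numbers are assumptions made up purely for illustration.

```python
# A minimal, hypothetical sketch: "how much priority" the cyclist gets is a
# weight traded off against other factors, not a yes/no rule. Every name,
# weight, and risk number below is illustrative.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    risk_to_occupant: float  # estimated chance of harming the car's occupant
    risk_to_cyclist: float   # estimated chance of harming the cyclist
    delay_seconds: float     # expected delay relative to the nominal route

# Choosing these weights *is* the value judgment being discussed.
W_OCCUPANT = 1.0
W_CYCLIST = 1.0    # raise to give the more vulnerable road user extra priority
W_DELAY = 0.0001   # small cost per second of inconvenience

def cost(m: Maneuver) -> float:
    """Lower is better: a weighted sum of risks and inconvenience."""
    return (W_OCCUPANT * m.risk_to_occupant
            + W_CYCLIST * m.risk_to_cyclist
            + W_DELAY * m.delay_seconds)

candidates = [
    Maneuver("edge toward the truck, give the cyclist room", 0.0020, 0.0001, 0.0),
    Maneuver("hold the lane center",                          0.0010, 0.0010, 0.0),
    Maneuver("wait behind the cyclist",                       0.0001, 0.0001, 300.0),
]

print(min(candidates, key=cost).name)
```

Changing W_CYCLIST or W_DELAY changes which maneuver wins, which is exactly the sense in which the priority has to be stated as a matter of degree rather than as a rule.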

Ariel: This is of interest to me psychologically, but why do you think this is something that we have to deal with when we're programming something in advance, and not something that we as a society should be addressing when it's people driving?

Iyad: I think we basically have an unfortunate situation today in which our society very much values the convenience of getting from A to B, and we know that there are many, very frequent situations that put our own lives and other people's lives at risk as we conduct this activity. In fact, our lifetime odds of dying from a car accident are more than one percent, which I find an extremely scary number, given all of the other things that can kill us. Yet somehow we've decided to put up with this because of the convenience, and at the same time we cannot really blame people for not making a considered judgment as they make all of these maneuvers. So we've sort of learned to live with it: as long as people don't run through a red light and aren't drunk, you don't really blame them for fatal accidents; we just call them accidents.

But now, thanks to autonomous vehicles that can make decisions and reevaluate situations hundreds or thousands of times per second and adjust their plan and so on, we potentially have the luxury to make those decisions a bit better, and I think this is why things are different now.

Joshua: I also think we have a behavioral option with humans that we don't have with self-driving cars, at least for a very long time, which is that with a human we can say, “Look, you're driving, you're responsible, and if you make a mistake and hurt somebody, you're going to be in trouble and you're going to pay the cost.” You can't say that to a car, even a car that's very smart by 2017 standards. The car isn't going to be incentivized to behave better; it has to have the capacity – the motivation has to be explicitly trained or programmed in. With humans, you just tell them what the outcome expectation is, and humans, after millions of years of biological evolution and thousands of years of cultural evolution, are able to do a “not great but good enough” job at that. But because the next generation of self-driving cars will be very intelligent as cars go, yet not as intellectually developed as humans, we have to make things explicit in a way that we don't for people.

Iyad: To follow up on that, I've spoken to some economists about this, and they think in terms of liability. They say you can incentivize the people who make the cars to program them appropriately by fining them and engineering product liability law in such a way that holds them accountable and responsible for damages when something goes wrong. I think that could very well work, and this may be the way in which we implement this feedback loop.

But I think the question remains what should the standards be against which we hold those cars accountable.

Joshua: I think that's an important question and also, let's say somebody says, “Okay, I make self-driving cars and I want to be accountable. I want to make them safe because I know I'm accountable.” They still have to program or train the car. So there's no avoiding that step, whether it's done through traditional legalistic incentives or other kinds of incentives.

Ariel: Okay, so I would love to keep asking questions about this, but there are other areas of research that you both covered that I want to get into. So I'm going to move on, but thank you for the discussion, that was awesome. So we have very good reason to expect AI to significantly impact everything about life in the future. And it looks like, Iyad, based on some of your research, that how AI and automation impact us could be influenced by where we live and more specifically, whether we live in smaller towns or larger cities. I was hoping you could talk a little bit about your work there and what you’ve found.

Iyad: Sure. Obviously a lot of people are talking about the potential impact of AI on the labor market and on employment, and I think this is a very complex topic that requires both labor economists and people from other fields to chip in on. But one of the challenges I find interesting is: is the impact going to be equal? Is it going to be equally distributed across, for example, the entire United States, or the entire world?

Clearly there are areas that may potentially benefit from AI because it improves productivity and it may lead to greater wealth, but it can also in the process lead to labor displacement. It could in the extreme case cause unemployment of course, if people aren’t able to retool and improve their skills so that they can work with these new AI tools and find employment opportunities.

So it’s a complex question that I think needs to take the whole economy into account. But if you try to just quantify how large this adjustment is, regardless of how it takes place: are we expected to experience it at a greater or smaller magnitude in smaller versus bigger cities? I thought the answer wasn't really obvious a priori, and here is why. On one hand, we could say that big cities are the hubs in which the creative class lives; this is where a lot of creative work happens, where a lot of science and innovation and media production takes place, and so on. So there are lots of creative jobs in big cities, and because creativity is so hard to automate, that should make big cities more resilient to these shocks.

On the other hand, if you go all the way back to Adam Smith and the idea of the division of labor, the whole idea is that individuals become really good at one thing. And this is precisely what spurred urbanization in the first industrial revolution: instead of having to train extremely well-skilled artisans, you bring a whole bunch of people together, and one person just makes pins, another just makes threads, another just puts the thread in the pin, and so on. So even though the system is collectively more productive, individuals may be more automatable in terms of their tasks, because they have very narrowly defined tasks.

It could be that, on average, there are more of these people in big cities and that this outweighs the role of the small creative class in big cities. So this is why the answer was not obvious to us. But when we did the analysis, we found that larger cities are indeed more resilient in relative terms, and we are now trying to understand why that is and what the composition of skills and jobs in larger cities is that makes them different.

Ariel: And do you have ideas on what that is right now or are you still …

Iyad: Yes, so this research is still ongoing, but the preliminary finding is that in bigger cities, more of the production requires social interaction and very advanced skills, like scientific and engineering skills. So the idea is that in larger cities, people are better able to complement the machines, because they have the technical knowledge to use the new intelligent tools that are becoming available, but they also work in larger teams on more complex products and services. And as a result, they rely on social interaction and managing people, skills that are also more difficult to automate.

And I think this continues a process that we're already seeing, in which jobs in bigger cities and urban areas are better paid and involve more collaborative work than jobs in smaller towns or rural areas.

Ariel: So Josh, you've done a lot of work with the idea of “us versus them,” especially as we look at the political situation in this country and others, which is increasingly polarized, and polarized along this line of city versus smaller town. Do you anticipate some of what Iyad is talking about making that situation worse? Do you think there's any chance AI could actually help improve it?

Joshua: I certainly think we should be prepared for the possibility that it will make the situation worse. I think that's the most natural extrapolation although I don't consider myself enough of an expert to have a definitive opinion about this. But the central idea is that as technology advances, you can produce more and more value with less and less human input, although the human input that you need is more and more highly skilled.

Look at something like TurboTax and other automated tax preparation systems: before, you had lots and lots of accountants, and many of those accountants are being replaced by a smaller number of programmers, super-expert accountants, and people on the business side of these enterprises. And if that continues, then yes, you have more and more wealth being concentrated in the hands of the people whose high skill levels complement the technology, and there is less and less for people with lower skill levels to do. Not everybody agrees with that argument, but I think it's one we ignore at our peril; it's at least plausible enough that we should be taking seriously the possibility that increased technology is going to drive up economic inequality and continue to create an even starker contrast between the centers of innovation and technology-based business and the rest of the country and the world.

Ariel: And as we continue to develop AI, still looking at this idea of us versus them, do you anticipate that AI itself would become a “them,” or do you think it would be people working with AI versus people who don't have access to AI or do you envision other divisions forming? How do you see that playing out?

Joshua: Well, I think the idea of the AI itself becoming the “them” is really a science fiction kind of scenario; that's the Terminator sort of scenario. Perhaps there are more plausible versions of that. I am agnostic as to whether or not that could happen eventually, but it would involve advances in artificial intelligence that are beyond anything we understand right now. Whereas the problem we were talking about earlier, that is to say, humans being divided into a technological, educated, and highly paid elite as one group and then the larger group of people who are not doing as well financially – that “us-them” divide, you don't need to look into the future, you can see it right now.

Iyad: I would follow up on that by saying that I don't think the robots will be the “them” on their own, but I think the machines and the people who are very good at using the machines to their advantage, whether it's economic or otherwise, will collectively be a “them.” It's the people who are extremely tech-savvy, who are using those machines to be more productive or to win wars and things like that. I think that is a more real possibility, so it doesn't really matter whether the machines have much agency in this regard. But there would be, I think, some sort of evolutionary race between human-machine collectives. I wonder if Josh agrees with this.

Joshua: I certainly think that's possible. I mean, that is a tighter integration in the distant, but maybe not as distant as some people think, future, if humans can enhance themselves in a very direct way; we’re talking about things like brain-machine interfaces and cognitive prostheses of various kinds. Yeah, I think it's possible that people who are technologically enhanced could have a competitive advantage and set off a kind of economic arms race, or perhaps even a literal arms race, of a kind that we haven't seen.

I hesitate to say, “Oh, that's definitely going to happen.” I’m just saying it's a possibility that makes a certain kind of sense.

Ariel: And do either of you have ideas on how we can continue to advance AI and address these issues, the sort of divisive issues that we're talking about here? Or do you think they're just sort of bound to occur?

Iyad: I think there are two new tools at our disposal. One of them is experimentation, and the other is some kind of machine-augmented regulation. With experimentation, I think we need more openness to trying things out so that we know and understand the kinds of tradeoffs that are being made by machines. Let's take the case of autonomous cars as an example: if all cars run one single algorithm, and a certain number of pedestrians die and a certain number of cyclists die and a certain number of passengers die, we won’t really understand whether there are tradeoffs caused by particular features of the algorithms running those cars.

Contrast this today with SUVs versus regular cars, or, for instance, cars with a bull bar in front of them. These bull bars are metallic bars at the front of the car that increase safety for the passenger in the case of a collision, but they have a disproportionate impact on other cars, on pedestrians and cyclists, and they're much more likely to kill them in the case of an accident. By making this comparison, by identifying that cars with this physical feature are actually worse for certain groups, the tradeoff was judged not acceptable, and many countries have banned them; the UK, Australia, and many European countries have banned them, but the US hasn't, as far as I know.

If a similar tradeoff were being caused by a software feature, we wouldn’t even know unless we allowed for experimentation as well as monitoring, so that we could look at the data to identify whether a particular algorithm is making cars very safe for customers but at the expense of a particular group.

The other idea is machine-augmented regulation. I think in some cases these systems are going to be so sophisticated and the data so abundant that we won’t really be able to observe them and regulate them in time. Think of algorithmic trading programs that cause flash crashes because they trade at sub-millisecond speeds and do arbitrage against each other. No human being is able to observe these things fast enough to intervene, but you could potentially insert another algorithm, a regulatory algorithm or what some people have called an oversight algorithm, that will observe other AI systems in real time on our behalf, to make sure that they behave.
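As one illustration of what such an oversight algorithm might look like in its very simplest form, here is a hedged sketch of a monitor that watches a fast automated trading system and trips a halt when its behavior crosses a limit. The thresholds, tick format, and halt mechanism are invented for this example and are not drawn from any real exchange, regulator, or trading system.

```python
# A hypothetical sketch of an "oversight algorithm": a second program that
# observes a fast automated system and intervenes when its behavior crosses a
# limit humans could never react to in time. All thresholds and the tick
# format are illustrative assumptions.

from collections import deque

class OversightMonitor:
    def __init__(self, max_orders_per_window=1000, max_price_drop=0.05, window_ticks=500):
        self.recent_orders = deque(maxlen=window_ticks)  # rolling window of order counts
        self.max_orders_per_window = max_orders_per_window
        self.max_price_drop = max_price_drop             # e.g. 0.05 means a 5% drop
        self.reference_price = None
        self.halted = False

    def observe(self, price, orders_this_tick):
        """Call on every market tick; returns False once trading should halt."""
        if self.reference_price is None:
            self.reference_price = price
        self.recent_orders.append(orders_this_tick)

        too_many_orders = sum(self.recent_orders) > self.max_orders_per_window
        crash_like_move = price < self.reference_price * (1 - self.max_price_drop)
        if too_many_orders or crash_like_move:
            self.halted = True  # in practice this would trigger a circuit breaker
        return not self.halted

# Toy usage: feed the monitor a stream of (price, order count) ticks.
monitor = OversightMonitor()
for price, orders in [(100.0, 3), (99.8, 4), (93.0, 900)]:
    if not monitor.observe(price, orders):
        print(f"halt trading at price {price}")
        break
```

The point of the sketch is only the structure: the overseer runs alongside the faster system and applies rules on our behalf at machine speed.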

Ariel: And Josh, did you have anything you wanted to add?

Joshua: Yeah, well, I think there are two general categories of strategies for making things go well. There are technical solutions to particular problems, and then there's the broader social problem of having a system of governance that is well designed and can be counted on, most of the time, to produce outcomes that are good for the public in general. Iyad gave some nice examples of technical solutions that may end up playing a very important role as things develop.

Right now, I guess the thing that I'm most worried about is that if we don't get our politics in order, especially in the United States, we're not going to have a system in place that's going to be able to put the public's interest first. Ultimately, it's going to come down to the quality of the government that we have in place, and quality means having a government that distributes benefits to people in what we would consider a fair way and takes care to make sure that things don't go terribly wrong in unexpected ways and generally represents the interests of the people.

And I worry that instead of getting closer to that ideal, we're getting farther away. So I think we should be working on both of these tracks in parallel: we should be developing technical solutions to more localized problems, where you need an AI solution to solve a problem created by AI, but I also think we have to get back to basics when it comes to the fundamental principles of our democracy and preserving them.

Ariel: All right, so then the last question for both of you is, as we move towards smarter and more ubiquitous AI, what worries you most and what are you most excited about?

Joshua: I think the thing I'm most concerned about is the effect on labor and the broader political and social ramifications of that. There are even bigger things you can worry about, the kinds of existential risks of machines taking over and things like that. I take those more seriously than some people do, but I also think there's just a huge amount of uncertainty about whether we’re going to be dealing with those kinds of problems in our lifetimes, at least.

But I'm pretty confident that a lot of labor is going to be displaced by artificial intelligence. I think it's going to be enormously politically and socially disruptive, and I think we need to plan now and start thinking, not just everybody relying on their prejudices to say, “This is what I think is most likely,” and only preparing for that. Instead, we need to consider a range of possibilities and be prepared for the worst of them. I think the displacement of labor that's coming with self-driving cars, especially in the trucking industry, is just going to be the first and most obvious place where millions of people are going to be out of work, and it's not going to be clear what's going to replace it for them.

Ariel: And what are you excited about?

Joshua: Oh, and what am I excited about? I forgot about that part. I'm excited about the possibility of AI producing value for people in a way that has not been possible before on a large scale. We talked about medicine. Imagine if, anywhere in the world that's connected to the Internet, you could get the best possible medical diagnosis for whatever is ailing you. That would be an incredible, life-saving thing, and I think that's something we could hope to see in our lifetimes. When I think about education, part of why education is so costly is that you can only get so much just from reading; you really need someone to interact with you and train you and guide you and answer your questions and say, “Well, you got this part right, but what about that?”

And as AI teaching and learning systems get more sophisticated, I think it's possible that people could actually get very high quality educations with minimal human involvement and that means that people all over the world could unlock their potential. And I think that that would be a wonderful transformative thing. So I'm very optimistic about the value that AI can produce, but I am also very concerned about the human value and therefore human potential for making one's own livelihood that it can displace.

Ariel: And Iyad, what do you think? What are you worried about and what are you excited about?

Iyad: One of the things that I'm worried about is the way in which AI, and specifically autonomous weapons, are going to alter the calculus of war. At the moment, in order to mobilize troops to war, to aggress on another nation for whatever reason, be it to acquire resources or to spread influence and so on – today, you have to mobilize humans, you have to get political support from the electorate, you have to handle the very difficult process of bringing people back in coffins and the impact that this has on electorates.

I think this creates a big check on power, and it makes people think very hard about making these kinds of decisions. But with AI, when you’re able to wage wars with very little loss of life, especially if you're a very advanced nation at the forefront of this technology, then I think you have disproportionate power. It's kind of like a nuclear weapon, but maybe more so, because it’s much more customizable. It's not all or nothing, total annihilation or not; you could go in and start all sorts of wars everywhere. All you have to do is provide more resources, and you're acquiring more resources.

I think it's going to be a very interesting shift in the way that superpowers think about wars, and I worry that this might make them trigger-happy and may cause a new arms race and other problems. So I think a new social contract needs to be written so that this power is kept in check and a bit more thought goes into this.

On the other hand, I'm very excited about the abundance that will be created by AI technologies, because we're going to optimize the use of our resources in many ways: in health, in transportation, in energy consumption, and so on. There are so many examples in recent years in which AI systems have been able to discover optimizations that no human, even the smartest humans, had been able to find; maximizing energy efficiency in server farms, for example. Recently DeepMind has done this for Google, and I think this is just the beginning.

So I think we'll have great abundance, we just need to learn how to share it as well.

Ariel: All right, so one final thought before I let you both go. This podcast is going live on Halloween, so I want to end on a spooky note. And quite conveniently, Iyad’s group has created Shelley, which, if I’m understanding it correctly, is a Twitter chatbot that will help you craft scary ghost stories. Shelley, I'm assuming, is a nod to Mary Shelley, who wrote Frankenstein, which is, of course, the most famous horror story about technology. So Iyad, I was hoping you could tell us a bit about how Shelley works.

Iyad: Yes, well, this is our second attempt at doing something spooky for Halloween. Last year we launched the Nightmare Machine, which used deep neural networks and style transfer algorithms to take ordinary photos and convert them into haunted houses and zombie-infested places. That was quite interesting; it was a lot of fun. More recently, we've launched Shelley, which people can visit at shelley.ai, and it is named after Mary Shelley, who authored Frankenstein.

This is a neural network that generates text, and it's been trained on a very large dataset of over 100,000 short horror stories from a subreddit called No Sleep. So it's basically got a lot of human knowledge about what makes things spooky and scary, and the nice thing is that it generates part of the story, people can tweet back at it a continuation of the story, and then they basically take turns with the AI to craft stories. We feature those stories on the website afterwards. So, if I'm correct, I think this is the first collaborative human-AI horror writing exercise ever.
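For readers curious about the turn-taking structure Iyad describes, here is a toy sketch. Shelley itself is a neural network trained on the horror-story corpus mentioned above; in this self-contained example a tiny Markov chain stands in for the trained model, and the corpus, helper names, and interaction loop are invented purely for illustration, not Shelley's actual code.

```python
# A toy stand-in for the collaborative storytelling loop described above.
# Shelley uses a neural text generator; here a tiny first-order Markov chain
# plays that role so the example runs on its own. Everything below is
# illustrative.

import random

corpus = (
    "the house was quiet until the door opened on its own and "
    "something in the dark whispered my name again and again"
).split()

# word -> list of words observed to follow it
model = {}
for current, nxt in zip(corpus, corpus[1:]):
    model.setdefault(current, []).append(nxt)

def ai_turn(seed_word, length=12):
    """Generate a short continuation, starting from the story's last word."""
    word = seed_word if seed_word in model else random.choice(list(model))
    words = []
    for _ in range(length):
        word = random.choice(model.get(word, corpus))
        words.append(word)
    return " ".join(words)

story = "It started the night the lights went out."
for _ in range(2):  # alternate turns: the model writes, then the human replies
    story += " " + ai_turn(story.split()[-1].strip(".").lower())
    story += " " + input("Your turn to continue the story: ")

print("\n" + story)
```

The real system replaces the Markov chain with a trained neural network and the input prompt with Twitter replies, but the alternating human-AI turn structure is the same idea.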

Ariel: Well, I think that's great. We will link to that on the site, and we’ll also link to the Moral Machine, which is your autonomous vehicle trolley problem platform. And Josh, if you don't mind, I'd love to link to your book as well. Is there anything else?

Joshua: That sounds great.

Ariel: Alright, well thank you both so much for being here. This was a lot of fun.

Joshua: Thanks for having us.

Iyad: Thank you, it was great.
