
Podcast: Law and Ethics of Artificial Intelligence

Published
31 March, 2017

The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences group at California Polytechnic State University, where he studies the ethics of technology.

In this podcast, we discuss accountability and transparency with autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.

This podcast and transcript were edited by Tucker Davey.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Transcript

Ariel: I'm Ariel Conn with the Future of Life Institute. Today I'm joined by Matt Scherer and Ryan Jenkins to discuss some of the legal and ethical issues facing AI, especially regarding things like autonomous weapons and self-driving cars. Matt is an attorney and legal scholar based in Portland, Oregon, whose scholarship focuses on the intersection between law and artificial intelligence. He maintains an occasional blog on the subject at lawandai.com, and he also practices employment law at Buchanan Angeli, Altschul & Sullivan LLP.

Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences group at California Polytechnic State University in San Luis Obispo, California. He studies the ethics of technology, like cyberwar, autonomous weapons, driverless cars, and algorithms. Matt and Ryan, thank you both for joining me today.

Ryan: Yeah, thank you.

Matt: Glad to be here.

Ariel: Great. So I typically think of ethics as the driving force behind law, which may or may not be true, but as such, I wanted to start with you, Ryan. I was hoping you could talk a little bit about some of the big ethical issues you think are facing us today when it comes to artificial intelligence.

Ryan: Yeah, so I think the relationship between ethics and law is complicated, and I think that the missions of the two of them run in parallel. And I think very broadly speaking, the mission of both ethics and law might be to discover how to best structure life within a community and to see to it that that community does flourish once we know those certain truths. So ethics does some of the investigation about what kinds of things matter morally, what kinds of lives are valuable, how should we treat other people, and how should we ourselves live. Law does an excellent job of codifying those things and enforcing those things.

But there's really an interplay between the two. I think that we see law replying to or responding to ethical arguments, and we see ethicists certainly prodded on in their mission by some of the things that lawyers and legal scholars say, too. It's a sort of give and take. It's a reciprocal relationship.

In terms of artificial intelligence, well I think that we're undergoing a pretty significant shift in a lot of different areas of society, and we're already employing artificial intelligence in a lot of spheres of human activity that are morally important, spheres that are morally laden, which is to say they have consequences for people's lives that really significantly matter. One of the easiest ways of telling whether a decision is a moral decision is whether it concerns the distribution of benefits and burdens or whether it stands to make some people better off and some people worse off. And we're seeing that take place right now with artificial intelligence. That adds a lot of new wrinkles to these decisions because oftentimes the decisions of AI are inscrutable. They're opaque to us, they're difficult to understand, they might be totally mysterious. And while we're fascinated by what AI can do, I think oftentimes the developers of AI have gotten out ahead of their skis, to borrow a phrase from former Vice President Joe Biden, and have implemented some of these technologies before we fully understand what they're capable of and what they're actually doing and how they're making decisions. That seems problematic. That's just one of the reasons why ethicists have been concerned about the development and the deployment of artificial intelligence.

Ariel: And can you give some examples of that?

Ryan: Yeah, absolutely. There was an excellent piece by ProPublica, an investigative piece, I think last year, about bias in the criminal justice system, where courts use so-called risk assessment algorithms to judge, for example, a person's probability of re-committing a crime after they're released from prison.

A couple companies produce algorithms that take in several data points, over a hundred data points, and then spit out an estimate. They literally predict this person's future behavior. It's like something out of Minority Report. And they try to guess, say after a defendant has been convicted of a crime, how likely that defendant is to commit another crime. Then they turn this information over to the judge, and the judge can incorporate this kind of verdict along with other things, other pieces of information, and their own human judgment and intuition, to make a judgment, for example, of how long this person should serve in prison, what kind of prison they should serve in, what their bail should be set at, or what kind of parole they should be subject to, that kind of thing.

And ProPublica did an audit of this software, and they found that it's actually pretty troublingly unreliable. Not only does it make mistakes about half the time - they said it was slightly better than a coin flip at predicting whether someone would re-commit a crime. Slightly better than a coin flip, but interestingly and most troublingly, it made different kinds of mistakes when it came to white defendants and black defendants. So it was systematically underestimating the threat from white defendants and systematically overestimating the threat to society from black defendants. Now what this means is that white defendants were being given more lenient sentences or being let off early, while black defendants as a group were being given harsher or longer sentences. And this is really tremendously worrisome.

And when they were asked about this, when the company that produced the algorithm was asked about this, they said ‘well look, it takes in something like 137 factors, but race is not one of them’. Now if we just had the artificial intelligence check a box that said ‘oh by the way, what's the race of this defendant’, that would clearly raise some pretty significant red flags, and that would raise some clear constitutional issues too, about equal protection. And Matt, I would defer to you on that kind of stuff because that's your expertise. But as a matter of fact, the AI didn't ask that kind of question. So it was making mistakes that were systematically biased in a way that was race-based, and it was difficult to explain why it was happening. These are the kind of problems. This is the kind of opaque decision making that's taking place by artificial intelligence in a lot of different contexts. And when it comes to things like distributing benefits and burdens, when it comes to deciding prison sentences, this is something that we should be taking a really close and careful look at.
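To make concrete the kind of audit Ryan describes here - checking whether a risk tool makes different kinds of mistakes for different groups - the sketch below shows one way such a check might look in Python. The data, group labels, and function names are hypothetical illustrations; this is not ProPublica's actual analysis or code, which was run against the real COMPAS dataset.

# Illustrative sketch only: comparing error types across groups, roughly the kind
# of disparity ProPublica reported. All data and names here are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    # records: iterable of (group, predicted_high_risk, reoffended) tuples
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted_high_risk, reoffended in records:
        c = counts[group]
        if reoffended:
            c["pos"] += 1
            if not predicted_high_risk:
                c["fn"] += 1  # rated low risk, but did re-offend
        else:
            c["neg"] += 1
            if predicted_high_risk:
                c["fp"] += 1  # rated high risk, but did not re-offend
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Toy, made-up example: the two groups receive different kinds of errors,
# even though race (or group membership) is never an explicit input feature.
sample = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, True), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True), ("group_b", True, True),
]
print(error_rates_by_group(sample))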

Ariel: Okay. So I want to stick with you just for a minute or two longer. As AI advances and as we're seeing it become even more capable in coming years, what are some of the ethical issues that you anticipate cropping up?

Ryan: Besides the question of transparency versus opacity, the question of whether we can understand and scrutinize and interrogate the way that artificial intelligence is making these decisions, there are some other concerns, I think, with AI. One of these is about the way that the benefits of AI will be distributed. There's been a lot of ink spilled, especially recently, just in the last couple of years, about automation and the threat that automation poses to unemployment. Some of the numbers being reported here, even out of studies coming from places like Oxford, are quite alarming. They say, for example, that as many as 50% of American jobs could be eliminated by automation just in the next couple decades. Now, even if those estimates are off by an order of magnitude - even if it's merely 5% of jobs - we're still talking about several million people or tens of millions of people being automated out of a job in a very short span. That's a kind of economic shock that we're not always used to responding to. So it will be an open question about how society, how the government, how the economy's able to respond to that.
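As a rough back-of-the-envelope illustration of that order-of-magnitude point: the labor force figure below is an assumed round number for illustration, not one cited in the interview or in the Oxford study.

# Back-of-the-envelope illustration only; the labor force size is an assumed round number.
labor_force = 150_000_000  # roughly the size of the US labor force

for share in (0.50, 0.05):  # the headline-style estimate, and the same figure cut by 10x
    affected = share * labor_force
    print(f"{share:.0%} of jobs is roughly {affected / 1_000_000:.0f} million workers")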

And to get to the ethical point, besides the obvious fact that being unemployed is bad and having unemployed people is bad for society in several ways, it raises more foundational questions, I think. Questions I've been thinking about a bit recently - about, for example, the way that we think about work, the way that we think about wages, the way that we think about people having to "earn a living" or "contribute to society." These are moral claims, claims about when someone should be kept alive, basically. The idea that someone needs to work in order to be kept alive. And many of us, or most of us, walk around with some kind of moral claim like this in our back pocket without fully considering it or its implications, I think. And I think that automation, just to give one more example, is really going to put some challenges to that.

So I think that those are some pretty clear concerns. There are other concerns with specific examples of artificial intelligence in different contexts. I suppose later today we'll talk about driverless cars or autonomous weapons or "killer robots." And those raise their own interesting ethical problems, and even farther down the line if we want to get really wild and outlandish, there are questions about whether artificial intelligences could ever become artificially conscious, and if that's the case, would robots be entitled to the same kinds of legal rights or moral rights that you and I have.

That question is a bit more farfetched and a bit more science fiction, but many people think that that kind of artificial consciousness is something we might see sometime this century.

Ariel: Okay. Thank you. Matt, I do want to get into autonomous vehicles and weapons and all of that stuff here soon, but first I wanted to ask you a very similar question: what are some of the big legal issues facing us today when it comes to artificial intelligence?

Matt: One interesting thing about artificial intelligence in the legal sphere is that it's still largely a blank slate, and we are just now starting to see the first sets of what you might call hard law coming down that relates to artificial intelligence. That's specifically in the area of autonomous vehicles.

Up until now really, there have been lots of artificial intelligence systems operating, particularly on the internet, but there's really been no government regulation that treats artificial intelligence as in some way different from any other product or technology that has been developed in the past. The law has basically been able to operate under the old assumptions and principles of our legal system.

I think that eventually that's going to become very difficult. The reasons for that are severalfold. The first, I'd actually say, is simply that machines aren't people. And the way that legal systems across the entire world work is by assigning legal rights and legal responsibilities to people. The assumption is that any sort of decision that has an impact on the life of another person is going to be made by a person. So when you have a machine that is making the decisions rather than humans, one of the fundamental assumptions of our legal system goes away. Right now, it's not that big of a deal because in most spheres, we are not delegating very important decisions to AI systems. That's starting to change a little bit, but we seem to be content right now with taking a wait and see approach. I think eventually that's going to become very difficult because certainly there seems to be the promise of AI disrupting and displacing human decision makers out of a wide variety of sectors and industries. And as that happens, it's going to be much more complicated to come up with lines of legal responsibility and liability.

Another major issue that I think is going to come up is one that Ryan didn't just touch on, but very much highlighted, and that is transparency. That is already becoming, I think, a critical focus - perhaps the issue on which people with concerns about or interest in the safety, ethics, and law of AI have focused most. Transparency is, I think, a natural response. You want transparency for things that you don't understand. That's one reason why I think a lot of people who are interested in this space have focused on transparency as a kind of control method or a way of ensuring the safety of AI without directly regulating it. They're hoping to convince manufacturers and designers of these systems to make them more transparent. I think that's a great idea, and I really do think that in the modern information age, transparency is perhaps the best guarantor of safety that we have for technologies. But I don't think it's a cure-all, and one of the issues that we're going to run into - one that I don't think we can even really comprehend at this point - is what our society is going to be like 50 years from now if a huge number of industries, ranging from medicine to law to financial services to you name it, are in large part being run by the decisions of machines. We just don't know what that society will look like. And at some point, even if the humans understand how these systems work and have a decent understanding of their methods of operation, once we live in a society where critical decisions are routinely made by machines, the question is how much control humans can really say that they still have in that circumstance. That's going to create all sorts of ethical and legal challenges down the road.

Ariel: So even just with this example that Ryan gave, what happens legally with, say the defendants who get harsher sentences or something? Can they sue someone? Do you know what's happened with that?

Matt: I have not heard specifically about whether there has been legal action taken against the manufacturers of the systems that were involved in that. Obviously, the defendants have the option of individually appealing their sentences and pointing out that these systems are subject to systematic biases and errors. This is one reason why I'm glad to have the opportunity to speak to other people who are working in this space because this is actually an issue that hadn't been brought to my attention.

Right now, no, I don't think there is any clear line of accountability for the people who designed and operated the AI system. I think that our legal system probably is operating under the assumption that the existing remedies for criminal defendants - appeal, and if that fails, habeas corpus and other forms of post-conviction relief that use Latin terms I won't bore you with - are sufficient.

But I don't think there's any system in place that really addresses the broader issue - that we need to not just address these biases and errors after the fact in individual cases, but somehow hold someone accountable for allowing these mistakes to happen in the first place. I don't think that we're there right now.

Ariel: Okay. And then, I want to go back to, you were talking earlier about decision making with autonomous technologies, and one of the areas where we see this starting to happen and likely happening more is of course with both self-driving cars and autonomous weapons. Ryan, I know that's an area of focus for you, and Matt, I think these are areas where you've also done some research. So I was hoping you could both talk about what some of the ethical and legal implications are in those two specific spheres.

Matt: I actually would like to back up a quick second and just comment on something that Ryan said at the very beginning of our conversation. And that's that I completely agree that there's an interplay between law and ethics, and that they kind of work in parallel towards the same goal. And I actually would bring in yet another field to explain what the goal of both law and ethics is, and that's to maximize society-wide utility. The idea I think behind both law and ethics is how do we make a better society, and how do we structure people's behavior in the way that makes everybody most satisfied with being in that society.

One reason why I think autonomous vehicles are such a hot topic and such a huge issue is that for the past century, motor vehicles have been by far the dominant way in which people in the industrialized world move around. And with the advent of autonomous systems taking over those functions, you're basically taking the entire transportation system of the world and putting a large amount of the control of it in the hands of autonomous machines. It's fascinating in a way that one of the first areas where it seems like AI is going to have its breakthrough moment in the public consciousness is arguably one of the most high-visibility industries in the world.

Right now, we are just starting to see regulations that kind of get rolled out in the space of autonomous vehicles. California released, I think about two weeks ago, their final set of draft regulations for autonomous vehicles, and I actually was starting to read through them this morning, and it's a very fast-moving field. Part of the problem with relying on law to set standards of behavior is that law does not move as fast as technology does. I think that it's going to be a long time still before the really critical changes in our legal systems and the various legal regimes governing automobiles are changed in a way that allows for the widespread deployment of autonomous vehicles.

Now, what happens in the meantime is that human drivers will continue operating motor vehicles for probably the next decade in the vast majority of cases, but I think we're going to see lots of specific driving functions being taken over by automated systems. I think one thing that I could certainly envision happening in the next 10 years is that pretty much all new vehicles, while they're on an expressway, are controlled by an autonomous system, and it's only when they get off an expressway and onto a surface street that they switch to having the human driver in control of the vehicle.

So, little by little, we're going to see this sector of our economy get changed radically. And because I don't think the law is well-equipped to move fast enough to manage the risks associated with it, I think that it's important to talk about the ethical issues involved, because in many ways, I think the designers of these systems are going to need to be proactive in ensuring that their products work to the benefit of the consumers who purchase them and the public around them that is interacting with that vehicle.

Ariel: So on that note, Ryan?

Ryan: Yeah, I think that's supremely well put, and there's something that you said, Matt, that I want to highlight and reiterate, which is that technology moves faster than the law does. And I'll give my own mea culpa here on behalf of the field of ethics, on behalf of moral philosophy, because certainly technology often moves faster than moral philosophy moves, too.

And we run into this problem again and again. One of my favorite philosophers of technology, Langdon Winner, is a professor in New York, and his famous view is that we are sleepwalking into the future of technology. We're continually rewriting and recreating these structures that affect human life and how we'll live, how we'll interact with each other, what our relationships with each other are like, what we're able to do, what we're encouraged to do, what we're discouraged from doing. We continually recreate these kinds of constraints on our world, and we do it oftentimes without thinking very carefully about it. Although I might even try to heighten what he said by saying we're not just sleepwalking into the future, but sometimes it seems like we're trying to ‘sleep run’ into the future, if such a thing is possible, just because technology seems to move so fast. To paraphrase, or to steal a line from, Winston Churchill: technology seems to get halfway around the world before moral philosophy can put its pants on. And we're seeing that happening here with autonomous vehicles.

I think that there are a lot of serious ethical issues that the creation and the deployment of autonomous vehicles raise. The tragedy, to my mind, is that manufacturers are still being very glib about these. For example, they find it hard to believe that the decision of how and when to brake or accelerate or steer is a morally loaded decision. To reiterate something that I said earlier in this interview, any decision that has an effect on another person - and Matt, you said something similar about the law: what kinds of decisions is the law worried about? Well, any kind of decision that a human being makes that affects another person - that's something about which the law might have something to say.

The same is true for moral philosophy. Any kind of decision that has an impact on someone else's well being, especially when it's something like trying to avoid a crash, you're talking about causing or preventing serious injury to someone or maybe even death. We know that tens of thousands of people die on US roads every year. Oftentimes those crashes involve choices about who is going to be harmed and who's not, even if that's, for example, a tradeoff between someone outside the car and a passenger or a driver inside the car.

These are clearly morally important decisions, and it seems to me that manufacturers are still trying to brush these aside. They're either saying that these are not morally important decisions, or they're saying that the answers to them are obvious, to which the hundreds of moral philosophers in the country would protest. They're certainly not always questions with obvious answers. Or if they're difficult answers, if the manufacturers admit that they're difficult answers, then they think, ‘well the decisions are rare enough that to agonize over them might postpone other advancements in the technology’. That would be a legitimate concern if it were true that these decisions were rare, but there are tens of thousands of people killed on US roads and hundreds of thousands who are injured every year. So these occurrences that involve moral tradeoffs between people are not rare.

Ariel: Okay. I'd like to also look at autonomous weapons, which pose their own interesting ethical and legal dilemmas, I'm sure. Ryan, can you start off on that a little bit, talking about what your take on some of the ethical issues are?

Ryan: Sure. Autonomous weapons are interesting and fascinating, and they have perhaps an unmatched ability to captivate the public interest and the public imagination, or at least the public nightmares. I think that's because pretty much all of the depictions of autonomous weapons we're familiar with are things like Terminator or 2001, if you consider HAL to be a killer robot. These are cases in which autonomous weapons are being portrayed as harbingers of doom or as these unstoppable, cold, unthinking killing machines.

The public has a great deal of anxiety and trepidation about autonomous weapons, and I think a lot of that is merited. So I begin with an open mind, and I begin by assuming that the public could very well be right here. There could very well be something that's uniquely troubling, uniquely morally problematic about delegating the task of who should live and who should die to a machine. But once we dig into these arguments, my colleagues and I, or the people that I co-author with, it’s hard to pinpoint. It's extremely difficult to pinpoint exactly what's problematic about killer robots. And once again, we find ourselves plumbing the depths of our deepest moral commitments and our deepest moral beliefs, beliefs about what kinds of things are valuable and how we should treat other people and what the value of human life is, and what makes war humane or inhumane. These are the questions that autonomous weapons raise.

So there are some very obvious, sort of practical concerns. We might think, for example - and we'd be right to think, today - that machines probably aren't reliable enough to make decisions in the heat of battle, to make discernments about which people are legitimate combatants, which people are legitimate targets, and which people are not - what kinds of people are civilians or noncombatants who should be spared.

But if we imagine a far off future, if we imagine a future where robots don't make those kinds of mistakes, those kinds of empirical mistakes where they're trying to determine the state of affairs around them, where they're trying to determine not just whether someone is wearing a uniform, but for example, whether they're actively contributing to hostilities. This is the kind of language that international law uses.

If we imagine a situation where robots are actually pretty good at making those kinds of decisions where they're perhaps even better behaved than human soldiers, where they don't get confused, they don't get angry or vengeful, they don't see their comrade killed right next to them and go on a killing spree or go into some berserker rage. And we imagine a situation where they're not racist, or they don't have the kinds of biases that humans are often vulnerable to.

In short, if we imagine a scenario where we can greatly reduce the number of innocent people killed in war, the number of people killed by collateral damage, this starts to exert a lot of pressure on that widely held public intuition that autonomous weapons are bad in themselves, because it puts us in the position then of insisting that we continue to use human war fighters to wage war even when we know that will contribute to many more people dying from collateral damage. To put it simply, that's an uncomfortable position for someone to be in. That's an uncomfortable position to defend. Those are the kinds of questions that we investigate when we think about the morality of autonomous weapons. And of course, if you're interested, I could go over lots of the moral arguments on either side, but that's a very broad bird's eye view of where the conversation stands now.

Ariel: So actually, one question that comes to mind when you're talking about this: even if you can make an argument that there are ethical reasons for using autonomous weapons, I would be worried that you're going to get situations that are very similar to what we have now, where the richer, more powerful countries have these advanced technologies, and the poorer countries that don't have them are the ones that are getting attacked.

Ryan: I think you're absolutely right about that. That is a very real concern. It's what we might call a secondary concern about autonomous weapons, because if you had a position like that, you might say something like this, well there's nothing wrong intrinsically with using a robot or using a machine to decide who should live and who should die. We'll put that question aside, but we still prefer to live in a world where nobody has autonomous weapons, rather than in a world in which they are unequally distributed and where this leads to problematic differentials in power or domination and where it cements those kinds of asymmetries on the international stage.

If you had that position, I think you'd be quite reasonable. That could very well be my position. That's a position that I'm very, very sympathetic to, but you'll notice that it's a position that sidesteps the more fundamental question, the more fundamental moral question of what's wrong with using killer robots in warfare. Although I wholeheartedly agree that a world with no autonomous weapons might very much be better than a world in which some people have them and some people don't.

Ariel: One of my other questions, Matt, is going to be more directed towards you, and that is especially as we're transitioning into autonomous weapons that would be more advanced, how do we deal with accountability?

Matt: Well, first, if you don't mind, I'd like to talk about a few of the points Ryan made. First off, Ryan, there is almost nothing you said that I disagree with. In fact, there was nothing that you said that I noticed that I disagree with.

Ryan: Good to know.

Matt: One thing I want to highlight is that it really seems to me that many of the arguments against autonomous weapons are arguments that could be applied equally to almost any other type of military technology: the potential for misuse, the fact that wealthier countries are going to have easier access to them than poorer countries. The only argument I hear that is unique to autonomous weapons is that it's just morally wrong to delegate decisions about who lives and dies to machines - but of course, that's going to be an issue with autonomous vehicles, too. Autonomous vehicles will in all likelihood have to make split-second decisions about whether to take a course of action that will result in the death of a passenger in their car or passengers in another car. There are all sorts of moral tradeoffs. I don't think we necessarily have an inherent issue in letting a machine decide whether a human life should end. And of course, Ryan almost took the words out of my mouth when he described how there are plenty of reasons to think that in a lot of ways autonomous weapons could be superior at military decision making, in terms of avoiding the rash decisions that result in the loss of human life.

That being said, one very real fear that I have about the rise of autonomous weapons is that they are going to be inherently capable of reacting on time scales shorter than those on which humans can react. I can very easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon. Eventually, having humans involved in a military conflict will be the equivalent of bringing bows and arrows to a battle in World War II. There's no way that humans, with their slow reaction times, will be able to effectively participate in warfare.

That is a very scary scenario, because at that point, you start to wonder where human decision makers can enter into the military decision making process that goes on in warfare. And that, I think, goes back to the accountability issue that you brought up. The issue with accountability right now is that there are very clear, well-established laws in place about who is responsible for specific military decisions, under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, and under what circumstances the nation is held accountable. That's going to become much blurrier when the decisions are not being made by human soldiers at the ground level, but rather by autonomous systems. It's going to become even more complicated as machine learning technology is incorporated into these systems, where they learn from their observations and experiences - if you want to call it that - in the field on the best way to react to different sorts of military situations.

On the same point, it would seem to me to be almost unfair to hold the original manufacturer of an autonomous system responsible for the things that that system learned after it was outside the creator's control. That issue is just especially palpable in the autonomous weapons sphere because there's obviously no more stark and disturbing consequence of a bad decision than the death of a human being.

I think that, as with all other areas of autonomous system decision making, it isn't clear where the lines of accountability will lie. And we are going to need to think about how we want to develop very specific rules - assuming that there isn't a complete ban on autonomous weapons - that make it clear where the lines of responsibility lie. I suspect that is going to be a very, very vigorously disputed conversation between the different interest groups that are involved in establishing the rules of warfare.

Ariel: Sort of keeping in line with this, but also changing. Matt, in recent talks, you mentioned that you're less concerned now about regulations for corporations because it seems like at the moment corporations are making an effort to essentially self-regulate. I was hoping you could talk a little bit about that. Then I'm also really interested in how that compares to concerns about government misusing AI and whether self-regulation is possible with government, or whether we need to worry about that at all.

Matt: Right. This is actually a subject that's very much at the forefront of my mind at the moment, because the next paper that I'm planning to write is kind of a followup to a paper that I wrote a couple years ago on regulating artificial intelligence systems. It's not so much that I have great faith in corporations and businesses to act in the best interest of society. There are serious problems with self-regulation that are difficult to overcome, and one of those is what I call the ‘fox guarding the hen house’ problem. If an industry is going to come up with the rules that govern it, well, it's going to come up with rules that benefit the industry rather than rules that benefit the broader public, or at least that's where its incentives lie. That's proven to be basically an insurmountable obstacle for self-regulation in the vast majority of sectors, really over the past couple of centuries. But that time scale, the past couple of centuries, is very important, because the past couple of centuries is when the industrial revolution happened. That was really a sea change moment, not just in our economy and in our society, but also in the law. Basically every regulatory institution that you think of today - whether it is a government agency, whether it is the modern system of products liability in the United States, whether it's legislative oversight of particular industries - is essentially a creation of the post-industrial world. Governments realized that as large companies started to increasingly dominate sectors of the economy, the only way to effectively regulate an increasingly centralized economy is to have a centralized form of regulation.

Now, I think we are living in an age where, with the advent of the internet, there is an inherently decentralizing force at work. I wouldn't say that it is so much that I'm confident that companies are going to be able to self-regulate, but in a decentralizing world, we're going to have to think of new paradigms of how we want to regulate and govern the behavior of economic actors. Some of the old systems of regulation or risk management that existed before the industrial revolution might make more sense now that we're seeing a trend towards decentralization rather than centralization. It might make sense to reexamine some of those decentralized forms of regulation, and one of those is industry standards and self-regulation.

I think that one reason why I am particularly hopeful in the sphere of AI is that there really does seem to be kind of a broad interest among the largest players in the AI world in proactively coming up with rules of ethics and transparency, in a way that we generally just haven't seen in the age since the Industrial Revolution. One of the reasons that self-regulation hasn't worked isn't just the ‘fox guarding the hen house’ problem; it's also that companies are inherently skeptical of letting on anything that might let other companies know what they're planning to do.

There certainly is a good deal of that. People are not making all AI code open source right now, but there seems to be a much higher tolerance for transparency in AI than there was in previous generations of technology. I think that's good because, again, in an increasingly decentralized world, we're going to need to come up with decentralized forms of risk management, and the only way to effectively have decentralized risk management is to make sure that the key safety-critical aspects of the technology are understandable and known by the individuals or groups that are tasked with regulating it.

Ariel: Does that also translate to concerns about, say government misusing autonomous weapons or misusing AI to spy on their citizens or something? How can we make sure that governments aren't also causing more problems than they're helping?

Matt: Well, that is a question that humans have been asking themselves for the past 5,000-plus years. I don't think we're going to have a much easier time with it, at least in the early days of the age of intelligent machines, than we have in the past couple of centuries. Governments have a very strong interest in basically looking out for the interests of their countries. One kind of macro trend on the world stage today, unfortunately, is increasingly nationalist tendencies. That leads me to be more concerned than I would have been 10 years ago that these technologies are going to be co-opted by governments, and, kind of ironically, that it's going to be the governments rather than the companies that are the greatest obstacle to transparency, because they will want to establish some sort of national monopoly on the technologies within their borders.

It's kind of an interesting dynamic. I feel like Google DeepMind and Microsoft and Facebook and Amazon and a lot of these companies are much higher on the idea of encouraging transparency and multinational cooperation than governments are, and that's exactly the opposite trend of what we have come to expect over at least the last several decades.

Ariel: Ryan, did you want to weigh in on some of that?

Ryan: Yeah, I think that it is an interesting reversal in the trend, and it makes me wonder how much of that is due to the fusion of the internet and its ability to make that decentralized management or decentralized cooperation possible. There's one thing that I would like to add, and it's sort of double-edged. I think that international norms of cooperation can be valuable. They're not totally toothless. We see this for example when it comes to the Ottawa Treaty, the treaty that banned landmines. The United States is not a signatory to the Ottawa Treaty that banned anti-personnel landmines, but because so many other countries are, there exists a very strong norm, the sort of informal stigma that's attached to it, even for the United States that if we used something like anti-personnel landmines in battle, we'd face the kind of backlash or the kind of criticism that's probably equivalent to if we had been signatories of that treaty, or roughly equivalent to it.

So international norms of cooperation and these international agreements, they're good for something, but we often find that they're also fragile at the same time. For example, in much of the western world, there has existed, and there still exists a kind of informal agreement that we're not going to experiment on human embryos, or we're not going to experiment by modifying the genetics of human embryos, say for example with the CRISPR enzyme that we know can be used to modify the genetic sequence of embryos.

So it was a bit of a shock a year or two ago when some Chinese scientists announced that they were doing just that. I think it was a bit of a wake up call to the West to realize, oh, we have this shared understanding of moral values and this shared understanding of things like, we'd call it the dignity of human life or something like that, and it's deeply rooted probably in the Judeo-Christian tradition, and it goes back several thousands of years, and it unites these different nation-states because it's part of our shared cultural heritage. But those understandings aren't universal, and those norms aren't universal, and I think it was a valuable reminder that when it comes to things that are as significant as, say, modifying the human genome with enzymes like CRISPR, or probably with autonomous weapons and artificial intelligence more generally, those kinds of inventions are so significant and they have such profound possibilities for reshaping human life that I think we should be working very stridently to try to arrive at some international agreements that are not just toothless and not just informal.

Ariel: I sort of want to go in a different direction and ask about fake news, which seems like it should have been a trivial issue, and yet it's been credited with impacting things like presidential elections. Obviously, there are things like libel and slander laws, but I don't really know how those apply to issues like fake news. And I'm interested in, and fearful of, the idea that as AI technologies improve, we're going to be able to do things like take videos of people speaking and really change what they're saying, so it sounds like someone said something completely different on video, which will exacerbate the fake news problem. I was really interested in what you both think of this from a legal standpoint and an ethical standpoint.

Matt: Fake news almost distills to its essence the issue of decentralization - the problems and risks that come with the kind of democratization of society that the internet has brought about. The reason is that not that long ago, in my parents' generation, the way that you got the news was from the newspaper; that was the sole trusted source of news for many people. In many ways, that was a great system because everybody had this shared set of facts, this shared set of ideas about what was going on in the world. Over time, that got diluted somewhat with the advent of TV and increasing reliance on radio news as well, but by and large, there was still a fairly limited number of outlets, all governed by similar journalistic standards, and all adhering to broadly shared norms of conduct. The rise of the internet has opened up an opportunity for that to get completely tossed by the wayside, and I think that the fake news debacle that really happened in the presidential election of last year is a perfect example of that.

There are now so many different sources for news, so many news outlets available on the internet, so many different sources of information that people can access that don't even purport to be news, and others that do purport to be news but really are just opinions or, in some cases, completely made up stories. In that sort of environment, it becomes increasingly difficult to decide what is real and what is not, and what is true and what is not. And there is a loss that we are starting to see in our society of that shared knowledge of facts - actually, we're not just starting to see it; we've already lost a good bit of that. There are literally different sets of not just worldviews, but of worlds, that people see around them. And the fake news problem is really that a lot of people are going to believe any story that they read that fits with their preexisting conception of what the world is.

I don't know that the law has a good answer to that. Part of the reason is that a lot of these fake news websites aren't commercial in nature. They're not intentionally trying to make large amounts of money on this, so even if a fake news story does monumental damage, the person who created that content is probably not going to be a source of accountability - you're not going to be able to recoup the damages to your reputation from that person or that entity. And it seems kind of unfair to blame platforms like Facebook, frankly, because it would almost be useless to have a Facebook where Facebook had to vet every link that somebody sent out before it could get sent out over their servers. That would eliminate the real time community-building that Facebook tries to encourage.

So it's really an insurmountable problem. It's really I think an area where it's just difficult for me to envision how the law can manage that, at least unless we come up with new regulatory paradigms that reflect the fact that our world is going to be increasingly less centralized than it has been during the industrial age.

Ariel: And Ryan?

Ryan: Yeah. I can say a little bit about that. I'll try to keep it at arm's length because at least for me, it's a very frustrating topic. Going back to the 2008 election is where I can pinpoint at least my awareness of this and my worry about this, when I think The New York Times ran a story about John McCain that was not very favorable, and I think it was a story that alleged that he had a secret mistress. I remember John McCain's campaign manager going on TV and saying that The New York Times was hardly a journalistic organization. I thought, this is not a good sign. This is not a good sign if we're no longer agreeing on the facts of the situation and then disagreeing on what is to be done, say, disagreeing about how we make tradeoffs between personal initiative versus redistribution to help the less fortunate or something, the sort of classic conversations that America's been having with itself for hundreds of years. We're not even having those kinds of conversations because we can't even agree on what the world looks like out there, and we can't even agree about who's a trusted messenger. The papers of record - The New York Times, The Washington Post - have been relentlessly assailed as unreliable sources, and that's really a troubling development. I think it crested or it came to a head, although that implies that it's now entering a trough, that it's now on the downswing, and I'm not sure that that's true, but at least there was a kind of climax of this in the 2016 election where fake news stories were some of the most widely shared articles on Facebook, for example, and I think that this plays into some human weaknesses like confirmation bias and the other cognitive biases that we've all heard of. Personally, I think it's just an unmitigated catastrophe for the public discourse in the country.

I think we might have our first official disagreement between me and Matt in the time that we've been speaking, because I'm slightly less sympathetic to Facebook and their defense. Mark Zuckerberg has said that he wants Facebook to be everybody's "primary news experience." And they have the ability to control, not which news stories appear on their site, but which news stories are promoted on their site, and they've exercised that capacity. They exercised that capacity last year in the early days of the campaign, and they attracted a great deal of controversy, and they backed off from that. They removed their human moderators from the trending news section, and three days later, we found fake news stories in the top trending news section, once it was being moderated by algorithms.

I'm a little less sympathetic to Facebook because I don't think they can play the role that would have traditionally been filled by a newspaper editor, profit off it, declare it their intention to fill that role in society, and then totally wash their hands of any kind of obligation to shape the public discourse responsibly. So I wish that they were doing more. I wish that they were at least accepting the responsibility for that.

I hasten to add that they have accepted responsibility recently, and they've implemented some new features to try to curtail this, but as some of my other colleagues have pointed out, it's not obvious how optimistic we should be about those features. So for example - I laugh, but it's a grim, ironic laughter - one of Facebook's features to try to combat this is to allow users to flag stories as suspicious, but of course, if users were reliable detectors of which stories were accurate and which were fake, we wouldn't be in this predicament. So it's not clear how much that can really do to solve the problem.

I think this is a pretty significant tragedy for American political discourse, especially when the stakes are so high, and they're only getting higher with things like climate change, for example, or income inequality, or the kinds of things that we've been talking about today. It's more important than ever that Americans are able to have mature, intelligent, informed, careful conversations about the matters that affect them and affect several billion other people that we share this planet with. I don't, however, have a quick and easy solution. I'm afraid that for now, I'm just left wringing my hands with worry, and there's not much else I can think to do about it, at least for the time being.

Ariel: And Matt?

Matt: I think that what Facebook is going to have a problem with is the same issue that the operators of any internet site have with hacker bots. Think of those internet CAPTCHAs that you see when you try to log in to a website, which test whether you're human. There's a reason that they change every few months, it seems like, and that's because people have figured out how to get past the previous ways of filtering out these bots that were not actual people trying to log in.

I think that you're going to see, unfortunately, a similar phenomenon with Facebook. They could make their best possible efforts to root out fake news, and I could easily see that not being enough, because their user base is so filled with people who are only really looking to see affirmation of their own worldview. If that's the mindset that we have in society, I don't know that Facebook is really going to be able to design their way around that. And in many ways, I think that the loss of a shared worldview is going to raise even more thorny and difficult-to-resolve legal and ethical questions than the rise of artificial intelligence.

Ariel: Okay. Is there anything else that you think is important for people to know about that you're thinking about or something along those lines? Ryan?

Ryan: Yeah, if I could just leave people with a sort of generic injunction or a generic piece of advice: one of the contributions that we in moral philosophy see ourselves making to these conversations is not always offering clear cut answers to things. You'll find that philosophers, surprise, surprise, often disagree among themselves about the right answers to these things. But there is still a great value in appreciating when we're running roughshod over questions that we didn't even know existed. That, I think, is one of the valuable contributions that we can make here: to think carefully about the way that we behave, the way that we design our machines to interact with one another, and the kinds of effects that they'll have on society. And I would just caution people to be on the lookout for moral questions and moral assumptions that are being made - that are lurking in places where we didn't expect them to be hiding. It's been a continual frustration that pops up every now and then to hear people wave their hand at these things or try to wave them away when the moral philosophers are busy pounding their fists.

Something we've been trying to do is to get out and engage with the public, and engage with manufacturers or creators of artificial intelligence more, to help them realize that these are very serious questions. They're not easy to answer. They're controversial, and they raise some of the deepest questions that we've been dedicating ourselves to - what our profession has been focused on for thousands of years. They're worth taking seriously, and they're worth thinking about. I will say that it's endearing and reassuring that people are taking these questions very seriously when it comes to artificial intelligence, and I think that the advances we've seen in artificial intelligence in the last couple of years have been the impetus for that - the impetus for this sort of turn towards the ethical implications of the things we create.

Ariel: Thank you. And Matt?

Matt: I'm also heartened by the amount of interest that not just people in the legal world, but people in many different disciplines, have taken in the legal, ethical and policy implications of AI. I think that it's important to have these open dialogues about the issues created not just by artificial intelligence, but by the kind of parallel changes in society that are occurring - how they impact people's lives and what we can do to make sure they don't do us more harm than good.

I'm very glad that I got to hear Ryan's point of view on this. I think that on a lot of these issues, lawyers and legal scholars could very much stand to think about the broader ethical questions behind them, if for no other reason than that I think the law is becoming a less effective tool for managing the societal changes that are happening. And I don't think that that will change unless we think through the ethical questions and the moral dilemmas that are going to be presented by a world in which decisions and actions are increasingly undertaken by machines rather than people.

Ariel: Excellent. Well thank you very much. I really enjoyed talking with both of you.

Ryan: Yeah, thank you.

Matt: Thanks.
