Podcast: Law and Ethics of Artificial Intelligence

The rise of artificial intelligence presents not only technical challenges but also important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University, where he studies the ethics of technology.

In this podcast, we discuss accountability and transparency with autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.

Ariel: I typically think of ethics as the driving force behind law. As such, Ryan, I was hoping you could talk about the ethical issues facing us today when it comes to artificial intelligence.

Ryan: Broadly speaking, the mission of both ethics and law might be to discover how to best structure life within a community and to see to it that that community flourishes once we know certain truths. Ethics does some of the investigation into what kinds of things matter morally, what kinds of lives are valuable, and how we should treat other people. Law does an excellent job of codifying and enforcing those things.

One of the easiest ways of telling whether a decision is a moral decision is whether it stands to make some people better off and some people worse off. And we’re seeing that take place right now with artificial intelligence. That adds new wrinkles to these decisions because oftentimes the decisions of AI are opaque to us, they’re difficult to understand, they might be totally mysterious. And while we’re fascinated by what AI can do, I think the developers of AI have implemented these technologies before we fully understand what they’re capable of and how they’re making decisions.

Ariel: Can you give some examples of that?

Ryan: There was an excellent piece by ProPublica about bias in the criminal justice system, where they use risk assessment algorithms to judge, for example, a person’s probability of re-committing a crime after they’re released from prison.

ProPublica did an audit of this software, and they found that not only did it make mistakes about half the time, but it was also systematically underestimating the threat from white defendants and systematically overestimating the threat from black defendants. White defendants were being given more lenient sentences, while black defendants as a group were being given harsher sentences.

When the company that produced the algorithm was asked about this, they said, ‘Look, it takes in something like 137 factors, but race is not one of them.’ So it was making mistakes that were systematically biased in a way that was race-based, and it was difficult to explain why. This is the kind of opaque decision making that’s now being done by artificial intelligence.
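To make the kind of disparity Ryan describes concrete, here is a minimal, purely illustrative sketch of a group-wise error-rate audit in Python. The toy records, group labels, and field names are hypothetical and are not drawn from ProPublica’s data or methodology; the point is only to show what comparing false positive and false negative rates across groups looks like.

```python
# Illustrative only: a toy group-wise error-rate audit.
# The records below are invented; real audits use real outcome data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("B", False, True),  ("B", True,  True),  ("B", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, predicted_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not predicted_high:
            c["fn"] += 1  # labeled low risk but did reoffend
    else:
        c["neg"] += 1
        if predicted_high:
            c["fp"] += 1  # labeled high risk but did not reoffend

for group, c in sorted(counts.items()):
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"group {group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

In ProPublica’s reporting, the analogous comparison showed notably higher false positive rates for black defendants and higher false negative rates for white defendants, which is the asymmetry Ryan summarizes above.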

Ariel: As AI advances, what are some of the ethical issues that you anticipate cropping up?

Ryan: There’s been a lot of ink spilled about the threat that automation poses to employment. Some of the numbers coming out of places like Oxford are quite alarming. They say as many as 50% of American jobs could be eliminated by automation in the next couple of decades.

Besides the obvious fact that having unemployed people is bad for society, it raises more foundational questions about the way that we think about work, the way that we think about people having to “earn a living” or “contribute to society.” The idea that someone needs to work in order to be kept alive. And most of us walk around with some kind of moral claim like this in our back pocket without fully considering the implications.

Ariel: And Matt, what are some of the big legal issues facing us today when it comes to artificial intelligence?

Matt: The way that legal systems across the world work is by assigning legal rights and responsibilities to people. The assumption is that any decision that has an impact on the life of another person is going to be made by a person. So when you have a machine making the decisions rather than humans, one of the fundamental assumptions of our legal system goes away. Eventually that’s going to become very difficult, because there seems to be the promise of AI displacing human decision-makers from a wide variety of sectors. As that happens, it’s going to be much more complicated to come up with lines of legal responsibility.

I don’t think we can comprehend what society is going to be like 50 years from now if a huge number of industries ranging from medicine to law to financial services are in large part being run by the decisions of machines. At some point, the question is how much control can humans really say that they still have.

Ariel: You were talking earlier about decision making with autonomous technologies, and one of the areas where we see this is with self-driving cars and autonomous weapons. I was hoping you could both talk about the ethical and legal implications in those spheres.

Matt: Part of the problem with relying on law to set standards of behavior is that law does not move as fast as technology does. It’s going to be a long time before the really critical changes are made to our legal systems in a way that allows for the widespread deployment of autonomous vehicles.

One thing that I could envision happening in the next 10 years is that pretty much all new vehicles are controlled by an autonomous system while they’re on an expressway, and it’s only when they get off the expressway and onto a surface street that control switches back to the human driver. So, little by little, we’re going to see this sector of our economy change radically.

Ryan: One of my favorite philosophers of technology [is] Langdon Winner. His famous view is that we are sleepwalking into the future of technology. We’re continually rewriting and recreating these structures that affect how we’ll live, how we’ll interact with each other, what we’re able to do, what we’re encouraged to do, what we’re discouraged from doing. We continually recreate these constraints on our world, and we do it oftentimes without thinking very carefully about it. To steal a line from Winston Churchill, technology seems to get halfway around the world before moral philosophy can put its pants on. And we’re seeing that happening with autonomous vehicles.

Tens of thousands of people die on US roads every year. Oftentimes those crashes involve choices about who is going to be harmed and who’s not, even if that’s a trade-off between someone outside the car and a passenger or a driver inside the car.

These are clearly morally important decisions, and it seems that manufacturers are still trying to brush them aside. They’re either saying that these are not morally important decisions, or they’re saying that the answers to them are obvious. They’re certainly not always questions with obvious answers. Or, if the manufacturers admit that these are difficult questions, then they think, ‘well, the decisions are rare enough that agonizing over them might postpone other advancements in the technology’. That would be a legitimate concern if these decisions really were rare, but tens of thousands of people are killed on US roads and hundreds of thousands are injured every year.

Ariel: I’d like to also look at autonomous weapons. Ryan, what’s your take on some of the ethical issues?

Ryan: There could very well be something that’s uniquely troubling, uniquely morally problematic about delegating the task of who should live and who should die to a machine. But once we dig into these arguments, it’s extremely difficult to pinpoint exactly what’s problematic about killer robots. We’d be right to think, today, that machines probably aren’t reliable enough to make discernments in the heat of battle about which people are legitimate targets and which people are not. But if we imagine a future where robots are actually pretty good at making those kinds of decisions, where they’re perhaps even better behaved than human soldiers, where they don’t get confused, they don’t see their comrade killed and go on a killing spree or go into some berserker rage, and they’re not racist, or they don’t have the kinds of biases that humans are vulnerable to…

If we imagine a scenario where we can greatly reduce the number of innocent people killed in war, this starts to exert a lot of pressure on that widely held public intuition that autonomous weapons are bad in themselves, because it puts us in the position then of insisting that we continue to use human war fighters to wage war even when we know that will contribute to many more people dying from collateral damage. That’s an uncomfortable position to defend.

Ariel: Matt, how do we deal with accountability?

Matt: Autonomous weapons are inherently going to be capable of reacting on time scales shorter than the time scales on which humans can react. I can easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon. Eventually, having humans involved in the military conflict will be the equivalent of bringing bows and arrows to a battle in World War II.

At that point, you start to wonder where human decision makers can enter into the military decision-making process. Right now there are very clear, well-established laws in place about who is responsible for specific military decisions: under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, and under what circumstances the nation is held accountable. That’s going to become much blurrier when the decisions are not being made by human soldiers, but rather by autonomous systems. It’s going to become even more complicated as machine learning technology is incorporated into these systems, where they learn from their observations and experiences in the field about the best way to react to different military situations.

Ariel: Matt, in recent talks you mentioned that you’re less concerned about regulations for corporations because it seems like corporations are making an effort to essentially self-regulate. I’m interested in how that compares to concerns about government misusing AI and whether self-regulation is possible with government.

Matt: With the advent of the internet, we are living in an age shaped by an inherently decentralizing force. In a decentralizing world, we’re going to have to think of new paradigms for how to regulate and govern the behavior of economic actors. It might make sense to reexamine some of those decentralized forms of regulation, and one of those is industry standards and self-regulation.

One reason why I am particularly hopeful in the sphere of AI is that there really does seem to be a broad interest among the largest players in AI in proactively coming up with rules of ethics and transparency, in ways that we generally just haven’t seen since the Industrial Revolution.

One unfortunate macro trend on the world stage today is increasingly nationalist tendencies. That leads me to be more concerned than I would have been 10 years ago that these technologies are going to be co-opted by governments, and, ironically, that it’s going to be governments rather than companies that are the greatest obstacle to transparency, because they will want to establish some sort of national monopoly on the technologies within their borders.

Ryan: I think that international norms of cooperation can be valuable. The United States is not a signatory to the Ottawa Treaty that banned anti-personnel landmines, but because so many other countries are, an informal stigma is attached to their use: if we used anti-personnel landmines in battle, we’d face a backlash roughly equivalent to what we would face if we had been signatories of that treaty.

So international norms of cooperation are good for something, but they’re also fragile. For example, in much of the western world there has existed an informal agreement that we’re not going to experiment by modifying the genetics of human embryos. So it was a shock a year or two ago when some Chinese scientists announced that they were doing just that. I think it was a wake-up call to the West to realize those norms aren’t universal, and it was a valuable reminder that things as significant as modifying the human genome, or autonomous weapons and artificial intelligence more generally, have such profound possibilities for reshaping human life that we should be working very stridently to try to arrive at some international agreements that are not just toothless and informal.

Ariel: I want to go in a different direction and ask about fake news. I was really interested in what you both think of this from a legal and ethical standpoint.

Matt: Because there are now so many different sources for news, it becomes increasingly difficult to decide what is real. And there is a loss that we are starting to see in our society of that shared knowledge of facts. There are literally different sets of not just worldviews, but of worlds, that people see around them.

A lot of fake news websites aren’t entities with large amounts of money, so even if a fake news story does monumental damage, you’re not going to be able to recoup the damages to your reputation from that person or that entity. It’s an area where it’s difficult for me to envision how the law can manage that, at least unless we come up with new regulatory paradigms that reflect the fact that our world is going to be increasingly less centralized than it has been during the industrial age.

Ariel: Is there anything else that you think is important for people to know?

Ryan: There is still great value in appreciating when we’re running roughshod over questions that we didn’t even know existed. That is one of the valuable contributions that [moral philosophers] can make here: to think carefully about the way that we behave, the way that we design our machines to interact with one another, and the kinds of effects that they’ll have on society.

It’s reassuring that people are taking these questions very seriously when it comes to artificial intelligence, and I think that the advances we’ve seen in artificial intelligence in the last couple of years have been the impetus for this turn towards the ethical implications of the things we create.

Matt: I’m glad that I got to hear Ryan’s point of view. The law is becoming a less effective tool for managing the societal changes that are happening. And I don’t think that that will change unless we think through the ethical questions and the moral dilemmas that are going to be presented by a world in which decisions and actions are increasingly undertaken by machines rather than people.

This podcast and transcript were edited by Tucker Davey.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.