
Patrick Lin Interview

Published: April 13, 2017
Author: Ariel Conn


The following is an interview with Patrick Lin about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is an associate philosophy professor. He regularly gives invited briefings to industry, media, and government; and he teaches courses in ethics, political philosophy, philosophy of technology, and philosophy of law.

Q: From your perspective, what were the highlights of the conference?

"There's so many, I don't know if I can even limit myself to a few. I guess the top-level highlights for me were about the people. Not that the content wasn't interesting, but I'm already familiar with a lot of the positions and arguments; so for me, the most interesting thing was to just be around, to personally and finally meet in real life a lot of the folks that I've only known online, like Roman Yampolskiy, or by reputation only.

"Also, this particular conference you guys had was unique, far different from other AI conferences I've been to, in that you had such an incredible concentration of thought leaders from diverse fields, not just technologists but economists, ethicists, and so on. This was a rare meeting of minds. If you were Skynet looking to travel back in time to do the most damage to AI safety, you might want to hit Asilomar in January 2017. I think that says something.

"It was just an incredible nexus of luminaries and industry captains, and just people from all kinds of fields. Some of my favorite speakers there were Jeff Sachs, the economist from Columbia, and Anthony Romero of the ACLU – these are people not traditionally involved in this AI conversation. Anca Dragan from Berkeley had a great piece on human-robot interaction. You and I spoke about this at the conference; I'd love to see more engagement with human-robot or human-computer interface issues. I think those will be big.

"Also, take Joseph Gordon-Levitt and his wife, Tasha, who's a technologist in her own right: they're great people, not people I would normally meet at a standard AI conference, but they're also important here. Some background: I've been working with military technology and people from DoD for close to 10 years now, and this theme keeps coming up. There's a difference between hard power and soft power. Hard power is things like sending in the military, sending in funding, and just really trying to exert influence over other nations.

"There's also soft power; soft power is the more persuasive, friendlier things you could do, for instance, increase tourism and increase goodwill to help bridge people together. I think that's what's great about having someone like Joseph Gordon-Levitt there, in that he represents Hollywood and can help steer AI globally. I've always thought that Hollywood was maybe America's greatest source of soft power: it's our greatest way of influencing other cultures, opening them up to our values, opening them up to the idea of democracy. We do this through movies. This is hard to tell when you're inside the US, but outside the US, Hollywood has such a profound impact, whether they realize it or not. Even inside the US, think about the effect of movies like Ex Machina, Her, Star Wars, 2001: A Space Odyssey, and many others on our national consciousness, on how we think about AI and robots. Hollywood – that is, storytelling – is one of our best and most effective weapons of change.

"The conference was also a good chance for me to catch up with old friends, people I've known for a long time – Wendell Wallach, Ryan Calo, and many others – we run in the same circles, but we don't meet up all that often. The conference has already sparked many new ideas and ways to collaborate. Now, I'm already starting to do that, just connecting with people I met at the conference, and hopefully projects and funding will materialize."

ARIEL: "That's awesome, I'm glad you liked it. I've noticed that for most of the people we've talked to, it's the people and the interactions at the conference that seem to be the big highlights, so that was pretty nice."

PATRICK: "There were a few times where I heard things that were just really surprising to me. As an ethicist, I'm not so much in touch with a lot of the technical details, so it was good to hear the technical details straight from the horse's mouth, from people on the frontlines of this. Also, a few things really stood out; for instance, Ray Kurzweil, when he was on that super-panel, he basically said, I'm paraphrasing, 'Look, even if we had perfect AI today, there would still be a whole load of problems. AI safety is not just a technical problem that can be solved with clever programming, but even if you have perfect AI, it's a social and political problem to guard against abuse of that power.'

"It isn't just about aligning values or working out the programming, but this is also very much a non-technical problem."

Q: Why did you decide to sign the Asilomar AI Principles document?

"Well, so I know in recent months, or in the past year, there have been various groups publishing their various principles, and those look like great efforts too. But conceptualizing these principles is still an emerging area. I think what Future of Life has done, as I would put it, is something like a meta-analysis of these various proposals. It seemed to me that you guys weren't just reinventing the wheel; you're not just putting out another set of principles like Stanford did, or IEEE, or the White House.

"What's needed now is a meta-analysis, someone to consolidate these principles and arrive at a best-of-breed set of principles. That's why I support it. I also think they are top-level, ambitious principles, and it's going to take work to clarify them, but at least at the top level, they seem headed in the right direction."

Q: As a non-AI researcher, why do you think AI researchers should weigh in on issues like those that were brought up at the conference and in the principles document?

"At the conference, I met Lord Martin Rees. Before I met him, and for a couple of years now, I've usually included a quote from him in my talks about technology ethics. It came from an op-ed he wrote in the Guardian about 10 years ago, where he was talking about the responsibilities that scientists have. He says, 'Scientists surely have a special responsibility. It is their ideas that form the basis of new technology. They should not be indifferent to the fruits of their ideas. They should forego experiments that are risky or unethical.'

"I think he gets it right, that scientists and engineers, they're responsible for creating these products that could have good uses and bad uses. It's not just causal responsibility, but it's also moral responsibility. If it weren't for them, these products and services and effects wouldn't have happened. In a real way, they are responsible for these outcomes, and that responsibility means that they need to be engaged as early as possible in steering their inventions or products or services in the right direction.

"You know, maybe this isn't a popular opinion in engineering, but a lot of engineers and scientists want to say, 'Look, we're just doing pure research here, and that's why we have the lawyers and the ethicists and other folks to sort it out.' I think that's right and wrong. There's something that works about a division of labor: it's efficient and you get to focus on your competitive advantage and your skillsets, but we can't be totally divorced from our responsibility. We can't fully hand off a responsibility or punt it to other people. I do think that technologists have a responsibility to weigh in. It's a moral responsibility.

"A lot of them don't do it, and that's why a gathering like the one you guys had is all too rare, where you had socially minded technologists who understood that their work is not just groundbreaking, but it could have some serious positive and negative impacts on society, and they're worried about that, which is good, because they should be. I see this as a natural move for technologists to feel responsibility and to be engaged, but unfortunately, not everyone sees it that way, and they're not aware of the limits of their design.

"That could get them in trouble if their programming decisions lead to some bad crash that the public is outraged by. They can't just say, 'Hey, we're not ethicists. We're just doing what the data says, or we're just doing what maximizes the good.' Well, that means you're doing ethics, implicitly. A lot of scientists and engineers, whether they know it or not, are already engaged in ethics and values. It's already baked into a lot of their work. You could bake ethics into design. A lot of people think technology is amoral or neutral, but I don't believe that. I think ethics can be baked into the design of something.

"In most cases, it's subtle, it might not make a difference; but in other cases, it's pretty clear. For instance, there have been health apps coming out of Silicon Valley that fail to track, say, women's periods. A health issue or body issue for half the population is just totally ignored because the man programming it didn't think about these use-cases. I think it's wrong, it's a myth, to say that technology is neutral. Sure, in most cases, it's too subtle to tell, but there's definitely ethics built into, or that can be built into, the design of technology."

Q: Explain what you think of the following principles:

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

“This sounds great in ā€˜principleā€™ but you need to work it out. For instance, it could be that thereā€™s this catastrophic risk thatā€™s going to affect everyone in the world ā€“ it could be AI or an asteroid or something, but itā€™s a risk that will affect everyone ā€“ but the probabilities are tiny, 0.000001 percent, letā€™s say. Now if you do an expected utility calculation, these large numbers are going to break the formula every time. There could be some AI risk thatā€™s truly catastrophic, but so remote that if you do an expected utility calculation, you might be misled by the numbers.

“I agree with it in general, but part of my issue with this particular phrasing is the word ā€˜commensurate.ā€™ Commensurate meaning an appropriate level that correlates to its severity. So I think how we define commensurate is going to be important. Are we looking at the probabilities? Are we looking at the level of damage? Or are we looking at expected utility? The different ways you look at risk might point you to different conclusions. Iā€™d be worried about that. We can imagine all sorts of catastrophic risks from AI or robotics or genetic engineering, but if the odds are really tiny, and you still want to stick with this expected utility framework, these large numbers might break the math. Itā€™s not always clear what the right way is to think about risk and a proper response to it.”

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

"Yeah, I think I generally agree with this research goal. Given the potential of AI to be misused or abused, it's important to have a specific positive goal in mind. Now again, I think where it might get hung up is what this word 'beneficial' means. If we're directing it towards beneficial intelligence, we've got to define our terms; we've got to define what beneficial means, and that to me isn't clear. It means different things to different people, and it's rare that you could benefit everybody.

"Most of the time, you might have to hurt or go against the interests of some groups in order to maximize benefits. Is that what we're talking about? If we are, then again, we're implicitly adopting this consequentialist framework. It's important to understand that, because if you don't know you're a consequentialist, then you don't know the limits of consequentialism, and there are important limits there. That means you don't really understand whether it's appropriate to use consequentialism in this area."

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

"I think building a cohesive culture of cooperation is going to help in a lot of things. It's going to help accelerate research and avoid a race, but the big problem I see for the AI community is that there is no AI community; it's fragmented, it's a Frankenstein-stitching together of various communities. You have programmers, engineers, roboticists; you have data scientists, and it's not even clear what a data scientist is. Are they people who work in statistics or economics, or are they engineers, are they programmers?

"Data science has an identity problem, and I think AI definitely has an identity problem. Compare, let's say, an AI programmer (or designer or whatever) with a civil engineer or an electrical engineer or architect or teacher or lawyer – these other fields have a cohesive professional identity. There's required professional training, there's required educational components, there's licensing requirements, there's certification requirements. You don't have that in software engineering or in AI. Anyone who can write code is effectively a programmer. You don't have to have graduated from high school, or you could have graduated from Oxford or Stanford; you could be working out of the basement in your mother's house, or you could be working at a fancy Google or Facebook lab.

"The profession is already so fragmented. There's no cohesive identity, and that's going to make it super challenging to create a cohesive culture that cooperates and trusts and is transparent, but it is a worthy goal. I think it's an immense challenge, especially for AI, because there are no set educational or professional requirements. That also means it's going to be really hard to impose or to set a professional code of ethics for the industry if there's no clear way of defining the industry. Architects, teachers, doctors, lawyers: they all have their professional codes of ethics, but that's easy because they have a well-delineated professional culture."

ARIEL: "Do you think some of that is just because these other professions are so much older? I mean, AI and computer science are relatively new."

PATRICK: "Yeah, I think that's part of it. Over time, we might see educational and professional standards imposed on programmers, but AI and programming, or at least AI and data science, also draw from different fields inherently. It's not just about programming, it's about learning algorithms, and that involves data. Once you get data, you have this interpretation problem, you have a statistics problem, and then once you worry about impacts, you have social, political, psychological issues to attend to. Just by its nature, AI in particular draws from such diverse fields that, number one, you have to have these diverse participants in order to have a comprehensive discussion, but also, number two, because they're so diverse, they're hard to pull together into a cohesive culture."

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

"I would lump race avoidance in with research culture. Yeah, it's probably good to avoid an arms race. Competition is good, and an arms race is bad, but how do you get people to cooperate to avoid an arms race? Well, you've got to develop the culture first, but developing the culture is hard because of the reasons I already talked about."

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

"Who can object to the safety one? Again, I think it's not clear that this principle recognizes that safety is not just a technical problem. I mentioned Ray Kurzweil and Sam Harris talking about this onstage; I think it was Sam Harris who said, 'Even if God dropped down perfect AI to us today, what does that mean? Does that really solve our problems and worries about AI? No. It could still be misused and abused in a number of ways.' When Stuart Russell talks about aligning AI to values, it seems to be this big, open question: 'Well, what are the right values?'

"If you just make AI that can align perfectly with whatever values you set it to, well, the problem is, people can have a range of values, and some of them are bad. Merely aligning AI to whatever value you specify, I think, is not good enough. It's a good start, it's a good big-picture goal to make AI safe, and the technical element is a big part of it; but again, I think safety also means policy and norm-setting."

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

"Shared benefit is interesting, because again, this is a principle that implies consequentialism, that we should think about ethics as satisfying the preferences or benefiting as many people as possible. That approach to ethics isn't always right. I mean, yeah, a lot of our decisions are based on consequentialism: we make policy decisions based on pros and cons and numbers, and we might make personal decisions based on that. Should I have the pizza or the hot dog? Which one's going to make me happier? Well, maybe I'm lactose intolerant, and maybe even though pizza's really yummy and makes me happy now, I've got to think of my happiness later on.

"Consequentialism often makes sense, so weighing these pros and cons makes sense, but that's not the only way of thinking about ethics. Consequentialism could fail you in many cases. For instance, consequentialism might green-light torturing or severely harming a small group of people if it gives rise to a net increase in overall happiness to the greater community. If you then look at things like the Bill of Rights, if you think that we have human rights or we have duties and obligations, these are things that aren't so much about quantifiable numbers. These are things that can't easily fit into a consequentialist framework.

"That's why I worry about the Research Goal Principle and the Shared Benefit Principle. They make sense, but they implicitly adopt a consequentialist framework, which, by the way, is very natural for engineers and technologists to use, since they're very numbers-oriented and tend to think of things in black and white and pros and cons. But ethics is often squishy. You deal with these squishy, abstract concepts like rights and duties and obligations, and it's hard to reduce those into algorithms or numbers that could be weighed and traded off."

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

"Yeah, this is tough. I wouldn't say that you must always have meaningful human control in everything you do. I mean, it depends on the decision, but I also think this gives rise to new challenges. I say this when I give my talks on robot car ethics: say, for instance, that you get into a bad car accident – there's a wreck in front of you, or something jumps out in front of you, and you just swerve reflexively into another car, or you swerve reflexively off onto the shoulder.

"Now, no one can blame you for that; it's not premeditated, there's no malice, there's no forethought, it's just a bad reaction. Now imagine an AI driver doing the exact same thing, and as reasonable as it may be for the AI to do the exact same thing, something feels different. The AI decision is scripted, it's programmed deliberately, or if it's learned from a neural net, it's already predetermined what the outcome is going to be. If an AI driver hurts somebody, this now seems to be a premeditated harm, and there's a big legal difference between an innocent accident where I harm someone and a premeditated injury.

"This is related to the idea of human control and responsibility. If you don't have human control, it could be unclear who's responsible for it, but the context matters. It really does depend on what kind of decisions we're talking about; that will help determine how much human control there needs to be."

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

"This might be the most ambitious of all the principles. It's good, but it's broad. I mean, it's good to have it down on paper, even though it's broad and ambitious, but this is going to be a hard one.

"The trick is to figure out what our common set of ethical values is. If we don't have a common set, then we have to make a judgment about whose ethics we should be following. Of course everyone's going to say, 'Well, you know, our ethics is the best, so follow us.' You're going to run into some problems here. More generally, since the elections I've been just really pessimistic about the prospect of building ethics into anything. If the rule of law is eroding, if the rule of law no longer matters, then what the hell are we talking about ethics for? I think we've got bigger problems in the next few years than this, given the general trend. But ethics and norms are still important. You don't need to have bright-line laws to influence behavior. Again, think about Hollywood.

"Norms and principles are much better than nothing, but if they are our primary line of defense, then we're going to be in for a rough ride. Principles and ethics alone aren't going to stop a lot of people from doing bad things. They could help forestall it, they could help postpone a disaster, but we need to see a lot more humanity and just social awareness and consciousness globally in order for us to really rein in this AI genie."

ARIEL: "What would your take on this have been if we were discussing this a year ago today?"

PATRICK: "I'd be more optimistic. I'd definitely be more optimistic. For instance, if you look at the United Nations – say, Heather Roff and Peter Asaro's work on killer robots – they've been making progress. It's slow, but they've been making real progress. Just in December, the United Nations finally made it official to convene some meetings to figure out if we've got to regulate killer robots.

"Internationally, I think we're making progress, but if the US is a technology leader in the world, and we are, what happens in the US is going to be important – it's going to set the tone for a lot of AI research around the world. Ethics and principles, all this comes top-down. If your company has a bad CEO, then naturally your employees are going to do bad things. It's worse when you talk about political leaders or leaders of state. So if you don't have ethical, moral leaders, then a lot of bad things are going to flow from that. Yeah, a year ago I think we'd be having a different conversation, unfortunately."

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.org on April 13, 2017.

