
Bart Selman Interview

Published: February 24, 2017
Author: Ariel Conn


The following is an interview with Bart Selman about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Selman is a Professor of Computer Science at Cornell University, a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and a Fellow of the American Association for the Advancement of Science (AAAS).

Q: From your perspective, what were the highlights of the conference?

“I think the overall highlight for me was to see how much progress has been made on the whole AI safety question and beneficial AI since the Puerto Rico meeting. It was really exciting to see the workshop program, where there was serious technical research into AI safety issues, and interesting progress. And at the conference I think what we saw was much more agreement among the participants and panelists about AI safety issues: their importance, the risks we have to address, and the responsibility of AI researchers and of AI companies that have large AI projects. So the progress in two years was really dramatic and very positive.”

Q: Why did you decide to sign the Asilomar AI Principles document?

“I think that if you look at the evolution since Puerto Rico, there’s a lot more agreement among the researchers that we have to think about these issues and that we have to have some general principles to guide research projects and development projects. I remember the lunchtime discussions about the principles – for most principles we would say, ‘well, of course this sounds good.’ There were some where we made modifications, but most of them sounded very reasonable, very good principles to have.

“Society as a whole has never dealt with these kinds of principles, and I always viewed it as, ‘AI is a fairly academic discipline, there’s no direct societal impact.’ As that is changing so rapidly now, we need AI researchers and companies to have a set of guidelines and principles to work by.”

Q: Why do you think AI researchers should weigh in on issues like those that were brought up at the conference and in the principles document?

“I think we need to weigh in because we are more aware of the deeper underlying technical issues. Recently I was at a different conference, on data science, where a speaker discussed the responsibility of data scientists on basic issues of privacy and bias, and all those kinds of things that can creep into machine learning systems. And an undergraduate student of ours asked at the end, ‘well, the responsibility must lie with – say I work for Google – the responsibility must lie with Google. Why would I care as an undergraduate student about these ethical issues?’ But then the speaker made a very good point. He said, ‘well, the person who writes the code, who develops the code, is often the only person who knows exactly what’s in the code, and knows exactly what is implemented in the machine learning system or the data mining system. So even though maybe in some abstract legal sense the employer will have final responsibility – financial or legal – it’s really the person building the system.’

“This was actually in the context of issues with the automation of drive-by-wire cars, and he showed an example of a piece of software that had been developed with all kinds of safety risks in it. But he said the only person who knew those were safety risks was the person who was building the system. So, he said, one very important reason for AI researchers and implementers to be involved is that they’re often the only ones who really know what goes into the code. And I thought that was a very good point.”

Q: Explain what you think of the following principles:

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“My take was that that’s a reasonable assumption. I think there’s quite a spectrum of opinions on when superhuman AI will emerge or how broad it will be – will it be of certain capabilities or of all capabilities – so there’s a lot of discussion about that and no consensus. But to assume that we will definitely reach it, or to assume that there are some limits on this kind of machine intelligence, either seems incorrect. So I thought it was completely reasonable to say we should not think this cannot be done. And regarding current progress, I should say we’ve seen various points in AI where there was very exciting progress and where it did not continue. So I’m not saying that the kind of progress we’re seeing in the last 2-3 years will definitely continue without interruption, but we should not assume that it won’t.”

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

“I think I also agreed with that one. I thought the discussions at the meetings were interesting. There are many different risks that society faces – climate change, inequality, and other kinds of risks – so advanced AI is definitely one big issue, and I think some group of people needs to study the problem. Maybe not every person on Earth should be concerned about it, but there should be a discussion among scientists about these issues and a plan – can you build safety guidelines, can you do value alignment work? What can you actually do to make sure that the developments are beneficial in the end?”

ARIEL: “So I have a side question. We’re asking about advanced AI, but even if we just continue at our current rate for a couple years, how profound of an effect do you think AI will have on life on Earth?”

BART: “I think the effect will be quite dramatic. This is another interesting point – sometimes AI scientists say, well, it might not be advanced AI that will do us in, but dumb AI. We are shifting responsibilities to machines, and in many ways that’s beneficial, but the fear is that these digital systems are being integrated into our society – be it automatic trading, or, as we saw in talks, decision making on parole issues and mortgage approvals. All kinds of systems are now being integrated into our society, and an interesting point is that they’re actually very limited in some sense. They’re autonomous and they’re somewhat intelligent, but actually also quite dumb in many ways.

“For example, they lack common sense, and that’s sort of the big thing. The example is always that the self-driving car has no idea it’s driving you anywhere – it doesn’t even know what driving is. We look at these systems and we think the car must have some idea of what it is to drive, but it actually doesn’t have any idea. That’s a bit of a short-term risk. Say the self-driving car comes at you but it doesn’t brake. We may assume, ‘well, of course it’s going to brake at the end – it’s just trying to scare me,’ while the car’s vision system may literally not see you, or may have decided that you are not really there, or something like that. Actually, if you look at videos of an accident that’s about to happen, people are so surprised that the car doesn’t hit the brakes at all, and that’s because the car works quite differently than humans do. So I think there is some short-term risk in that there are going to be systems that we misjudge – we actually think they’re smarter than they are. And I think that will actually go away when the machines become smarter, but for now…”

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

“Again, a very reasonable principle. And again, I refer to some of the discussions among AI scientists, who might differ in how big they think that risk is. I’m quite certain it’s not zero, and the impact could be very high. So even if these things are still far off and we’re not clear whether we’ll ever reach them, with a small probability of a very high consequence we should take these issues seriously. And again – not everybody, but the subcommunity should.”

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

“I liked that as a principle because I’d like for policymakers to start thinking about these issues. It’s incredibly difficult, I think, because of the putting together of data from so many different sources. We’ve seen examples of medical data that was anonymized and handed out, and with Yahoo, I think, data sets were released and researchers were quite convinced that they were anonymized, but in a very short time other researchers found ways to identify single individuals based on some obscure pattern of behavior that they could find in the data.

“So this whole question of whether we should have control of our own data – I like it as a principle. What will happen with it, hopefully, is that people will become aware that this is a real issue. And there’s no simple solution to it, because it’s really the combining of data sources that gives a lot of power. If Amazon knows all of your shopping behavior, it’s probably able to figure out what kind of disease you might have, just to give an example. So this is one of the things we have to find out how to manage in a reasonable way.”
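The combining-of-data-sources point can be made concrete with a small, purely illustrative sketch. Assuming pandas and entirely made-up records, it shows how an “anonymized” medical table can be linked back to named individuals through quasi-identifiers (ZIP code, birth year, gender) shared with a second, public dataset; all names, values, and column names below are hypothetical.

```python
import pandas as pd

# "Anonymized" medical records: names removed, but quasi-identifiers kept.
# (All data here is invented for illustration.)
medical = pd.DataFrame({
    "zip": ["14850", "14850", "10027"],
    "birth_year": [1961, 1984, 1975],
    "gender": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A separate public dataset (think of a voter roll) that does include names.
public_records = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip": ["14850", "14850", "10027"],
    "birth_year": [1961, 1984, 1975],
    "gender": ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses,
# even though the medical table itself contained no names at all.
reidentified = public_records.merge(medical, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

In this toy case every row is uniquely pinned down by just three attributes, which is exactly the kind of "obscure pattern" that real re-identification studies have exploited on supposedly anonymized releases.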

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

“Yes, again, I think that’s a very good principle to state explicitly. It’s something that AI researchers have not worried about much before. Stuart Russell is probably the first one who really brought this to the forefront. I think there’s general agreement that we have to really pay attention to that issue of value alignment. How will we achieve it? Well, there are different approaches and not everybody agrees on that. I think we can actually learn from the ethics and moral philosophy communities – fields where people have thought for centuries about ethics and moral behavior will become relevant, because these are deep issues. And it’s just great to see that now we’re actually saying, let’s try to build systems that have moral principles, even if we don’t quite know what they are.”

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

“So it should be avoided. Again, I think this is a little similar to the privacy issue, because policymakers should start to think about it, and there’s always a difference between whether it ‘should’ be avoided and whether it ‘can’ be avoided – the latter may be a much harder question. But we need to get policymakers aware of it and get the same kinds of discussions going as there were around atomic weapons or biological weapons, where people actually started to look at the tradeoffs and the risks of an arms race. That discussion has to be had, and it may actually bring people together in a positive way. Countries could get together and say this is not a good development and we should limit it and avoid it. So in bringing it out as a principle, I think the main value is that we need to have the discussion as a society and with other countries.”

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

“I think that’s a good principle, and again, I think Silicon Valley people are already starting to think about this issue. I think economic disparity is accelerating, and I would say more generally that it’s technology that is behind that – not just AI, but AI is a big factor in it. I think it’s almost a moral principle that we should share benefits among more people in society. I think it’s now down to 8 people who have as much wealth as half of humanity. These are incredible numbers, and of course if you look at that list, it’s often technology pioneers on it. So we have to go into a mode where we are first educating people about what’s causing this inequality, acknowledging that technology is part of the cause, and then society has to decide how to proceed. Again, it’s more of a political and societal question how to proceed. From my personal perspective I would like to see the benefits shared. Some people might argue against it, but at least I would like to see a discussion about it at a societal level.”

ARIEL: “One thing I’m getting from you is that these principles seem like a good way to start the discussion.”

BART: “Yes. A meeting like the Puerto Rico meeting or the Asilomar meeting is largely among academics and technology pioneers, and I think it’s our responsibility to get these issues out to a wider audience and educate the public. It’s not an easy thing to do. During the recent election, technology and unemployment were barely mentioned, and people were barely aware of the issue. The same thing with data privacy issues – people are not quite aware of them.

“There are good reasons why we share data, and there are good reasons why we’ll benefit to a large extent from having shared data, but it is also good to have discussions about to what extent people want that and to what extent they want to put limits on some of these capabilities. Ultimately, though, that is something policymakers and the public have to decide.

“As a technology community, we should start making people aware of these issues. One of the things that struck me at the meeting is that someone gave the example that we may not be far off from being able to generate video of someone saying something. We already see it with fake news – we can generate text that pretty much sounds like a certain person – and we will start generating videos that sound like some person. We have to educate people about that. People will start wondering, ‘well, is this real or not?’ They have to at least be aware that it could be done.

“So there are some dramatic examples. The other thing I remember is that somebody talked about measuring pupil dilation – basically a neural net learns from physiological responses, which you can detect with these high-precision cameras. Then you can find out whether somebody is lying, or whether somebody likes you when they talk to you, and do the kinds of things that we now assume nobody can know. But that may not be so far off anymore – in 5-10 years you might be in an interview and the person on the screen sees exactly what you’re thinking. So these are major changes. We shouldn’t scare everybody, but I think we should tell people, give them some idea of what’s happening, and make clear that we have to think about it.”

Q: If all goes well and we’re able to create an advanced, beneficial AI, what will the world look like?

“I think it would be a good world. Automation will replace a lot of work, but work that we might not actually enjoy doing so much. Hopefully we’ll find – and I’m confident we can find – other ways for people to feel useful and to have fulfilling lives. There may be more lives of leisure, of creativity, of arts, or even science – things that people love doing. So I think if it’s managed well it can be hugely beneficial. We already see how our capabilities are extended with smartphones and Google searches and the cloud, so I think people in general enjoy the new capabilities that we have. It’s just that the process has to be managed carefully so that people who want to do harm cannot take advantage of it. But overall I can see it working out very nicely: diseases are cured, and all kinds of misery could potentially be eliminated with much better AI – even policymaking: smarter policymaking, smarter decision-making, smarter managing of large numbers of people and systems. If you think of the upside, there are great upsides. It’s not something we should stop. It’s something we should embrace, but in a well-thought-out manner.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.org on February 24, 2017.

