
Susan Craw Interview

Published: July 20, 2017
Author: Ariel Conn


The following is an interview with Susan Craw about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Craw is a Research Professor at Robert Gordon University in Aberdeen, Scotland. Her research in artificial intelligence develops innovative data/text/web mining technologies to discover knowledge to embed in case-based reasoning systems, recommender systems, and other intelligent information systems.

Q. Explain what you think of the following principles:

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

“Yes, I agree with that. I think it’s a little strange the way it’s worded, because of ‘undirected.’ It might even be better the other way around, which is that it would be better to create beneficial research, because that’s a more well-defined thing. But yes, certainly many researchers are working on things that they believe are going to do good. For example, making companies more efficient, or helping people engage with others and engage with information, or just assisting them in some way in finding information that they’d like to be able to access.”

4) Research Culture: A culture of cooperation, trust and transparency should be fostered among researchers and developers of AI.

“That would be a lovely principle to have. It can perhaps work better in universities, where there is not the same idea of competitive advantage as in industry. So, I suppose I can unpack that and say: transparency I’m very much in favor of, because I think it’s really important that AI systems are able to explain what they’re doing and are able to be inspected as to why they’ve come up with a particular solution or recommendation.

“And cooperation and trust among researchers… well without cooperation none of us would get anywhere, because we don’t do things in isolation. And so I suspect this idea of research culture isn’t just true of AI. You’d like it to be true of many subjects that people study. Trusting that people have good governance of their research, and what they say they’ve done is a true reflection of what they are actually working on and have achieved.”

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner cutting on safety standards.

“It’s quite hard to cooperate, especially if you’re trying to race for the product, and I think it’s going to be quite difficult to police that, except, I suppose, by people accepting the principle. For me, safety standards are paramount and so active cooperation to avoid corner cutting in this area is even more important. But that will really depend on who’s in this space with you.”

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“I suppose this one is more perhaps thinking of robots. You don’t want your robots running amok as they learn more or become devious, or something. But I guess it’s true of AI systems as well, software systems. And it is linked to ‘transparency.’ Maybe ‘verifiably so’ would be possible with systems if they were a bit more transparent about how they were doing things.”

14) Shared Benefit: AI technology should benefit and empower as many people as possible.

“That’s definitely a yes. But it is AI technologies, plural, taken as a whole. Rather than saying that a particular technology should benefit lots of people, it’s that the different technologies should benefit and empower people.”

ARIEL: “So, in general, as long as we have this broad range of AI technology and it’s benefitting people, whether one or two individual technologies benefit everyone is less important? Is that how you’re viewing that?”

SUSAN: “Yes, because, after all, AI technologies can benefit you in your work because they make you more efficient or less likely to make a mistake. And then there are all the social AI technologies, where you are being helped to do things, and have social engagement with others and with information – both regulatory information and social information.”

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems to accomplish human-chosen objectives.

“I think this is hugely important, because otherwise you’ll have systems wanting to do things for you that you don’t necessarily want them to do, or situations where you don’t agree with the way that systems are doing something.

“In a meeting my research group had recently, I was shown the new Google Inbox for Gmail, which is designed to organize your emails for you. And I’m not sure it’s something that I shall be adopting. But on the other hand, when you are using Google Maps and you find it knows about the hotel that you’re staying in for a trip, then that’s convenient, although it’s kind of scary that it’s able to work out all those things for you. It knows much more about you than you would perhaps believe. Instead of not knowing what the system can do, I’d much prefer to set things up and say, ‘I want you to do this for me.’”

ARIEL: “And then how do you feel about situations where safety is an issue? Even today, we have issues of pilots not necessarily flying the planes as well as the automation in the planes can. For me, in that case, it seems better to let the automated system take over from the human if necessary. But most of the time that’s not what I would want. Where do you draw the line?”

SUSAN: “With that one you would almost want it to be the other way around: by default, the automated system is in control, but the pilot could take over if necessary. And the same is true of autonomous cars. I was hearing something on the radio this morning in the UK, which was saying that if you keep the driver there, then the last thing that he wants to happen is that the automated system can’t cope with something, because he’s not likely to be paying that much attention to what the problem is.”

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

“Yes, I agree with the idea of ‘shared ethical ideals.’ The concern is a rogue state developing superintelligence, and whether that could be picked up as unethical behavior. It would then turn AI into something more like nuclear power – it could be a reason for taking action against a state or an organization.”

ARIEL: “Do you worry about that happening, or do you think, for the most part, if we develop superintelligence it will be safe?”

SUSAN: “I suppose I do worry about it, because rogue nuclear states do exist. So it is possible, but I think it needs a lot of collaboration and sharing to develop superintelligence. There could be surveillance of what’s happening in countries, and the development of things at that level could be ascertained. If you had a superintelligence arms race in the same way as the nuclear arms race, then the countries and organizations that are acting well would be able to keep up, because there are more of them and there’s more ability to share. That would be my hope.”

ARIEL: “Is there anything else you want to add about the principles, or is there anything we didn’t cover that you think is important that you wanted to comment on?”

SUSAN: “Yes, I read something in someone’s commentary on this, which asked: was it wise to have so many principles? I think I might agree with that, but on the other hand, it seems that they are quite detailed principles, and as such you need to have many of them. And as our discussion has shown, there are different interpretations of the principles depending on your interests and what you associate with certain words.”

ARIEL: “That’s one of the big reasons we wanted to do these interviews and try to get a lot of people talking about their interpretations. We view these principles as a starting point.”

SUSAN: “I actually have taken a big interest in this because I was at IJCAI in 2015 in Buenos Aires, where a lot of the discussions outside the actual talks were on this particular topic. And there was a panel because it was just when Elon Musk and various people had talked about the existential threat from AI. So it was very lovely to see the AI community jumping into action, and saying, ‘we haven’t made our voice heard enough, and we haven’t really talked about this, and we certainly haven’t talked about this in a way that people outside our close community can hear us.’ So this is just another way of promoting these ideas, and I think this is hugely important, because I don’t think the AI community particularly publicizes its views on these issues.”


This content was first published at futureoflife.org on July 20, 2017.
