
Toby Walsh Interview

Published: January 27, 2017
Author: Ariel Conn

The following is an interview with Toby Walsh about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Walsh is a Guest Professor at the Technical University of Berlin, Professor of Artificial Intelligence at the University of New South Wales, and leads the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research.

Q: From your perspective, what were the highlights of the conference?

“I think there’s a real community starting to build, and some consensus around some of the interesting questions. These are really important topics, and it’s good to see that a lot of people, both within and on the fringes of AI, are starting to take these issues seriously. It’s a very pleasant change from when people would say AI would never succeed and criticized us for even attempting it. Now people say, well, what if you succeed – we may have to worry about this. It’s quite pleasant to be on the other side of people’s criticisms.”

Q: Why did you decide to sign the Asilomar AI Principles document?

“I don’t think it’s perfect, but broadly speaking it’s hard to disagree with many of the principles. As a scientist I think we have a responsibility for the beneficial outcomes of our research, and many of the principles are ones that we really, as scientists, should be worrying about. I do think, though, that there’s a lot to be said for less is more … I wonder if perhaps we’ve got a few too many principles down. Some of them are principles I don’t disagree with, but they apply to research in general, or to anything that science does, not just AI. Perhaps we should be focusing more on the ones that are particular to AI.”

Q: Why do you think AI researchers should weigh in on issues like those that were brought up at the conference and in the principles document?

“Because as a scientist you have a responsibility, and I think it’s particularly challenging in an area like AI because the technology can always be used for good or for bad. But that’s true of almost all technologies; it’s hard to think of a technology that doesn’t have both good and bad uses. It seems clear that AI is going to have a large impact upon society, and it’s going to happen relatively quickly – certainly when compared to the Industrial Revolution. In the Industrial Revolution you had to build machines, you had to invest money in large steam engines, and it didn’t scale as quickly as computing technologies do. So this next revolution is likely to happen much more quickly than the Industrial Revolution, which took maybe 50 years to start having real impact. This one, when it gets going, will perhaps take less time than that.”

ARIEL: “Do you think it’s already gotten going?”

TOBY: “Well, you can already see the shoots of it happening in things like autonomous cars, which are less than a decade away. There are dozens of trials starting up this year around the planet – trials of autonomous taxis, autonomous trucks, autonomous cars. So if you’re a truck driver or a taxi driver, you have been given less than a decade’s warning that your job is at severe risk. So I do think there are going to be interesting consequences, and that’s just in the short term. In the long term there are going to be much more profound changes. But we should really start thinking about some of these issues quite soon, because there isn’t a lot of time to solve some of them – if you’re a taxi driver or a truck driver, for sure. Society tends to change very slowly; it’s always catching up with technology.”

Q: Explain what you think of the following principles:

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
“I think that’s a very laudable principle. It addresses a real fundamental problem facing our society today, which is increasing inequality and the fact that prosperity is not being shared around. This is fracturing our societies, and we see it in many places – in Brexit, in Trump – a lot of dissatisfaction within our societies. So it’s something that we really have to fundamentally address. But again, this doesn’t seem to me something that’s really particular to AI; you could say it about most technologies. Many scientists like myself are funded by the public purse for the public good, and so we have a responsibility to ensure that the technologies we work on do benefit all. That doesn’t seem special about AI, although AI is going to amplify some of these increasing inequalities. If it takes away people’s jobs and leaves wealth only in the hands of those people owning the robots, then that’s going to exacerbate trends that are already happening.”

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“One reason that I got involved in these discussions is that there are some topics I think are very relevant today, and one of them is the arms race that’s happening amongst militaries around the world already, today. This is going to be very destabilizing. It’s going to upset the current world order when people get their hands on these sorts of technologies. And it’s actually stupid AI that they’re going to be fielding in this arms race to begin with, which is quite worrying – technologies that aren’t going to be able to distinguish between combatants and civilians, that aren’t able to act in accordance with international humanitarian law, and that will be used by despots and terrorists, and hacked to behave in ways that are completely undesirable. And that’s something that’s happening today. You only have to watch the recent segment on 60 Minutes to see the terrifying swarms of robot UAVs that the American military is now experimenting with.”

ARIEL: “I talked to Yoshua Bengio a couple of days ago, too, and one of the things he commented on about the conference was that our big focus seems to be on value alignment – on us creating AI that doesn’t do what we want it to do – but he said another concern he’s realizing is the idea of people misusing AI. What’s your take on that? It seems like something that connects to what you were saying here about the arms race.”

TOBY: “I’m very worried that people will misuse AI. These technologies can be used for good or for evil, and we literally get to choose which it’s going to be. If we go ahead unchecked, I think it’s clear that there are certain groups and individuals who will use it for evil and not for good. And to go back to his observation that perhaps we’re focusing too much on longer-term things like value alignment: I actually suspect that people will start to realize that value alignment is a problem we already have today – that we’re already building systems that implicitly or explicitly have values, or display values, that are not aligned with ours. We can already see this. The Tay chatbot didn’t share our community values about racism or sexism or misogyny, or the freedom of speech that we would like to have. The COMPAS program, used in sentencing recommendations in around 20 US states, doesn’t share our values about racism – it discriminates against black people.
“One of the impressions I came away from the conference with is that people think issues like value alignment are long-term research issues. They’re not just long-term research issues; they’re problems that already plague us. We don’t have solutions for them yet, and they’re already impacting upon people’s lives. That’s a value alignment problem we face today. It’s not a problem for superintelligent systems; it’s a problem for the stupid AI systems we build today. Of course, if we build superintelligent AI systems we’ve only amplified the challenges, but we have many of those challenges today. Actually, I think that’s a hopeful observation, because if we can solve value alignment for these simple systems – for narrowly focused domains, for narrowly focused values – then hopefully the solutions we come up with will be components of value alignment for more intelligent systems in the future. Hopefully we’ll get our training wheels on the primitive AI systems we have today, and will have tomorrow, and that will help us solve the problem for the much more intelligent systems decades or centuries away.”

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“First of all, I don’t know why we have the word ‘highly’. I think any autonomous system, even a lowly autonomous system, should be aligned with human values. I’d wordsmith away the ‘highly’. Other than that, I think we have to worry about enforcing that principle today. As I’ve said, I think that will be helpful in solving the more challenging value alignment problem as systems get more sophisticated.”

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“Yes, and again this is one of those principles where you could put any society-changing technology in place of advanced AI. So it’s not a principle that’s particular to AI; it’s true of any groundbreaking technology. It would be true of the steam engine. In some sense it’s true of social media, and we’ve failed at that one. It could be true of the Internet, but we failed at planning that well. It could be true of fire too, but we failed on that one as well and used it for war. But to get back to the observation that some of these principles are not particular to AI: once you realize that AI is going to be groundbreaking, then everything that should apply to any groundbreaking technology should apply to AI.”

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“Yes, this is a great one, and actually I’m really surprised how little discussion we have around AI and privacy. I thought there was going to be much more fallout from Snowden and some of the revelations that happened. AI, of course, is an enabling technology here: if you’re collecting all of this data, the only way to make sense of it is to use AI. So I’ve been surprised that there hasn’t been more discussion and more concern amongst the public around these sorts of issues.”

Q: If all goes well and we’re able to create an advanced, beneficial AI, what will the world look like?

“I always say to people: OK, there are all of these risks and challenges that we have to solve, but this technology is also about our only hope of solving not just these risks but all of the other problems facing the planet – global warming, the global financial crisis, global terrorism, the global refugee problem. If our children are going to have better lives than ours, this is about the only technology in play to solve them.
“If we follow the good path, the world will be a lot happier. But we get to choose. There are good paths and bad paths to be followed, and we literally get to choose which path we follow. And it doesn’t even require us to get to artificial general intelligence: even if we continue to only be able to build focused, specialized AIs, it’s still the only hope for solving the problems facing society. The last seventy-five years of economic growth have largely come from information technology. There are a few other technologies in play today, like biotech and nanotech, but a large part of our economic prosperity is going to come from IT. The world is getting more digital, and so if our children are going to live lives as good as ours, that’s going to come from technology, and largely from IT. Sometimes we forget the immense benefits: the fact that most of us in the first world live such comfortable lives, and the fact that many people in the third world are being lifted out of poverty. Despite increasing inequality, the third world is also getting better, and technology – a lot of it IT – has brought that about. So it’s our only hope for the future.”

Q: Is there anything else that you wanted to add that you think is important?

“I guess I wanted to say one more thing about the principles, about the idea that there are perhaps too many of them. One reason I prefer to have fewer principles is that a few simple, very general principles are more likely to endure. The Founding Fathers couldn’t foresee the future and all of the challenges that would face the US going forward, but by laying down some fundamental general principles they had more hope that those principles would still be applicable in 50, 100, 200 years’ time. So I just worry that some of these principles are a little too specific to the things we can see in front of us today, and that other issues will come up.
“Of course, it’s just a start. And to continue the historical analogy: even with the US Constitution, there have been a number of amendments made to it, and amendments made to the amendments. It’s always a work in progress.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.org on January 27, 2017.
