
Dan Weld Interview

Published: January 29, 2017
Author: Ariel Conn


The following is an interview with Dan Weld about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Weld is Professor of Computer Science & Engineering and Entrepreneurial Faculty Fellow at the University of Washington.

Q: From your perspective what were the highlights of the conference?

“One of the highlights was having the chance to interact with such a diverse group of people, including economists and lawyers as well as the technical folks. I also really enjoyed Yann LeCun’s talk, because I hadn’t previously heard his vision for taking a deep neural-network architecture and extending it to learn full agent capabilities.”

Q: Why did you choose to sign the AI principles that emerged from discussions at the conference?

“To be honest, I was a little bit torn, because I had concerns about the wording of many of the principles. Furthermore, some of the proposed principles seemed much more important than others. As a result, the current set was a bit unsatisfying, since it doesn’t reflect the prioritization that I think is important. Overall, however, I thought that the spirit underlying the set of principles was right on, and it was important to move forward with them – even if imperfect.
“One other comment – I should note that many of the principles hold, in my opinion, even if you replace ‘artificial intelligence’ with any other advanced technology – biotech or big data or pesticides or anything, really. And so specifying ‘AI’ implicitly suggests that the concern is much bigger for AI than it is for these other technologies. However, for many of the principles I don’t think that’s true.”

Q: Why do you think that AI researchers should weigh in on such issues as opposed to simply doing technical work?

“I think that this type of advocacy is important for any scientist who’s working on any technology, not just for AI researchers. Since we understand the technology better than many other people, we have one important perspective to bring to the table. And so it’s incumbent on us to take that seriously.”

Q: Explain what you think of the following principles:

Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
“AI is having incredible successes and becoming widely deployed. But this success also leads to a big challenge – its impending potential to increase productivity to the point where many people may lose their jobs. As a result, AI is likely to dramatically increase income disparity, perhaps more so than other technologies that have come about recently. If a significant percentage of the populace loses employment, that’s going to create severe problems, right? We need to be thinking about ways to cope with these issues, very seriously and soon. I actually wrote an editorial about this issue.”

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“I fervently hope we don’t see an arms race in lethal autonomous weapons. That said, this principle bothered me, because it doesn’t seem to have any operational form. Specifically, an arms race is a dynamic phenomenon that happens when you’ve got multiple agents interacting. It takes two people to race. So whose fault is it if there is a race? I’m worried that both participants will point a finger at the other and say, ‘Hey, I’m not racing! Let’s not have a race, but I’m going to make my weapons more accurate and we can avoid a race if you just relax.’ So what force does the principle have? That said, I think any kind of arms race is dangerous, whether or not AI is involved.”

Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“I agree! As a scientist, I’m against making strong or unjustified assumptions about anything, so of course I agree. Yet this principle bothers me a bit, because it seems to be implicitly saying that there is an immediate danger that AI is going to become superhumanly, generally intelligent very soon, and we need to worry about this issue. This assertion, which was held by a number of people at the workshop, concerns me because I think it’s a distraction from what are likely to be much bigger, more important, nearer-term, potentially devastating problems. I’m much more worried about job loss and the need for some kind of guaranteed healthcare, education and basic income than I am about Skynet. And I’m much more worried about some terrorist taking an AI system and trying to program it to kill all Americans than I am about an AI system suddenly waking up and deciding that it should do that on its own.”

ARIEL: “That was along the lines of something Yoshua Bengio talked about, which was that a lot of the conference ended up focusing on how AI design could go wrong and that we need to watch out for that, but he’s also worried about misuse, which sounds like that would be the terrorist stuff that you’re worried about.”

DAN: “Yes, that’s another way of saying it, and I think that’s a much bigger, more immediate concern. I have really nothing against what Nick and Stuart are talking about, I just think other problems are much more urgent.”

Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
“How could I disagree? Should we ignore the risks of any technology and not take precautions? Of course not. So I’m happy to endorse this one. But it did make me uneasy, because there is again an implicit premise that AI systems have a significant probability of posing an existential risk. I don’t believe that this is something we should be spending very much time worrying about. Why not? Because such worries distract us from more important problems, such as employment (which I mentioned above).
“Can I insert something here? There is a point that I wanted to make during the workshop but never really found the right time. At the workshop there was a lot of discussion about superhuman-level artificial general intelligence, but I think what’s going to happen is – long before we get superhuman AGI – we’re going to get superhuman artificial *specific* intelligence. Indeed, we already have! Computers multiply numbers much faster than people can. They play chess, Jeopardy! and Go much better than people can. Computers detect credit card fraud much better than people can. So there’s an increasing number of things that AI computers can do better than people. It probably won’t be very long until computers can drive better than most people as well. Radiology and many types of medical diagnosis may come soon as well.
“These narrower kinds of intelligence are going to be at the superhuman level long before a *general* intelligence is developed, and there are many challenges that accompany these more narrowly described intelligences, even ignoring the fact that maybe someday in the distant future AI systems will be able to build other AI systems. So one thing that I thought was missing from the conference was more discussion of these nearer-term risks.
“One technology, for example, that I wish had been discussed more is explainable machine learning. Since machine learning is at the core of pretty much every AI success story, it’s really important for us to be able to understand *what* it is that the machine learned. And, of course, with deep neural networks it is notoriously difficult to understand what they learned. I think it’s really important for us to develop techniques so machines can explain what they learned so humans can validate that understanding. For example, an explanation capability will be essential to ensure that a robot has correctly induced our utility function, before it can be trusted with minimal supervision. Of course, we’ll need explanations before we can trust an AGI, but we’ll need this capability long before we achieve general intelligence, as we deploy much more limited intelligent systems. For example, if a medical expert system recommends a treatment, we want to be able to ask ‘Why?’”
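For a concrete sense of what such a ‘Why?’ capability can look like in the simplest case, the sketch below fits a shallow decision tree to a standard diagnostic dataset and prints the feature tests along the decision path for one case. This is an illustration only, not a technique discussed in the interview; the dataset, model choice, and `explain` helper are assumptions made for the example, and deep networks remain far harder to explain than a small tree like this.

```python
# Minimal sketch of an interpretable "why?" for a prediction (illustrative only;
# the model, dataset, and explain() helper are assumptions for this example).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# A shallow tree keeps the explanation short enough for a person to read.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(model, x, feature_names):
    """Return the human-readable tests along the decision path for one example."""
    tree = model.tree_
    path = model.decision_path(x.reshape(1, -1))
    rules = []
    for node in path.indices:
        if tree.children_left[node] == tree.children_right[node]:
            continue  # leaf node: no feature test applied here
        feat, thresh = tree.feature[node], tree.threshold[node]
        op = "<=" if x[feat] <= thresh else ">"
        rules.append(f"{feature_names[feat]} {op} {thresh:.2f}")
    return rules

case = X[0]
print("Prediction:", data.target_names[model.predict(case.reshape(1, -1))[0]])
print("Because:", "; ".join(explain(model, case, data.feature_names)))
```

A real system would need explanations at different levels of detail and support for follow-up questions, which is exactly the harder research problem Weld points to below.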

ARIEL: “So do you think that’s something that is technically possible? That’s something that I’ve heard other people comment on – when we bring up issues of transparency or explainability they say, well this won’t be technically possible.”

DAN: “I think the degree to which we can make our systems explainable and transparent is still an open question, but I do think it’s possible. In general, any explanation is built atop a number of simplifying assumptions. That’s what makes it comprehensible. And there’s a tricky judgement question about which simplifying assumptions are OK for me to make when I’m trying to explain something to you. Different audiences want different levels of detail, and the listener’s objectives and interests also affect whether an explanation is appropriate. Furthermore, an explanation shouldn’t be one-shot; the AI system needs to answer follow-up questions as well. So there are many challenges there, and that’s why I had hoped that they would get more attention at the workshop.”

ARIEL: “One of the things we tend to focus on at FLI is the idea of existential risks. Do you foresee the possibility that some of these superhuman narrow AI could also become existential risks?”

DAN: “I’m less concerned by existential risks than by catastrophic risks. And narrow AI systems, foolishly deployed, could be catastrophic. I think the immediate risk is less a function of the intelligence of the system than of the system’s autonomy, specifically the power of its effectors and the type of constraints on its behavior. Knight Capital’s automated trading system is much less intelligent than Google DeepMind’s AlphaGo, but the former lost $440 million in just forty-five minutes. AlphaGo hasn’t and can’t hurt anyone. If we deploy autonomous systems with powerful effectors, we had better have constraints on their behavior regardless of their intelligence. But specifying these constraints is extremely hard, leading to deep questions about utility alignment. I think we need to solve these challenges well before we have AGI. In summary, I think that these issues are important even short of existential risk. … And don’t get me wrong – I think it’s important to have some people thinking about problems surrounding AGI; I applaud supporting that research. But I do worry that it distracts us from some other situations which seem like they’re going to hit us much sooner and potentially cause calamitous harm.
“That said, besides being a challenge in their own right, these superhuman narrow AI systems can be a significant counterbalance to any AGI that gets introduced. See for example, the proposals made by Oren Etzioni about AI guardians – AI systems to monitor other AI systems.”
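One way to picture such constraints and guardians, purely as a hypothetical sketch rather than anything proposed by Weld or Etzioni: a separate monitor vets every action an autonomous agent proposes before it reaches the agent’s effectors, enforcing hard limits that hold no matter how capable the underlying policy is. The Order and Guardian names and the specific thresholds below are invented for the illustration.

```python
# Hypothetical sketch of a "guardian" layer: hard behavioral limits enforced
# outside the agent, regardless of how intelligent its trading policy is.
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int     # positive = buy, negative = sell
    price: float

class Guardian:
    """Independent monitor that approves or blocks proposed actions."""

    def __init__(self, max_order_value: float, max_daily_loss: float):
        self.max_order_value = max_order_value
        self.max_daily_loss = max_daily_loss
        self.realized_loss = 0.0

    def approve(self, order: Order) -> bool:
        # Block any single order whose notional value exceeds the hard cap.
        if abs(order.quantity) * order.price > self.max_order_value:
            return False
        # Halt trading entirely once cumulative losses cross the kill-switch line.
        if self.realized_loss >= self.max_daily_loss:
            return False
        return True

    def record_loss(self, amount: float) -> None:
        self.realized_loss += max(amount, 0.0)

# Whatever the trading policy proposes, only approved orders reach the market.
guardian = Guardian(max_order_value=1_000_000, max_daily_loss=250_000)
proposed = Order(symbol="XYZ", quantity=500_000, price=10.0)  # $5M notional
print("Executed" if guardian.approve(proposed) else "Blocked by guardian")
```

The hard part Weld highlights is not writing such checks but deciding what they should be – specifying constraints that rule out catastrophes without also ruling out the behavior we actually want.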

Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“I support that principle very strongly! I’m really quite worried about the loss of privacy. The number of sensors is increasing, and when their data is combined with advanced machine learning, there are few limits to what companies and governments can learn about us. Now is the time to insist on the ability to control our own data.”

Q: Assuming all goes well, what do you think a world with advanced beneficial AI would look like? What are you striving for with your AI work?

“It’s tricky predicting the future, but there are myriad ways that AI can improve our lives. In the near term I see greater prosperity and reduced mortality due to things like highway accidents and medical errors, where there’s a huge loss of life today.
“In the longer term, I’m excited to create machines that can do the work that is dangerous or that people don’t find fulfilling. This should lower the costs of all services and let people be happier… by doing the things that humans do best – most of which involve social and interpersonal interaction. By automating rote work, people can focus on creative and community-oriented activities. Artificial Intelligence and robotics should provide enough prosperity for everyone to live comfortably – as long as we find a way to distribute the resulting wealth equitably.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles


