
Susan Schneider Interview

Published: April 19, 2017
Author: Ariel Conn


The following is an interview with Susan Schneider about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Schneider is a philosopher and cognitive scientist at the University of Connecticut, YHouse (NY) and the Institute for Advanced Study in Princeton, NJ.

Q. Explain what you think of the following principles:

4) Research Culture: A culture of cooperation, trust and transparency should be fostered among researchers and developers of AI.

“This is a nice ideal, but unfortunately there may be organizations, including governments, that don’t follow principles of transparency and cooperation. Still, it is important that we set forth the guidelines, and aim to set norms that others feel they need to follow.”

ARIEL: “And do you have thoughts on trying to get people more involved, who might resist something like that culture?”

SUSAN: “Concerning those who might resist the cultural norm of cooperation and transparency, in the domestic case, regulatory agencies may be useful. The international case is difficult; I am most concerned about the use of AI-based autonomous weapons. I’m also greatly concerned with the possible strategic use of superintelligent AI for warfare, even if they aren’t technically “autonomous weapons.” Global bans are difficult to enforce, and I doubt there is even sufficient support in the US government for a ban (or even major restrictions) on AI-based autonomous weapons, for instance. And secrecy is often key to the success of weapons and strategic programs. But you asked how to best involve humans: Ironically, when it comes to superintelligence, enhancing certain humans so that they can understand (and compete with) the complex processing of a superintelligence might be the most useful way of getting humans involved! But I think the efforts of FLI, in publicizing the Asilomar Principles and holding meetings, are potentially very useful. Calling attention to AI safety is very important.”

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner cutting on safety standards.

“Cooperation is very important. The problem is going to be countries or corporations that have a stake in secrecy. To go back to the case of warfare, I worry that we are entering an AI arms race. If superintelligent AI is the result of this race, it could pose an existential risk to humanity. As Elon Musk, Bill Gates, Stephen Hawking, Nick Bostrom and others have pointed out, it would be difficult to control an AI that is smarter than humans in every respect. I’m not sure the safeguards being discussed will work (kill switches, boxing it in, and so on), although they are better than nothing.”

ARIEL: “I think it’s natural to think about the weapons as an obvious issue, but I also worry about just economic forces encouraging companies to make a profit. And the best way to make a profit is to be the first person to create the new product. Do you have concerns about that at all? Or do you think it’s more weapons that we have an issue with?”

SUSAN: “This is a major concern as well. We certainly don’t want companies cutting corners and not developing safety standards. In the US, we have the FDA regulating pharmaceutical drugs before they go to consumers. We need regulatory practices for putting AIs into the market. If the product is a brain enhancement device, for instance, what protects the privacy of one’s thoughts, or firewalls the brain from a computer virus? Certain regulatory tasks may fall under agencies that now deal with the safety of medical devices. Hacking will take on a whole new dimension – brain hacking! (This is very different from the fun consciousness hacking going on in Silicon Valley right now!) And what about the potential abuse of the robots that serve us? After all, could a machine be conscious? (I pursue this issue in a recent Nautilus piece and a TED Talk.) If they can be conscious, they aren’t mere products that can cause harm; they are sentient beings that can be harmed.”

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“This is important, obviously, but I’m not sure how you verify that you can trust the program when a machine is superintelligent. It can constantly rewrite its own code, and we may not even be able to understand the program to begin with. It is already difficult to fully understand what deep learning programs are doing, and these aren’t even close to AGI.

“So ‘value alignment’ – the idea of how to align machine values with our values, basically – it’s horrendously tricky. I think the market forces are going to keep pushing us more and more in the direction of greater intelligence for machines, and as you do that, I think it becomes more difficult to control them.

“I’ve already talked about superintelligence; there’s general agreement that value alignment is a major problem. So consider a different case, the example of the Japanese androids that are being developed for elder care right now. Right now, they’re not smart; right now, the emphasis is on physical appearance and motor skills. But imagine when one of these androids is actually engaged in elder care and trying to do ordinary tasks: there will be a need for the android to be highly intelligent. It can’t just have expertise in one domain, like Go or chess; it has to multitask and exhibit cognitive flexibility. (As my dissertation advisor, Jerry Fodor, used to say, it has to make breakfast without burning the house down.) After all, we do not want elderly people facing accidents! That raises the demand for household assistants that are AGIs. And once you get to the level of artificial general intelligence, it’s harder to control the machines. We can’t even make sure fellow humans (other kinds of AGIs, haha) have the right goals; why should we think AGI will have values that align with ours, let alone that a superintelligence would? We are lucky when our teenagers can be controlled!”

ARIEL: “Do you worry at all about designs pre-AGI? Or are you mostly concerned that things will get worse once we hit human-level and then beyond?”

SUSAN: “No, AI introduces all sorts of safety issues. I mean, people worry about drones that aren’t very smart, but could do a lot of damage, for example. So there are all sorts of worries. My work concerns AGI and superintelligence, so I tend to focus more on that.”

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with the obligation and responsibility to shape those implications.

“I don’t have objections to these principles. The problem is going to be getting people to internalize them. It’s important to have principles put forward! The difficult task is to make sure people aren’t focused on their own self-interest and that they anticipate the social impact of their research.”

ARIEL: “Maybe the question here is, how do you see them being implemented? Or what problems do you see us having if we try to implement them?”

SUSAN: “Well, I guess the problem I would have is seeing how they would be implemented. So the challenge for the Institute is to figure out ways to make the case so that people follow them. And not just in one community, not just in Silicon Valley or in North America, but everywhere. Even in isolated countries that get AI technology and could be uncooperative with the United States, for example an authoritarian dictatorship. I just haven’t the slightest idea about how you would go about implementing these. I mean, a global ban on AI or even AGI technology is not practical at this point.”

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems to accomplish human-chosen objectives.

“Right, that’s very, very tricky. Even now it’s sometimes difficult to understand why a deep learning system made the decisions that it did. And obviously, getting the machine to communicate clearly to us is an ongoing research project, and there will be exciting developments in that. But again, my focus is on AGI and superintelligence. If we delegate decisions to a system that’s vastly smarter than us, I don’t know how we’ll be able to trust it, since traditional methods of verification seem to break down. One idea would be that you could have an enhanced human that would interact with the machine. (Hopefully you can trust the enhanced human! The hope is that even if he/she is post-biological, the person still has a partly biological brain, could even unplug the enhancements, and they identify with us.)”

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than for one state or organization.

“The obvious problem here is going to be what ‘widely shared ethical ideals’ are. And how you would even specify that in the machine’s architecture – that’s a big issue right now. That was a topic treated very nicely in Nick Bostrom’s book, Superintelligence. And there’s no easy answer. The field of ethics is full of controversies, and that doesn’t even get to the larger problem of how you would encode ethics in a machine and make sure that the machine continues with those ideals in mind.”

ARIEL: “If we’re looking back at history, do you feel that humanity as a whole is moving towards more cohesive ideals, or do you think it’s just as fractured as ever?”

SUSAN: “Our world is still a fractured one, but to speak to the case of AI, what you want are values that people agree upon, and if you add too much content you’re never going to get shared ideals. You start getting into values that differ from culture to culture and religion to religion… To draw from Asimov’s three laws, we might use a ‘do not harm humans’ principle. But there may be contexts in which harming is justifiable or inevitable. And this is when you get into the issues about which particular ethical system is justified. So according to a utilitarian principle, for example, it would be okay to sacrifice one individual for the greater good. But that wouldn’t be okay according to a Kantian approach, at least without the person’s consent.

“So it’s going to be very tricky. I mean, if you consider all the business interests at work, I just wonder how you would even make sure that the different businesses developing a certain kind of AI would code in the very same ideals. (Or perhaps the same ideals do not need to be coded, but the consumer needs to be aware of what ideals the system is supposed to follow.) It would be nice to see some safety regulations here. But then, again, the problem becomes renegade countries and renegade groups that don’t follow them.”

ARIEL: “Yeah, war is a perfect example of when you don’t want to apply the ‘robots shouldn’t hurt a human’ rule.”

SUSAN: “Exactly. The trick there is how you can get programming that you can trust, programming that is going to give us the right results – a machine that’s safe.

“I find it very difficult to see how we could ensure superintelligence safety without benefitting from human minds that have been enhanced, so that we’re working with a group of ultra-intelligent people who can help us better foresee how a superintelligence could behave. Just as no one person understands all of mathematics, so too, collective efforts of enhanced and unenhanced individuals may help us get a grip on a single superintelligent mind. As I’ve urged in a recent paper, if we develop ways to understand superintelligence we can better control it. It will probably be easier to understand superintelligences that are based on principles that characterize the human brain (combinatorial representations, multi-layered neural networks, etc.). We can draw from work in cognitive science, or so I’ve urged.”

ARIEL: “Wouldn’t we have similar issues with a highly intelligent augmented person?”

SUSAN: “We might. I mean, I think if you take a human brain that is a relatively known quantity, and then you begin to augment it in certain ways, you may be better able to trust that individual. Because so much of the architecture is common to us, and the person was a human who cares about how humans fare, and so on. But you never know. So this gets into the domain of science fiction.

“And, you know, the thing that I’ve been working on is machine consciousness. One thing that’s not discussed in these principles about safety and about ethics is what we want to do about developing machines that might have an inner world or feel. There’s a big question there about whether a machine could feel, be conscious. Suppose it was an AGI: would it inevitably feel like something to be it?

“A lot of cognitive scientists think that experience itself is just computation, so if that’s the case, then one element of safety is that the machine may have inner experience. And if they recognize in us the capacity to feel, they may respect us more. On the other hand, if the machine is conscious, maybe it will be less predictable. So these are issues we have to bring into the whole AI safety debate as well, in addition to work on human intelligence enhancement as a safety strategy.”

ARIEL: “Overall, what were your thoughts about the principles? And you were talking about consciousness… Are there other principles you’d like to see added that aren’t there?”

SUSAN: “Well, I think developing an understanding of machine consciousness should be a goal. If the machine is conscious, it could facilitate empathy, or it could make the machine less predictable. So there are two issues here with AI consciousness that are certainly important. The first issue is, you can’t market a product if it’s conscious, because that could be tantamount to slavery. And the other issue is, if these AGIs or superintelligences are in a condition where they have weapons or are capable of hurting others, and we want them to have our goals, then we need to figure out if they’re conscious or not, because it could play out one of two ways. Consciousness could make something more compassionate towards other conscious beings, as is the case with non-human animals: certain humans choose not to eat non-human animals because they feel that they’re conscious. Or the presence of conscious experience could make the AI less predictable. So we need to figure this out. I really enjoy the company and discussions of the AI leaders, but this really has a lot to do with philosophical training as well. So there should be a dialogue on this issue and more understanding.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.org on April 19, 2017.

