
Can AI Remain Safe as Companies Race to Develop It?

Published: August 4, 2017
Author: Ariel Conn


Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Artificial intelligence could bestow incredible benefits on society, from faster, more accurate medical diagnoses to more sustainable management of energy resources, and much more. But in today’s economy, those first to achieve a technological breakthrough are the winners, and the teams that develop AI technologies first will reap the rewards in money, prestige, and market power. With the stakes so high, AI builders have plenty of incentive to race to be first.

When an organization is racing to be the first to develop a product, adherence to safety standards can grow lax. So it’s increasingly important for researchers and developers to remember that, as great as AI could be, it also comes with risks, from unintended bias and discrimination to potential accidental catastrophe. These risks will be exacerbated if teams racing to deliver a product or feature first don’t take the time to properly vet and assess every aspect of their programs and designs.

Yet, though the risk of an AI race is tremendous, companies can’t survive if they don’t compete.

As Elon Musk said recently, “You have companies that are racing – they kind of have to race – to build AI or they’re going to be made uncompetitive. If your competitor is racing toward AI and you don’t, they will crush you.”


Is Cooperation Possible?

With signs that an AI race may already be underway, some are worried that cooperation will be hard to achieve.

“It’s quite hard to cooperate,” said AI professor Susan Craw, “especially if you’re trying to race for the product, and I think it’s going to be quite difficult to police that, except, I suppose, by people accepting the principle. For me safety standards are paramount and so active cooperation to avoid corner cutting in this area is even more important. But that will really depend on who’s in this space with you.”

Susan Schneider, a philosopher focusing on advanced AI, added, “Cooperation is very important. The problem is going to be countries or corporations that have a stake in secrecy. … If superintelligent AI is the result of this race, it could pose an existential risk to humanity.”

However, just because something is difficult, that doesn’t mean it’s impossible, and AI philosopher Patrick Lin may offer a glimmer of hope.

“I would lump race avoidance into the research culture. … Competition is good, and an arms race is bad, but how do you get people to cooperate to avoid an arms race? Well, you’ve got to develop the culture first,” Lin suggested, referring to a comment he made in our previous piece on the Research Culture Principle. There, Lin argued that the AI community lacks cohesion because researchers come from so many different fields.

Developing a cohesive culture is no simple task, but it’s not an insurmountable challenge.


Who Matters Most?

Perhaps an important step toward developing an environment that encourages “cooperative competition” is understanding why an organization or a team might risk cutting corners on safety. This is precisely what Harvard psychologist Joshua Greene did as he considered the Principle.

“Cutting corners on safety is essentially saying, ‘My private good takes precedence over the public good,’” Greene said. “Cutting corners on safety is really just an act of selfishness. The only reason to race forward at the expense of safety is if you think that the benefits of racing disproportionately go to you. It’s increasing the probability that people in general will be harmed, a common bad, if you like, in order to raise the probability of a private good.”


A Profitable Benefit of Safety

John Havens, Executive Director with the IEEE, says he “couldn’t agree more” with the Principle. He wants to use this as an opportunity to “re-invent” what we mean by safety and how we approach safety standards.

Havens explained, “We have to help people re-imagine what safety standards mean. … By going over safety, you’re now asking: What is my AI system? How will it interact with end users or stakeholders in the supply chain touching it and coming into contact with it, where there are humans involved, where it’s system to human vs. system to system?

“Safety is really about asking about people’s values. It’s not just physical safety, it’s also: What about their personal data, what about how they’re going to interact with this? So the reason you don’t want to cut corners is you’re also cutting innovation. You’re cutting the chance to provide a better product or service.”

But for companies that take these standards seriously, he added, “You’re going to discover all these wonderful ways to build more trust with what you’re doing when you take the time you need to go over those standards.”


What Do You Think?

With organizations like the Partnership on AI, we’re already starting to see signs that companies recognize and want to address the dangers of an AI race. But for now, the Partnership is composed mainly of Western organizations, while companies in many countries, especially China, are vying to catch up to — and perhaps “beat” — companies in the U.S. and Europe. How can we encourage organizations and research teams worldwide to cooperate and develop safety standards together? How can we help teams monitor their work and ensure proper safety procedures are always in place? AI research teams will need the feedback and insight of other teams to ensure that they don’t overlook potential risks, but how will this collaboration work without forcing companies to reveal trade secrets? What do you think of the Race Avoidance Principle?

This article is part of a series on the 23 Asilomar AI Principles. The Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the discussions about previous principles here.

This content was first published at futureoflife.org on August 4, 2017.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
