
Preparing for the Biggest Change in Human History

Published: February 24, 2017
Author: Ariel Conn

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

In the history of human progress, a few events have stood out as especially revolutionary: the intentional use of fire, the invention of agriculture, the industrial revolution, possibly the invention of computers and the Internet. But many anticipate that the creation of advanced artificial intelligence will tower over these achievements.

In a popular post, Tim Urban of Wait But Why wrote that artificial intelligence is “by far THE most important topic for our future.”

Or, as AI professor Roman Yampolskiy told me, “Design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”

The Importance Principle encourages us to plan for what could be the greatest “change in the history of life.” But just what are we preparing for? What will more advanced AI mean for society? I turned to some of the top experts in the field of AI to consider these questions.

Societal Benefits?

Guruduth Banavar, the Vice President of IBM Research, is hopeful that as AI advances, it will help humanity advance as well. In favor of the principle, he said, “I strongly believe this. I think this goes back to evolution. From the evolutionary point of view, humans have reached their current level of power and control over the world because of intelligence. … AI is augmented intelligence – it’s a combination of humans and AI working together. And this will produce a more productive and realistic future than autonomous AI, which is too far out. In the foreseeable future, augmented AI – AI working with people – will transform life on the planet. It will help us solve the big problems like those related to the environment, health, and education.”

“I think I also agreed with that one,” said Bart Selman, a professor at Cornell University. “Maybe not every person on Earth should be concerned about it, but there should be, among scientists, a discussion about these issues and a plan – can you build safety guidelines that fit with value alignment work? What can you actually do to make sure that the developments are beneficial in the end?”

Anca Dragan, an assistant professor at UC Berkeley, explained, “Ultimately, we work on AI because we believe it can have a strong positive impact on the world. But the more capable the technology becomes, the easier it becomes to misuse it – or perhaps, the effects of misusing it become more drastic. That is why it is so important, as we make progress, to start thinking more strongly about what role AI will play.”

Short-term Concerns

Though the Importance Principle specifically mentions advanced AI, some of the researchers I interviewed pointed out that nearer-term artificial intelligence could also drastically impact humanity.

“I believe that AI will create profound change even before it is ‘advanced’ and thus we need to plan and manage growth of the technology,” explained Kay Firth-Butterfield, Executive Director of AI-Austin.org. “As humans, we are not good at long-term planning because our civil systems don’t encourage it, however, this is an area in which we must develop our abilities to ensure a responsible and beneficial partnership between man and machine.”

Stefano Ermon, an assistant professor at Stanford University, also considered the impacts of less advanced AI, saying, “It’s an incredibly powerful technology. I think it’s even hard to imagine what one could do if we are able to develop a strong AI, but even before that, well before that, the capabilities are really huge. We’ve seen the kind of computers and information technologies we have today, the way they’ve revolutionized our society, our economy, our everyday lives. And my guess is that AI technologies would have the potential to be even more impactful and even more revolutionary on our lives. And so I think it’s going to be a big change and it’s worth thinking very carefully about, although it’s hard to plan for it.”

In a follow-up question about planning for AI over the shorter term, Selman added, “I think the effect will be quite dramatic. This is another interesting point – sometimes AI scientists say, well, it might not be advanced AI that will do us in, but dumb AI. … The example is always that the self-driving car has no idea it’s driving you anywhere. It doesn’t even know what driving is. … If you look at videos of an accident that’s about to happen, people are so surprised that the car doesn’t hit the brakes at all, and that’s because the car works quite differently than humans do. So I think there is some short-term risk in that … we actually think they’re smarter than they are. And I think that will actually go away when the machines become smarter, but for now…”

Learning From Experience

As revolutionary as advanced AI might be, we can still learn from previous technological revolutions and draw on their lessons to prepare for the changes ahead.

Toby Walsh, a guest professor at the Technical University of Berlin, expressed a common criticism of the Principles, arguing that the Importance Principle could – and probably should – apply to many “groundbreaking technologies.”

He explained, “This is one of those principles where I think you could put any society-changing technology in place of advanced AI. … It would be true of the steam engine, in some sense it’s true of social media and we’ve failed at that one, it could be true of the Internet but we failed at planning that well. It could be true of fire too, but we failed on that one as well and used it for war. But to get back to the observation that some of them are things that are not particular to AI – once you realize that AI is going to be groundbreaking, then all of the things that should apply to any groundbreaking technology should apply.”

By looking back at these previous revolutionary technologies and understanding their impacts, perhaps we can gain insight into how we can plan ahead for advanced AI.

Dragan was also interested in more explicit solutions to the problem of planning ahead.

“As the AI capabilities advance,” she told me, “we have to take a step back and ask ourselves: are we solving the right problem? Is there a better problem definition that will more likely result in benefits to humanity?

“For instance, we have always defined AI agents as rational. That means they maximize expected utility. Thus far, utility is assumed to be known. But if you think about it, there is no gospel specifying utility. We are assuming that some *person* somewhere will know exactly what utility to specify for their agent. Well, it turns out, we don’t work like that: it is really hard for people, including AI experts, to specify utility functions. We try our best, but when the system goes ahead and optimizes for what we inputted, the result is sometimes surprising, and not in a good way. This suggests that our definition of an AI agent is predicated on a wrong assumption. We’ve already started seeing that in robotics – the definition of how a robot should move didn’t account for people, the definition of how a robot should learn from demonstration assumed that people can provide perfect demonstrations to a robot, etc. – I assume we are going to see this more and more in AI as a whole. We have to stop making implicit assumptions about people and end-users of AI, and rigorously tackle that head-on, putting people into the equation.”
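
Dragan’s observation can be made concrete with a toy example. The Python sketch below is purely illustrative (the delivery-robot scenario, plan names, features, and weights are all invented, not drawn from any real system): a “rational” agent faithfully maximizes the utility function its designer wrote down, and in doing so picks a plan the designer never wanted, because an unstated preference was left out of the specification.

```python
# Illustrative sketch of utility misspecification (all values hypothetical).
# A "rational" agent maximizes the utility it is *given*, not the one we *meant*.

# Candidate plans for a delivery robot, described by simple features.
plans = {
    "drive_on_road":       {"speed": 0.6, "crosses_lawn": 0},
    "cut_across_lawn":     {"speed": 0.9, "crosses_lawn": 1},
    "wait_for_clear_path": {"speed": 0.2, "crosses_lawn": 0},
}

def specified_utility(features):
    """The utility the designer wrote down: reward speed only."""
    return features["speed"]

def intended_utility(features):
    """The utility the designer meant: speed matters, but so does
    staying off the lawn, a preference that never made it into the spec."""
    return features["speed"] - 2.0 * features["crosses_lawn"]

# The agent optimizes exactly what it was told to optimize.
agent_choice = max(plans, key=lambda p: specified_utility(plans[p]))
human_choice = max(plans, key=lambda p: intended_utility(plans[p]))

print("Agent picks:", agent_choice)  # cut_across_lawn
print("We wanted:  ", human_choice)  # drive_on_road
```

The agent in this sketch is not malfunctioning; it is optimizing exactly what was input. The surprising result comes from the gap Dragan describes between the utility we specified and the one we actually meant.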

What Do You Think?

What kind of impact will advanced AI have on the development of human progress? How can we prepare for such potentially tremendous changes? Can we prepare? What other questions do we, as a society, need to ask?

This article is part of a weekly series on the 23 Asilomar AI Principles. The Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the weekly discussions about previous principles here.
