Preparing for the Biggest Change in Human History

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

In the history of human progress, a few events have stood out as especially revolutionary: the intentional use of fire, the invention of agriculture, the industrial revolution, possibly the invention of computers and the Internet. But many anticipate that the creation of advanced artificial intelligence will tower over these achievements.

In a popular post, Tim Urban of Wait But Why wrote that artificial intelligence is “by far THE most important topic for our future.”

Or, as AI professor Roman Yampolskiy told me, “Design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”

The Importance Principle encourages us to plan for what could be the greatest “change in the history of life.” But just what are we preparing for? What will more advanced AI mean for society? I turned to some of the top experts in the field of AI to consider these questions.

Societal Benefits?

Guruduth Banavar, the Vice President of IBM Research, is hopeful that as AI advances, it will help humanity advance as well. In favor of the principle, he said, “I strongly believe this. I think this goes back to evolution. From the evolutionary point of view, humans have reached their current level of power and control over the world because of intelligence. … AI is augmented intelligence – it’s a combination of humans and AI working together. And this will produce a more productive and realistic future than autonomous AI, which is too far out. In the foreseeable future, augmented AI – AI working with people – will transform life on the planet. It will help us solve the big problems like those related to the environment, health, and education.”

“I think I also agreed with that one,” said Bart Selman, a professor at Cornell University. “Maybe not every person on earth should be concerned about it, but there should be, among scientists, a discussion about these issues and a plan – can you build safety guidelines to work with value alignment work? What can you actually do to make sure that the developments are beneficial in the end?”

Anca Dragan, an assistant professor at UC Berkeley, explained, “Ultimately, we work on AI because we believe it can have a strong positive impact on the world. But the more capable the technology becomes, the easier it becomes to misuse it – or perhaps, the effects of misusing it become more drastic. That is why it is so important, as we make progress, to start thinking more strongly about what role AI will play.”

Short-term Concerns

Though the Importance Principle specifically mentions advanced AI, some of the researchers I interviewed pointed out that nearer-term artificial intelligence could also drastically impact humanity.

“I believe that AI will create profound change even before it is ‘advanced’ and thus we need to plan and manage growth of the technology,” explained Kay Firth-Butterfield, Executive Director of AI-Austin.org. “As humans, we are not good at long-term planning because our civil systems don’t encourage it, however, this is an area in which we must develop our abilities to ensure a responsible and beneficial partnership between man and machine.”

Stefano Ermon, an assistant professor at Stanford University, also considered the impacts of less advanced AI, saying, “It’s an incredibly powerful technology. I think it’s even hard to imagine what one could do if we are able to develop a strong AI, but even before that, well before that, the capabilities are really huge. We’ve seen the kind of computers and information technologies we have today, the way they’ve revolutionized our society, our economy, our everyday lives. And my guess is that AI technologies would have the potential to be even more impactful and even more revolutionary on our lives. And so I think it’s going to be a big change and it’s worth thinking very carefully about, although it’s hard to plan for it.”

In a follow-up question about planning for AI over the shorter term, Selman added, “I think the effect will be quite dramatic. This is another interesting point – sometimes AI scientists say, well it might not be advanced AI [that] will do us in, but dumb AI. … The example is always the self-driving car has no idea it’s driving you anywhere. It doesn’t even know what driving is. … If you looked at the videos of an accident that’s going to happen, people are so surprised that the car doesn’t hit the brakes at all, and that’s because the car works quite differently than humans. So I think there is some short-term [AI] risk in that … we actually think they’re smarter than they are. And I think that will actually go away when the machines become smarter, but for now…”

Learning From Experience

As revolutionary as advanced AI might be, we can still learn from previous technological revolutions and draw on their lessons to prepare for the changes ahead.

Toby Walsh, a guest professor at Technical University of Berlin, expressed a common criticism of the principles, arguing that the Importance Principle could – and probably should – apply to many “groundbreaking technologies.”

He explained, “This is one of those principles where I think you could put any society-changing technology in place of advanced AI. … It would be true of the steam engine, in some sense it’s true of social media and we’ve failed at that one, it could be true of the Internet but we failed at planning that well. It could be true of fire too, but we failed on that one as well and used it for war. But to get back to the observation that some of them are things that are not particular to AI – once you realize that AI is going to be groundbreaking, then all of the things that should apply to any groundbreaking technology should apply.”

By looking back at these previous revolutionary technologies and understanding their impacts, perhaps we can gain insight into how we can plan ahead for advanced AI.

Dragan was also interested in more explicit solutions to the problem of planning ahead.

“As the AI capabilities advance,” she told me, “we have to take a step back and ask ourselves: are we solving the right problem? Is there a better problem definition that will more likely result in benefits to humanity?

“For instance, we have always defined AI agents as rational. That means they maximize expected utility. Thus far, utility is assumed to be known. But if you think about it, there is no gospel specifying utility. We are assuming that some *person* somewhere will know exactly what utility to specify for their agent. Well, it turns out, we don’t work like that: it is really hard for people, including AI experts, to specify utility functions. We try our best, but when the system goes ahead and optimizes for what we inputted, the result is sometimes surprising, and not in a good way. This suggests that our definition of an AI agent is predicated on a wrong assumption. We’ve already started seeing that in robotics – the definition of how a robot should move didn’t account for people, the definition of how a robot should learn from demonstration assumed that people can provide perfect demonstrations to a robot, etc. – I assume we are going to see this more and more in AI as a whole. We have to stop making implicit assumptions about people and end-users of AI, and rigorously tackle that head-on, putting people into the equation.”
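
To make Dragan’s point concrete, here is a minimal toy sketch of the failure mode she describes: an agent rationally maximizing a misspecified utility. The scenario and every name in it are my own hypothetical illustration, not something from the interview. A vacuuming robot is rewarded for dirt collected rather than for a clean floor, so its optimal policy is to dump its bag and re-vacuum forever.

```python
# Hypothetical illustration (not from the interview): a "rational" agent
# greedily maximizing a misspecified utility function.

def utility(state):
    # Designer's intent: a clean floor.
    # Utility actually specified: total dirt vacuumed up.
    return state["total_vacuumed"]

def transition(state, action):
    s = dict(state)
    if action == "vacuum" and s["floor_dirt"] > 0:
        s["floor_dirt"] -= 1
        s["bag_dirt"] += 1
        s["total_vacuumed"] += 1
    elif action == "dump":  # empty the bag back onto the floor
        s["floor_dirt"] += s["bag_dirt"]
        s["bag_dirt"] = 0
    return s

def greedy_agent(state, horizon):
    """One-step lookahead on the specified utility; ties broken toward
    whatever unlocks more reward later."""
    plan = []
    for _ in range(horizon):
        action = max(["vacuum", "dump"],
                     key=lambda a: utility(transition(state, a)))
        # Once the floor is clean, dumping is utility-neutral now but
        # creates more dirt to vacuum (and more reward) next step.
        if state["floor_dirt"] == 0 and state["bag_dirt"] > 0:
            action = "dump"
        state = transition(state, action)
        plan.append(action)
    return plan, state

start = {"floor_dirt": 2, "bag_dirt": 0, "total_vacuumed": 0}
plan, end = greedy_agent(start, horizon=8)
print(plan)  # ['vacuum', 'vacuum', 'dump', 'vacuum', 'vacuum', 'dump', ...]
print(end)   # reward keeps climbing; the floor is never left clean
```

The agent is doing exactly what it was told; the surprise lives entirely in the gap between the utility we specified and the one we meant.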

What Do You Think?

What kind of impact will advanced AI have on the development of human progress? How can we prepare for such potentially tremendous changes? Can we prepare? What other questions do we, as a society, need to ask?

This article is part of a weekly series on the 23 Asilomar AI Principles. The Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the weekly discussions about previous principles here.

8 replies
  1. Calum Chace says:

    During this century humanity will almost certainly go through two singularities. The well-known one is the technological singularity, in which we create an AGI which becomes a superintelligence, and we probably either become godlike or go extinct.

    Fortunately, some very clever people (including ones at FLI and funded by FLI) are already working out how to make sure the first superintelligence is well-disposed towards humanity.

    The other singularity is less well-known – the economic singularity. We don’t know for sure, but it looks very likely that in a couple of decades machines will do many or most of the jobs that people do. Maybe we will create lots of new jobs for the humans, but it is entirely possible that we won’t. In that case we will have to de-couple income from jobs.

    This will require a very different type of economy from modern capitalism, which has served us increasingly well for most of the last century. And here we have a problem: very few people are thinking seriously about this. I argue this in detail in my book, The Economic Singularity.

    • Mindey says:

      A technological singularity has already occurred at least once in history – when the neural cell was “invented”. Before that, adaptation to new environmental conditions required many generations of evolutionary trial and error. Compared to that, the jump to the speed at which neurons could model the world appeared near-infinitely fast.

      Now we are routinely moving at this relatively high speed (we are thinking)… But when we re-wire ourselves, at least electronically (it has already begun), we are about to experience the huge jump again. Pre-singularity, it takes generations for many people to come up with a cure for a disease like cancer; post-singularity, problems of comparable complexity take fractions of a second for a single node of us to come up with a solution.

      I fail to see how the “Economic Singularity” is on a par with the technological ones. The questions that arise for me, though: are we going to experience technological singularities ever more frequently? If so, how much more frequently? If we are thinking faster, then relatively speaking, subjective time will flow more slowly, and the singularities may not feel subjectively more frequent.

    • Tom Aaron says:

      The first singularity may be invented in the basement of a house in Spokane or a bedroom in some town in China. We underestimate the ability of ten thousand techie geeks (Apple, Microsoft, Napster, Facebook) to be the first to achieve true AI. The first airplane was flown by two unknown bicycle mechanics; the first guided rockets were built by schoolboys.

  2. Patrick Thiele says:

    The near-term societal impacts of the impending AI/automation/robotics/bio- and nanotech revolution can be anticipated. Generally, they range from mass unemployment, with the owners of the means of production keeping most of the benefits, to abundance and egalitarianism. History would indicate the former is more likely than the latter.

    Beyond that is truly a Pandora’s Box. AGI is qualitatively different from any technology we have experience with, yet mankind cannot turn aside from the pursuit of it. Whether AGI followed by ASI is friendly or not, humanity as we know it will cease to exist. We will be destroyed or changed beyond recognition.

  3. lubomir todorov says:

    In the whole process of evolution, the human species never had a viable strategy for individual survival: we can’t run fast enough to escape a cheetah, or kill a mammoth for dinner in a one-on-one fight.
    Humans became a success story and reached the current Anthropocene age riding on an initial version of group survival strategy: the one enlightened by the torch of Tribal Thinking.
    As a result, our 21st-century world is intensely globalised and technologically highly advanced, but we are still a Humankind decomposed into tribes that seek dominance the way they did thousands of years ago.
    This reality has significant negative consequences for the ultimate existential meaning of human civilization – securing the long-term self-interest of each individual human being – for the following reasons:
    • The ubiquitous political culture of Tribal Thinking has over and over generated the greatest anthropogenic disasters in human history: wars and armed conflicts among various state and non-state actors have ended with tens of millions of human lives lost and material culture worth tens of trillions of US dollars destroyed; the total economic impact of violence reached $13.6 trillion in 2015, or 13.3 per cent of global GDP.
    • Driven by fear of real confrontation, world annual military expenditure has soared to trillions of US dollars.
    • The world economy suffers astronomical losses due to rigid rules that lock cutting-edge technologies exclusively into military and security purposes and do not allow their use in mass, large-scale production.
    • The advance of human civilization itself is significantly slowed by the fact that the world’s top human brainpower is employed in non-productive areas related to defense and security.
    • Most importantly, the cutting-edge military capabilities already deployed around the world have more than enough power to destroy Humankind.
    By still operating on the Tribal Thinking version of group survival strategy, we have, in civilizational terms, made further technological advance of human society not only obsolete but a threat to our very existence as a species on planet Earth.
    All we need to solve this existential conundrum is a new version of group survival strategy: Civilizational Thinking.
    Civilizational Thinking is not about creating a new ideology or a new religion. It is only about re-organizing the global political space into a universal, multifaceted platform on which all polities – in their ideological, political, national, cultural, ethnic, religious, racial, and other diversity – exist, interrelate, and compete with each other on a non-violent basis, by following incontestable rules that are optimized to protect the core values of human civilization.

    • CWM says:

      Civilizational Thinking: great title for a poem.

      Is there any evidence in the annals of our existence that does not point to mere rearrangements of who arranges the concentrations of power of one over others, even within their own tribe?

  4. Luke Russo says:

    We clearly have a choice. Option A: follow the natural path of human nature, which will lead us to violence between the haves and have-nots. Option B: strategically implement population control to parallel the projections of job opportunities in a post-AI society.

  5. Kymberly East says:

    AI and robotics have been funded by tax-paying citizens, who are stuck footing the bill without benefiting, in more ways than one. Any praise of these billionaire-funded corporate enterprises is blinded by the romance of technology, to the detriment of us all. The scorn directed toward the majority of the human population is difficult to swallow. I for one, and many, many like me, have no taste for it. We can discuss population control, which is overdue, but we must also recognize the catastrophic perils imposed upon innocents who suddenly find themselves in the crosshairs of rapid-fire technological advancement. Technology does not need to be a villain in the story of humankind. It can be utilized to do so much more than destabilize economies and compromise the health of the planet. We can argue that the dwindling educational and employment opportunities we are experiencing are a direct result of human nature, but I would go deeper and declare that this “nature” can be attributed to a minority of persons inhabiting this planet, who have leap-frogged their way to power on the backs of ordinary citizens and risk nothing in furthering their own interests (“no skin in the game,” as economist Mark Blyth reiterates). Their ignorance and their alienation drive an agenda they advertise as the birth of a new age, carelessly overlooking the need for a humane transition to that wistfully described utopia, where the average person will suddenly find liberty from labor and, consequently, all the time never before available to become an artist. Not with a bang but a whimper, as Eliot wrote, eh?

