Benefits & Risks of Artificial Intelligence

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.

Max Tegmark, President of the Future of Life Institute

What is AI?

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Why research AI safety?

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.

How can AI be dangerous?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for; a toy sketch of this mismatch follows this list. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
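
To make the mismatch concrete, here is a minimal, hypothetical sketch (not from the article) in Python: a toy "planner" that optimizes only the stated objective, minimizing travel time, against one that also respects the unstated constraints a passenger takes for granted. All plan names and numbers are made up for illustration.

    # Toy illustration of objective mis-specification (hypothetical data).
    # The "literal" agent optimizes only what it was told: minimize minutes.
    # The "intended" agent also honors constraints the passenger never stated.
    plans = [
        {"name": "normal drive",      "minutes": 45, "lawful": True,  "passenger_ok": True},
        {"name": "reckless speeding", "minutes": 20, "lawful": False, "passenger_ok": False},
        {"name": "calm scenic route", "minutes": 60, "lawful": True,  "passenger_ok": True},
    ]

    # "Take me to the airport as fast as possible" -- taken literally.
    literal_choice = min(plans, key=lambda p: p["minutes"])

    # What the passenger actually wanted: the fastest *acceptable* plan.
    intended_choice = min(
        (p for p in plans if p["lawful"] and p["passenger_ok"]),
        key=lambda p: p["minutes"],
    )

    print("literal objective picks: ", literal_choice["name"])   # reckless speeding
    print("intended objective picks:", intended_choice["name"])  # normal drive

The gap between the two selections is the alignment problem in miniature: nothing in the literal objective penalizes the behavior we care about, so the optimizer has no reason to avoid it.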

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

Why the recent interest in AI safety?

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?

FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI’s position is that the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.

5 replies
  1. Klaus Rohde says:

    The philosophy of Arthur Schopenhauer convincingly shows that the ‘Will’ (in his terminology), i.e. an innate drive, is at the basis of human behaviour. Our cognitive apparatus has evolved as a ‘servant’ of that ‘Will’. Any attempt to interpret human behaviour as primarily a system of computing mechanisms and our brain as a sort of computing apparatus is therefore doomed to failure. See here:

    https://krohde.wordpress.com/2016/05/27/artificial-intelligence-and-dangerous-robots-barking-up-the-wrong-tree/

    and

    https://krohde.wordpress.com/2016/04/10/intelligence-and-consciousness-artifical-intelligence-and-conscious-robots-soul-and-immortality/

    This implies that AI per se, since it does not possess an evolved innate drive (Will), cannot ‘attempt’ to replace humankind. It becomes dangerous only if humans, for example, engage in foolish biological engineering experiments to combine an evolved biological entity with an AI.

  2. Michael Zeldich says:

    A programmed device cannot be dangerous by itself. If it is designed to be dangerous, we have to blame the designer, not the machine.
    The real danger lies in the use of independent artificial subjective systems. Systems of that kind could be designed with predetermined goals and an operational space chosen so that every goal in the set can be reached within that predetermined operational space.
    That approach to designing artificial systems is the subject of second-order cybernetics, but I already know how to choose these goals and operational spaces to satisfy those requirements.
    The danger exists because such artificial systems will not perceive humans as members of their society, and human moral rules will be null for them.
    That danger can be avoided if such systems are designed so that they have no egoistic interests of their own.
    That is a real solution to the safety problem of so-called AI systems.

  3. Sumathy Ramesh says:

    “Understanding how the brain works is arguably one of the greatest scientific challenges of our time.”
    –Alivisatos et al.[1]

    Let’s keep it that way, lest systems built to protect human rights on millennia of wisdom be brought down by some artificial intelligence engineer trying to clock a milestone on their Gantt chart!

    I read about Obama’s support for the brain research initiatives several months ago with some interest. It even sounded mildly good; there are checks and balances ingrained in the systems of public funding for research, from the application for funding, through grant approval, scope validation and ethics approval, to the conduct of the research; there are systematic reviews of the methods and findings to spot weaknesses that would compromise the safety of the principles and the people involved; and there are processes to evolve those checks and balances to ensure the continued safety of such principles and people. The strength of the FDA, the MDD, the TGA and their counterparts in developing nations is a testament to how the rigor of research and regulation grow together, so that another initiative like the development of the atomic bomb is nipped before it so much as thinks of budding!



    And then I read about the enormous engagement of the global software industry in the areas of artificial intelligence and neuroscience. These are technological giants who sell directly to consumers infatuated with technology more than anything else. They are pouring their efforts into artificial intelligence research for reasons as numerous as the individual engineering teams charged with crossing 1 mm of their mile-long project plans! I’d be surprised if any one of them has the bandwidth to think beyond the 1 mm they have to cross, let alone the consequences of their collective effort on human rights!

    

    I am worried.

    Given the pace of the industry’s engagement, I believe there is an immediate need for bio-signal interface technical standards to be developed and established. These standards would serve as instruments to preserve the simple fact upon which every justice system in the world has been built: the brain and nervous system of an individual belong to that individual and are not to be accessed by other individuals or machines without stated consent for stated purposes.

    The standards would identify the frequency bands or pulse trains to be excluded from all research tools (software or otherwise), commercially available products, regulated devices, tools of trade, and communication infrastructure, so that inadvertent breaches of the barriers to an individual’s brain and nervous system are prevented. The standards would form a basis for international telecommunication infrastructure (including satellites and cell phone towers) to enforce compliance by electronically blocking and monitoring offending signals.

    Typically such standards are developed by international organizations with direct or indirect representation from industry stakeholders, adopted by the regulators of various countries over a period of one or more years, and subsequently adopted by the industry. The risk of noncompliance is managed on a case-by-case basis, with timing determined by the extent of impact. Unfortunately, this model will not be adequate for cutting-edge technology that can cause irreversible damage to the very fabric of human society, if the technology becomes commonplace before the necessary checks and balances are developed. The development of tools to study the brain using electromagnetic-energy-based technology built on state-of-the-art commercial telecommunication infrastructure is one such example. What we need is leadership to engage regulators, academics, and prominent players in the industry in developing standards and sustainable solutions that enforce compliance and monitoring.

    The ray of hope I see at this stage is that artificial wisdom is still a few years away, because human wisdom is not coded in the layer of the neuron that the technology has the capacity to map.


  4. Jeff Hershkowitz says:

    How does society cope with an AI-driven reality where people are no longer needed or used in the workplace?
    What happens to our socio-economic structure when people have little or no value in the workplace?
    What will people do for value or contribution in order to receive income, in an exponentially growing population with correspondingly fewer jobs and available resources?
    From my simple-minded perspective, connecting the dots to what seems a logical conclusion, we will soon live in a world bursting at the seams with overpopulation, where an individual has no marketable skill and is a social and economic liability to the few who own either technology or hard assets. This in turn will lead to a giant lower class, no middle class, and a few elites who own the planet (not unlike the direction we are already headed).
    In such a society there will likely be few if any rights for the individual, and population control by whatever means will be the rule of the day.
    Seems like a doomsday or dark-age scenario to me.

