AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research

The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.

Phoenix Convention Center where AAAI 2016 is taking place.

The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was Toby Walsh’s, titled “Why the Technological Singularity May Never Happen.”

Walsh explained that, though general knowledge has increased, human capacity for learning has remained relatively consistent for a very long time. “Learning a new language is still just as hard as it’s always been,” he offered as an example. If we can’t teach ourselves how to learn faster, he argued, there’s no reason to believe that machines will be any more successful at the task.

He also argued that even if we assume intelligence can be improved, there’s no reason to assume it will increase exponentially and lead to an intelligence explosion. He believes it is just as possible that each generation of machines will gain only half as much intelligence as the one before it; intelligence would still increase, but like the series 1 + 1/2 + 1/4 + …, it would converge to a finite limit rather than explode.

Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk with concerns about both weapons and job loss.

Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”

Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”

As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can occur without an intelligence explosion.

The afternoon talks were all dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics ranged from ensuring value alignment between humans and AI to safety constraints and security evaluation, among much else.

While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research related to image recognition software that could potentially be used more immediately.

Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a human-imperceptible way so that the software identifies it incorrectly. Current methods rely on machines learning from huge quantities of reference images, yet even the smallest perturbation of the input can lead to large errors. Li’s own research looks at more robust ways for machines to recognize an image, thus limiting those errors.
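The kind of perturbation attack described above can be illustrated on a toy linear classifier. The sketch below is a generic gradient-sign-style example, not Li's actual method, and every name in it is invented for illustration:

```python
import numpy as np

# Toy linear "image classifier": label = sign(w . x).
rng = np.random.default_rng(0)
d = 10_000                       # number of "pixels"
w = rng.choice([-1.0, 1.0], d)   # fixed classifier weights

x = rng.normal(size=d)           # a "clean image"
score = w @ x

# Adversarial perturbation: nudge every pixel by a tiny eps in the
# direction that lowers the score (exact for a linear model; the same
# idea underlies gradient-sign attacks on neural networks).
eps = abs(score) / d + 1e-6      # just enough to cross the boundary
x_adv = x - eps * np.sign(score) * np.sign(w)

assert np.sign(w @ x_adv) != np.sign(w @ x)  # the label flips
print(f"per-pixel change: {eps:.4f}")        # imperceptibly small
```

In this linear setting the required per-pixel change shrinks roughly like 1/√d as the input grows, which is one heuristic reason high-dimensional image models are so vulnerable to imperceptible perturbations.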

Rubinstein’s focus is geared more toward security. The research he presented at the workshop is similar to facial recognition but goes a step further, examining how small changes made to one face can lead systems to confuse the image with that of someone else.

Fuxin Li

Ben Rubinstein


Future of beneficial AI research panel: Francesca Rossi, Nate Soares, Tom Dietterich, Roman Yampolskiy, Stefano Ermon, Vincent Conitzer, and Benjamin Rubinstein.

The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI president, Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who also spoke earlier in the day.

Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?

Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.

Francesca Rossi and Nate Soares

Tom Dietterich and Roman Yampolskiy

After an hour of discussion, many suggestions for new research had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.

We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”

Congratulations to Peter Norvig and Stuart Russell!

1 reply
  1. Cheng-Zhong Su says:

    How to Calculate Thinking Speed?
    Recently, more and more people have worried that artificial intelligence will overtake human wisdom. All these discussions focus on one issue: how to compare the speed of the two kinds of operation. For a computer, we know how to calculate its speed, but for the human mind, no one has ever proposed a method. Now let me show you mine. It may need some revision and supplementation, and I hope you can help me. I hope it will lead to a deeper discussion, not just emotional words.
    Suppose there is a spoken language with just two sounds, A and B (phones, not syllables; we may regard a syllable as a combination of sounds). It can express the world as well as any language (similar to a 0/1 language), yet its expressing speed is too slow. For instance, Tom uses this AB language while Jack uses a natural language with 400 sounds. Supposing there are only 400 things that need to be named, Tom sometimes has to use 9 sounds (such as ABBABBAAB) to express one thing, while Jack uses 1 sound to express the same thing.
    Since each sound costs the same amount of time, the difference is that over a whole life Tom can only enjoy 1/9 of Jack’s information (including speaking and hearing). Or we may say that the AB-language user would need nine lives to get the information that a natural-language user gets in one.
    Yet the number of sounds varies across the world’s languages, so just by counting the sounds in each language we can see their different efficiencies. In other words, the world’s languages are not equal.
    Since thinking is a sort of ‘speaking in the mind,’ the speed of speaking reflects the speed of thinking, and we may roughly calculate thinking speed this way; of course, a parameter may need to be introduced.
    This idea tells us that the development of any language is in fact a search for more distinguishable sounds. Since all languages move toward this target, we may, with good guidance, unite them into one language in the future.
    We always worry about the fast development of computers, yet nobody notices that the speed of the human mind is developing too.
    The exponential function tells us that in a computer the base is fixed at 2 (0 and 1), so the only way to accelerate is to increase the exponent: the number of operations per second. A human being can utter only 5 to 6 audible sounds per second, so for us the exponent is fixed, and the only way to accelerate communication is to increase the base, that is, the number of distinct sounds. In a computer, every operation means choosing 1 from 2; in a human language with 400 sounds, every operation is choosing 1 from 400. Comparing the sounds in use across various languages gives some sense of this.
    Ancient Phoenician used 22 sounds, Japanese uses around 100 sounds, English around 400, and Mandarin Chinese 1,186.
    When I list this, many people ask two questions. First, how can Mandarin have so many sounds? The answer is easy: it is a tonal language. Think of singing a song: you can utter each syllable at eight or sixteen different musical notes, and tone likewise makes phonetic differences in speech. Second, does Mandarin Chinese then have a faster thinking speed than English? No. Unlike in a computer, thinking speed and memory in the human mind form a dynamic balance. A high-speed language can trade some speed for a larger memory, while a low-speed language cannot. Chinese has traded its speed for memory: today English uses a million words to express the world, yet Chinese uses three thousand characters to express the same world.
    How is expressing speed transformed into memory? To answer this we must explain what a sound is. A sound is not a syllable: a sound makes only one peak in the ‘voice memo’ display of your phone, while a syllable may make more than one. Try the syllable ‘sprint’; it will show three peaks. For an English speaker, uttering 5 sounds per second means an expressing speed of 400 to the 5th power per second, quite a big figure. Since every language gradually takes in more kinds of sounds, thinking speed increases too. Normally a sound is made of either a consonant, a vowel, and a tone, or a vowel and a tone; in any case, it must have a vowel and a tone.
    Now we explain how expressing speed can transform into memory. Take the word ‘alto,’ annotated in the dictionary as “lowest female voice.” Why don’t we use that phrase to stand for the meaning of ‘alto’ instead of creating a dictionary entry to explain it? The only reason is that the phrase costs too many sounds: lo-we-s-t-fe-ma-le-voi-ce, nine sounds, while a-l-to costs three; the gap is six sounds. If someone in a theater used this meaning 100 times per day, it would cost him 600 extra sounds. But in Mandarin Chinese, they do use the phrase to stand for this word: each of the three words of “lowest female voice” is expressed in one sound, as ‘nu gao yin.’ In that way expressing speed transforms into memory. Or we may say they don’t need to know a word like ‘alto,’ yet they can express and understand its meaning just as well, much as a computer calls a subroutine.
    When humans learn a word, they are not like a computer, which remembers it immediately. People understand a word by meeting it repeatedly, building an impression step by step. Between two encounters many things may happen to a person, so meeting the same word a second time recalls a different feeling than the first. This phenomenon gives rise to human imagination and inspiration. Such a process is unlikely to happen in a computer, and to this day a computer has not even one word like this in its mind.
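For what it's worth, the comment's central arithmetic does hold up as a simple information-rate calculation. Here is a minimal check, taking the commenter's figures (400 sounds, a 2-sound AB language) as given:

```python
import math

# A two-sound "AB language" needs ceil(log2(400)) sounds to name one
# of 400 things -- the comment's 9-sound "ABBABBAAB" example.
items = 400
ab_sounds_per_item = math.ceil(math.log2(items))     # 9

# Information per sound, in bits, is log2 of the sound-inventory size.
bits_per_ab_sound = math.log2(2)          # 1.0
bits_per_natural_sound = math.log2(400)   # about 8.64

# At the same rate of sounds per second, the two speakers' information
# rates differ by this factor -- close to the comment's 1/9 estimate.
rate_ratio = bits_per_natural_sound / bits_per_ab_sound

print(ab_sounds_per_item, round(rate_ratio, 2))
```

The exact ratio is log2(400) ≈ 8.64 rather than 9, because 9 binary sounds per item is a whole-number code length rounded up; the "nine lives" figure is the integer-coding version of the same quantity.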

Comments are closed.