
AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research

Published: February 17, 2016
Author: Ariel Conn


The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.

The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was Toby Walsh’s talk, “Why the Technological Singularity May Never Happen.”

Walsh explained that, though general knowledge has increased, the human capacity for learning has remained relatively consistent for a very long time. “Learning a new language is still just as hard as it’s always been,” he said by way of example. If we can’t teach ourselves how to learn faster, he argued, there is no reason to believe that machines will be any more successful at the task.

He also argued that even if we assume intelligence can be improved, there is no reason to assume it will increase exponentially and lead to an intelligence explosion. He believes it is just as possible that each generation of machines will gain only half as much as the generation before: intelligence would still increase, but the gains would shrink with each step, and the total improvement would remain bounded rather than explosive.
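
As a simple illustration of that scenario (our arithmetic, not a formula from Walsh’s talk): if the first round of self-improvement adds a gain g, and every later generation adds half of the previous gain, the total improvement never exceeds twice the initial gain.

    % Illustration only (not from the talk): halving gains form a geometric
    % series whose sum is finite, so the improvement is bounded, not explosive.
    \[
      g + \tfrac{g}{2} + \tfrac{g}{4} + \tfrac{g}{8} + \cdots
        = \sum_{k=0}^{\infty} \frac{g}{2^{k}} = 2g
    \]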

Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk by raising worries about both autonomous weapons and job loss.

Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”

Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”

As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can arise even without an intelligence explosion.

The afternoon talks were all dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics included ensuring value alignment between humans and AI, safety constraints, security evaluation, and much more.

While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research on image recognition software that could see more immediate use.

Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a way imperceptible to humans so that the software misidentifies it. Current methods rely on machines accessing huge quantities of reference images to learn what any given image is, yet even the smallest perturbation of the input can lead to large errors. Li’s own research looks at new ways for machines to recognize an image that limit such errors.
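
To give a sense of the kind of vulnerability Li described, here is a toy sketch of a generic perturbation attack on a simple linear classifier (an assumption for illustration, not Li’s research or code): nudging every pixel by a barely visible amount in a chosen direction swings the classifier’s decision score far more than any individual pixel changes.

    # Toy illustration of an adversarial perturbation (not Li's method):
    # a tiny change to every pixel can swing a simple linear classifier's
    # decision far more than any single pixel value moves.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=10_000)    # weights of a toy linear classifier
    x = rng.normal(size=10_000)    # a "clean image", flattened to a vector
    epsilon = 0.05                 # per-pixel change, small next to pixel values ~1

    score = float(w @ x)           # decision score on the clean input
    x_adv = x - epsilon * np.sign(w) * np.sign(score)  # push each pixel against the decision

    print("clean score:     ", score)
    print("perturbed score: ", float(w @ x_adv))               # almost certainly flips sign here
    print("max pixel change:", float(np.abs(x_adv - x).max())) # exactly epsilon

The same basic arithmetic is what makes human-imperceptible attacks possible: many tiny, coordinated pixel changes add up to a large change in the model’s output.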

Rubinstein’s focus is geared more toward security. The research he presented at the workshop involves facial recognition, but goes a step further, examining how small changes to an image of one face can lead a system to confuse it with that of someone else.

Fuxin Li

Ben Rubinstein

Future of beneficial AI research panel: Francesca Rossi, Nate Soares, Tom Dietterich, Roman Yampolskiy, Stefano Ermon, Vincent Conitzer, and Benjamin Rubinstein.

The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI president Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who had also spoken earlier in the day.

Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?

Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.


Francesca Rossi and Nate Soares


Tom Dietterich and Roman Yampolskiy

After an hour of discussion, many new research ideas had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.

We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”


Congratulations to Peter Norvig and Stuart Russell!

This content was first published at futureoflife.org on February 17, 2016.

