
X-risk News of the Week: AAAI, Beneficial AI Research, a $5M Contest, and Nuclear Risks

Published: February 19, 2016
Author: Ariel Conn


X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity.
X-hope = Existential Hope. The hope that we will all flourish and live happily ever after.

The highlights of this week’s news are all about research. And as is so often the case, research brings hope. Research can help us cure disease, solve global crises, find cost-effective solutions to any number of problems, and so on. The research news this week gives hope that we can continue to keep AI beneficial.

First up this week was the AAAI conference. As was mentioned in an earlier post, FLI participated in the AAAI workshop on AI, Ethics, and Safety. Eleven of our grant winners presented their research to date in an afternoon of talks and discussion focused on building ethics into AI systems, putting safety constraints in place, understanding how and when things could go wrong, ensuring value alignment between humans and AI, and much more. There was also a lively panel discussion about new ideas for future AI research that could help ensure AI remains safe and beneficial.

The next day, AAAI President Tom Dietterich (also an FLI grant recipient) delivered his presidential address, focusing on enabling more research into robust AI. He began with a Marvin Minsky quote, in which Minsky explained that when a computer encounters an error, it fails, whereas when the human brain encounters an error, it tries another approach. With that example, Dietterich launched into his speech about the importance of robust AI and of ensuring that an AI can address the various known and unknown problems it may encounter. While discussing areas in which AI development is controversial, he also made a point of mentioning his opposition to autonomous weapons, saying, “I share the concerns of many people that I think the development of autonomous offensive weapons, without a human in the loop, is a step that we should not take.”

AAAI also hosted a panel this week on the economic impact of AI, which included FLI Scientific Advisory Board members Nick Bostrom and Erik Brynjolfsson, as well as an unexpected appearance by FLI President Max Tegmark. As is typical of such discussions, there was a lot of concern about the future of jobs and how average workers will continue to make a living. However, TechRepublic noted that both Bostrom and Tegmark are hopeful that, if we plan appropriately, increased automation could greatly improve our standard of living. As TechRepublic reported:

“’Perhaps,’ Bostrom said, ‘we should strive for things outside the economic systems.’ Tegmark agreed. ‘Maybe we need to let go of the obsession that we all need jobs.’”

Also this week, IBM and the X Prize Foundation announced a $5 million collaboration, in which IBM is encouraging developers and researchers to use Watson as the base for creating “jaw-dropping, awe-inspiring” new technologies that will be presented during TED2020. There will be interim prizes for projects leading up to that event, while the final award will be presented after the TED2020 talks. As they explain on the X Prize page:

“IBM believes this competition can accelerate the creation of landmark breakthroughs that deliver new, positive impacts to people’s lives, and the transformation of industries and professions.

“We believe that cognitive technologies like Watson represent an entirely new era of computing, and that we are forging a new partnership between humans and technology that will enable us to address many of humanity’s most significant challenges — from climate change, to education, to healthcare.”

Of course, not all news can be good news, and so the week’s highlights end with a reminder about the increasing threat of nuclear weapons. Last week, the Union of Concerned Scientists published a worrisome report on the growing risk of nuclear war. Among other things, the report considers the deteriorating relationship between Russia and the U.S., as well as the possibility that China may soon implement a hair-trigger-alert policy for its own nuclear missiles.

David Wright, co-director of the UCS Global Security Program, recently wrote a blog post about the report. Referring first to the U.S.-Russia concern and then to the Chinese nuclear policy, he wrote:

“A state of heightened tension changes the context of a false alarm, should one occur, and tends to increase the chance that the warning will be seen as real. […] Should China’s political leaders agree with this change, it would be a dangerous shift that would increase the chance of an accidental or mistaken launch at the United States.”

Update: Another FLI grant winner, Dr. Wendell Wallach, made news this week for his talk at the American Association for the Advancement of Science (AAAS) meeting, in which he put forth a compromise for addressing the issue of autonomous weapons. According to Defense One, Wallach laid out three ideas:

“1) An executive order from the president proclaiming that lethal autonomous weapons constitute a violation of existing international humanitarian law.”

“2) Create an oversight and governance coordinating committee for AI.”

“3) Direct 10 percent of the funding in artificial intelligence to studying, shaping, managing and helping people adapt to the ‘societal impacts of intelligent machines.’”

This content was first published at futureoflife.org on February 19, 2016.

