FLI February 2017 Newsletter
Discussing the Future of AI
To ensure beneficial AI, we need broader insight.
Last month, as a result of the Beneficial AI conference, we released our 23 Asilomar AI Principles, which offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now it’s time to follow up with broad discussion about each individual principle.
To initiate the conversation, this month we began a weekly series that looks at each principle in depth and provides insight from various AI researchers. We encourage everyone to read these articles and offer feedback on each principle. Narrow AI already affects us all, and its impact will only increase as the technology becomes smarter and more advanced. One of the best ways to ensure AI benefits as many people as possible is to get many more voices involved in the discussion.
We’ve covered four of the Principles so far. Please read about them and share your thoughts with us!
Value Alignment Principle: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
Personal Privacy Principle: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
Don’t forget to follow us on SoundCloud and iTunes!
Podcast: Negotiating a Nuclear Weapons Ban at the UN
With Beatrice Fihn and Susi Snyder
Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons, something member states have yet to agree on in the UN’s more than 70 years of existence. The negotiations will begin this March. To discuss the importance of this event, we turned to Beatrice Fihn and Susi Snyder. Beatrice is the Executive Director of the International Campaign to Abolish Nuclear Weapons (ICAN), where she leads a global campaign of about 450 NGOs working to prohibit nuclear weapons. Susi is the Nuclear Disarmament Program Manager for PAX in the Netherlands, the principal author of the Don’t Bank on the Bomb series, and a member of ICAN’s International Steering Group. Listen to the podcast here or review the transcript here.
ICYMI: Other Popular Articles From February
Michael Wellman studies AI’s threats to the financial system. He explains, “The financial system is one of the leading edges of where AI is automating things, and it’s also an especially vulnerable sector. It can be easily disrupted, and bad things can happen.”
Fuxin Li and his team are working to improve the accuracy of neural networks under adversarial conditions. Their research focuses on the basic machine learning aspects of deep learning, and how to make general deep learning more robust.
What We’ve Been Up To This Month
Nearly all of our videos from the Asilomar BAI conference are now on YouTube, and you can also find them all in one location on our site. Speakers and panelists include Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Sam Altman, Eric Schmidt, Francesca Rossi, Anca Dragan, and many, many more.
Richard Mallah attended the Origins Project at Arizona State University, where a group of two to three dozen AI safety experts, cybersecurity experts, AI technologists, and technology beneficence advocates came together to discuss a variety of topics related to the near-term future of AI. For each of several scenarios, covering topics ranging from confused objectives within AIs to autonomous weapons to fake news, teams representing chaos and order engaged in exploratory, constructive debate.