A Look Back at 2017 and the Next Step for AI Safety
Advancing AI safety lies at the core of FLI’s mission, and 2017 was an exceptional year for AI safety. We hosted our Beneficial AI conference in Asilomar, Max Tegmark released his new book, Life 3.0, and FLI worked with various nonprofits and government agencies to push for a ban on lethal autonomous weapons.
Beyond these major events, our AI grant recipients worked tirelessly throughout the year, addressing technical safety problems, organizing interdisciplinary workshops on AI, and publishing papers and news articles about their research.
Now, FLI seeks to expand on that first successful round of grants with a new grant round in 2018!
For many years, artificial intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant recent success and great future promise. This recent success has raised an important question: how can we ensure that the growing power of AI is matched by the growing wisdom with which we manage it?
The focus of this RFP is on technical research and other projects that enable the development of AI that is beneficial to society and robust in the sense that its benefits come with some guarantees: our AI systems must do what we want them to do.
If you’re interested in applying for a grant, or you know someone who is, please follow this link.
FLI’s Biggest Events of 2017
Our Beneficial AI conference in Asilomar produced a set of 23 Principles to guide the development of safe and beneficial AI. The Principles have been signed and supported by over 1,200 AI researchers and 2,500 others, including Elon Musk and Stephen Hawking.
But the Principles were just the start of the conversation. After the conference, Ariel Conn began a series that looks at each principle in depth, with insights from various AI researchers. Artificial intelligence will affect people across every segment of society, so we want as many people as possible to get involved. To date, tens of thousands of people have read these articles. You can read them all here and join the discussion!
FLI also helped develop a film to encourage the UN to begin negotiations for a ban on lethal autonomous weapons and to ensure that these weapons don’t lead to a destabilizing arms race. We previously released two open letters calling for negotiations: over 3,700 AI researchers and 20,000 others signed the first letter in 2015, while the second, released this summer, was signed by leaders of top robotics companies around the world. Founders and CEOs of about 100 companies from 26 countries signed this second letter, which warns:
“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
FLI also supported the UN negotiations to ban nuclear weapons. At the initial negotiations in March, FLI presented a letter of support for the ban, signed by 3,700 scientists from 100 countries, including 30 Nobel Laureates, Stephen Hawking, and former US Secretary of Defense William Perry. And in June, FLI presented a five-minute video to the UN delegates featuring statements from signatories.
“Scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought,” the letter explains.
Formalizing a treaty to ban nuclear weapons is a big step toward a safer world, and in December, ICAN (the International Campaign to Abolish Nuclear Weapons) was awarded the Nobel Peace Prize for its work spearheading this effort.
AI will impact everyone, and Life 3.0 aims to expand the conversation around AI to include all people so that we can create a truly beneficial future. This page features answers from readers who have taken the survey that accompanies Max’s book. To join the conversation yourself, please take the survey at ageofai.org.
What We’ve Been Up To
Lucas Perry, Meia Chita-Tegmark, and Max Tegmark organized a one-day workshop with the Berggruen Institute and CIFAR on the Ethics of Value Alignment right after NIPS, where AI researchers, philosophers, and other thought leaders brainstormed promising research directions. For example, if the technical value-alignment problem can be solved, what values should AI be aligned with, and through what process should those values be selected?
Richard Mallah participated in the Ethics of Value Alignment workshop that FLI co-organized in Long Beach, CA. He also gave a talk on AI safety entitled “Towards Robustness Criteria for Highly Capable AI” at the Q4 Boston Machine Intelligence Dinner hosted by Talla, attended by 50+ senior AI researchers and entrepreneurs. Richard additionally led a discussion group on AI safety at the rationalist hub Macroscope in Montreal.
Jessica Cussins gave a talk on AI policy at Tencent in Palo Alto to a group that included Tencent researchers and legal scholars from China. She also gave public comment at a San Mateo County Board of Supervisors meeting in support of a resolution calling on the United Nations to develop an international agreement restricting the development and use of lethal autonomous weapons.
Ariel Conn participated in the first discussion group of the N Square Innovators Network, which brings together people from diverse backgrounds to develop new methods and tools for addressing the nuclear threat and to raise public awareness of the issue.
Viktoriya Krakovna spoke at the NIPS conference on “Interpretability for AI Safety” and “Reinforcement Learning with a Corrupted Reward Channel.” NIPS (Neural Information Processing Systems) is the biggest AI conference of the year, and over 8,000 researchers attended this month’s event.