Discussing the Future of AI
To ensure beneficial AI, we need broader insight.
Last month, as a result of the Beneficial AI conference, we released our 23 Asilomar AI Principles, which offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now it’s time to follow up with broad discussion about each individual principle.
To initiate the conversation, this month we began a weekly series that looks at each principle in depth and provides insight from various AI researchers. We encourage everyone to read these articles and offer feedback about each principle. Narrow AI already affects us all, and its impact will only increase as the technology becomes smarter and more advanced. One of the best ways to ensure AI benefits as many people as possible is to get many more voices involved in the discussion.
We’ve covered four of the Principles so far. Please read about them and share your thoughts with us!
With Beatrice Fihn and Susi Snyder
Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons. In the UN's 70-plus years, member countries have yet to agree on a treaty that completely bans nuclear weapons. The negotiations will begin this March. To discuss the importance of this event, we turned to Beatrice Fihn and Susi Snyder. Beatrice is the Executive Director of the International Campaign to Abolish Nuclear Weapons (ICAN), where she leads a global campaign of about 450 NGOs working to prohibit nuclear weapons. Susi is the Nuclear Disarmament Program Manager for PAX in the Netherlands, the principal author of the Don’t Bank on the Bomb series, and a member of ICAN’s International Steering Group. Listen to the podcast here or review the transcript here.
ICYMI: Other Popular Articles From February
What We’ve Been Up To This Month
Richard Mallah attended the Origins Project at Arizona State University, where two to three dozen AI safety experts, cybersecurity experts, AI technologists, and technology beneficence advocates came together to discuss a variety of topics related to the near-term future of AI. For each of several scenarios — covering topics ranging from confused objectives within AIs, to autonomous weapons, to fake news — teams representing chaos and order debated in exploratory, constructive dialogue.