FLI June, 2017 Newsletter

Scientists Support Nuclear Ban: Video Shown at UN

Nobel Laureates and a Former Defense Secretary Among Scientists Adding Voices of Support to UN Nuke Ban

To help kick off the second round of negotiations to ban nuclear weapons, FLI presented this five-minute video to the delegates at the United Nations. The video is a follow-up to the open letter, signed by over 3,700 scientists from 100 countries, supporting the ban on nuclear weapons. Included in the video are statements by former Secretary of Defense William Perry, Nobel laureates Edvard Moser (neuroscience), May-Britt Moser (neuroscience), Richard Roberts (biology) and Eric Kandel (neuroscience), along with Karen Hallberg (physics), David Wright (nuclear expert), Anthony Aguirre (physics), Max Tegmark (physics), Steven Pinker (psychology), Lisbeth Gronlund (nuclear expert), Christof Koch (neuroscience), Alan Robock (nuclear expert), Brian Toon (nuclear expert), and Lawrence Krauss (physics).

Support Grows for a UN Nuclear Weapons Ban

The US Conference of Mayors passed a resolution supporting the nuclear ban. In October 2016, 123 countries voted to pursue these negotiations. Mayors for Peace has swelled to “7,295 cities in 162 countries and regions, with 210 U.S. members, representing in total over one billion people.” A movement by the Hibakusha has led to a petition signed by nearly 3 million people in support of the ban. And over 3,700 scientists from 100 countries signed an open letter in support of the ban negotiations. Here are some of the reasons why all these people and organizations support negotiations at the United Nations.

U.S. Conference of Mayors Unanimously Adopts Mayors for Peace Resolution 

“Calling on President Trump to Lower Nuclear Tensions, Prioritize Diplomacy, and Redirect Nuclear Weapons Spending to meet Human Needs and Address Environmental Challenges”

Check us out on SoundCloud and iTunes!

Podcast: Banning Nuclear and Autonomous Weapons

with Miriam Struyk and Richard Moyes

How does a weapon go from being one of the most feared to being banned? And what happens once the weapon is finally banned? Assuming the nuclear weapons ban treaty is adopted, what happens then? How can we learn from experiences with previously banned weapons to try to get UN negotiations started on lethal autonomous weapons? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is currently Programs Director at PAX. She played a leading role in the campaign banning cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36. He has worked closely with the International Campaign to Abolish Nuclear Weapons (ICAN), he helped found the Campaign to Stop Killer Robots, and he coined the phrase “meaningful human control” regarding autonomous weapons.

This Month’s Most Popular Articles

“The future of work is now,” says Moshe Vardi. “The impact of technology on labor has become clearer and clearer by the day.”

Machines have already automated millions of routine, working-class jobs in manufacturing. And now, AI is learning to automate non-routine jobs in transportation and logistics, legal writing, financial services, administrative support, and healthcare. Vardi, a computer science professor at Rice University, recognizes this trend and argues that AI poses a unique threat to human labor.

Using History to Chart the Future of AI: An Interview with Katja Grace
By Tucker Davey

The million-dollar question in AI circles is: When? When will artificial intelligence become so smart and capable that it surpasses human beings at every task? Given the unprecedented potential of AGI to create a positive or destructive future for society, many worry that humanity cannot afford to be surprised by its arrival. A surprise is not inevitable, however, and Katja Grace believes that if researchers can better understand the speed and consequences of advances in AI, society can prepare for a more beneficial outcome.

What we’ve been up to this month

Attending UN Negotiations to Ban Nuclear Weapons

Ariel Conn and Lucas Perry participated in the start of the second round of negotiations to ban nuclear weapons at the United Nations. They shared the video mentioned above, and also displayed the updated poster of the open letter supporting the ban negotiations, with all 3,700+ signatories and 100 flags.

Jaan Tallinn speaks with Forbes and at Starmus International

FLI cofounder Jaan Tallinn gave a speech at Starmus International Festival this month, where he discussed the AI control problem and the current state of AI safety research. His speech was discussed in a Wired article. Forbes Austria interviewed Jaan at Pioneers ’17 this month as well, and posted a video to YouTube featuring Jaan’s thoughts on AI risks and safety.

Max Tegmark speaks at EA Global

On June 3rd, Max Tegmark spoke at Effective Altruism Global at Harvard University about the risks from AI and nuclear weapons. A video of his talk can be found here.

Richard Mallah participated in the second annual meeting of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, co-chairing the committee on artificial general intelligence considerations, and separately contributing to the deliberations of the committees on autonomous weapons and on operationalizing human wellbeing.

Richard also gave a talk and led a discussion on nearer-, intermediate-, and longer-term AI safety and beneficence considerations and approaches at the National Design Centre in Singapore.