
FLI June 2017 Newsletter

Published
July 4, 2017
Author
Revathi Kumar


Scientists Support Nuclear Ban: Video Shown at UN


Nobel Laureates and a Former Defense Secretary Among Scientists Adding Voices of Support to UN Nuke Ban

To help kick off the second round of negotiations to ban nuclear weapons, FLI presented this five-minute video to the delegates at the United Nations. The video is a follow-up to the open letter signed by over 3,700 scientists from 100 countries, all supporting a ban on nuclear weapons. It includes statements by former Secretary of Defense William Perry; Nobel laureates Edvard Moser (neuroscience), May-Britt Moser (neuroscience), Richard Roberts (biology), and Eric Kandel (neuroscience); along with Karen Hallberg (physics), David Wright (nuclear expert), Anthony Aguirre (physics), Max Tegmark (physics), Steven Pinker (psychology), Lisbeth Gronlund (nuclear expert), Christof Koch (neuroscience), Alan Robock (nuclear expert), Brian Toon (nuclear expert), and Lawrence Krauss (physics).




Support Grows for a UN Nuclear Weapons Ban

In October 2016, 123 countries voted to pursue these negotiations, and support has grown since. The US Conference of Mayors passed a resolution supporting the nuclear ban. Mayors for Peace has swelled to “7,295 cities in 162 countries and regions, with 210 U.S. members, representing in total over one billion people.” A movement by the Hibakusha has led to a petition, signed by nearly 3 million people, in support of the ban. And over 3,700 scientists from 100 countries signed an open letter in support of the ban negotiations. Here are some of the reasons why all these people and organizations support negotiations at the United Nations.



U.S. Conference of Mayors Unanimously Adopts Mayors for Peace Resolution 

“Calling on President Trump to Lower Nuclear Tensions, Prioritize Diplomacy, and Redirect Nuclear Weapons Spending to meet Human Needs and Address Environmental Challenges”

Check us out on SoundCloud and iTunes!

Podcast: Banning Nuclear and Autonomous Weapons
with Miriam Struyk and Richard Moyes

How does a weapon go from being one of the most feared to being banned? And what happens once the weapon is finally banned? Assuming the nuclear weapons ban treaty is adopted, what happens then? How can we learn from experiences with previously banned weapons to help get UN negotiations started on lethal autonomous weapons? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is currently Programs Director at PAX. She played a leading role in the campaign banning cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36. He has worked closely with the International Campaign to Abolish Nuclear Weapons (ICAN), helped found the Campaign to Stop Killer Robots, and coined the phrase “meaningful human control” regarding autonomous weapons.

This Month’s Most Popular Articles



“The future of work is now,” says Moshe Vardi. “The impact of technology on labor has become clearer and clearer by the day.”

Machines have already automated millions of routine, working-class jobs in manufacturing. And now AI is learning to automate non-routine jobs in transportation and logistics, legal writing, financial services, administrative support, and healthcare. Vardi, a computer science professor at Rice University, recognizes this trend and argues that AI poses a unique threat to human labor.




Using History to Chart the Future of AI: An Interview with Katja Grace
By Tucker Davey

The million-dollar question in AI circles is: When? When will artificial intelligence become so smart and capable that it surpasses human beings at every task? Given the unprecedented potential of AGI to create a positive or destructive future for society, many worry that humanity cannot afford to be surprised by its arrival. A surprise is not inevitable, however, and Katja Grace believes that if researchers can better understand the speed and consequences of advances in AI, society can prepare for a more beneficial outcome.


What we’ve been up to this month




Attending UN Negotiations to Ban Nuclear Weapons

Ariel Conn and Lucas Perry participated in the start of the second round of negotiations to ban nuclear weapons at the United Nations. They shared the video mentioned above and displayed the updated poster of the open letter supporting the ban negotiations, with all 3,700+ signatories and 100 flags.





Jaan Tallinn speaks with Forbes and at Starmus International

FLI cofounder Jaan Tallinn gave a speech at Starmus International Festival this month, where he discussed the AI control problem and the current state of AI safety research. His speech was discussed in a Wired article. Forbes Austria interviewed Jaan at Pioneers ’17 this month as well, and posted a video to YouTube featuring Jaan’s thoughts on AI risks and safety.





Max Tegmark speaks at EA Global

On June 3rd, Max Tegmark spoke at Effective Altruism Global at Harvard University about the risks from AI and nuclear weapons. A video of his talk can be found here.


Richard Mallah participated in the second annual meeting of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, where he co-chaired the committee on artificial general intelligence considerations and contributed to the deliberations of the committees on autonomous weapons and on operationalizing human wellbeing.

Richard also gave a talk and led a discussion on near-, intermediate-, and longer-term AI safety and beneficence considerations and approaches at the National Design Centre in Singapore.
