FLI September 2018 Newsletter

Published: 30 September 2018
Author: Revathi Kumar

$50,000 Future of Life Award Goes to Stanislav Petrov: The Man Who Saved the World

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film The Man Who Saved the World), Max Tegmark (FLI)

To celebrate the fact that September 26, 2018 was not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between the Soviet Union and the United States on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary-General Ban Ki-moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26, 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say. Read the full story here.

Also in attendance at the ceremony was Steven Mao, who helped produce the movie about Petrov, “The Man Who Saved the World,” which has just been released on Amazon, iTunes, and Google Play.

The risk that we might over-trust algorithms with nuclear weapons could grow as more AI features are added to nuclear systems, as discussed in this month’s FLI podcast. Petrov’s FLI award was also covered in Vox, the Daily Mail, Engineering 360, and The Daily Star.


European Parliament Passes Resolution Supporting a Ban on Killer Robots
By Ariel Conn

The European Parliament passed a resolution on September 12, 2018, calling for an international ban on lethal autonomous weapons systems (LAWS). The resolution was adopted with 82% of members voting in favor.

Among other things, the resolution calls on EU Member States and the European Council “to develop and adopt, as a matter of urgency … a common position on lethal autonomous weapon systems that ensures meaningful human control over the critical functions of weapon systems, including during deployment.”

The resolution also cites the many open letters signed by AI researchers and scientists from around the world calling on the UN to negotiate a ban on LAWS.

Listen: Podcast with Will MacAskill

Moral Uncertainty and the Path to AI Alignment with William MacAskill


How are we to make progress on AI alignment given moral uncertainty? What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty?

In this podcast, Lucas spoke with William MacAskill. Will is a professor of philosophy at the University of Oxford and a co-founder of the Centre for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped create the effective altruism movement, and his writing focuses mainly on issues of normative and decision-theoretic uncertainty, as well as general issues in ethics.

Topics discussed in this episode include:

  • Will’s current normative and metaethical credences
  • How we ought to practice AI alignment given moral uncertainty
  • Moral uncertainty in preference aggregation
  • Idealizing persons and their preferences
  • The most neglected portion of AI alignment

To listen to the podcast, click here, or find us on SoundCloud, iTunes, Google Play, and Stitcher.

AI and Nuclear Weapons: Trust, Accidents, and New Risks
with Paul Scharre and Mike Horowitz


On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Topics discussed in this episode include:

  • The sophisticated military robots developed by the Soviets during the Cold War
  • How technology shapes human decision-making in war
  • “Automation bias” and why having a “human in the loop” is trickier than it sounds
  • The United States’ stance on automation with nuclear weapons
  • Why weaker countries might have more incentive to build AI into warfare
  • “Deep fakes” and other ways AI could sow instability and provoke crisis
  • The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea

You can listen to the podcast here, and check us out on SoundCloud, iTunes, Google Play, and Stitcher.

AI Safety Research Highlights

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich
By Jolene Creighton

Thomas G. Dietterich, Emeritus Professor of Computer Science at Oregon State University, explains that making AI systems safe in an unpredictable world begins with ensuring that they aren’t too confident: a system should recognize when it encounters an object it has never seen before, rather than misidentify it as something it is acquainted with.


What We’ve Been Up to This Month


Max Tegmark, Lucas Perry, and Ariel Conn all helped present the Future of Life Award to Stanislav Petrov’s children.

Max Tegmark spent much of the past month giving talks about AI safety in South Korea, Japan, and China.

Ariel Conn attended the Cannes Corporate Media and TV Awards, where Slaughterbots was awarded the Gold prize. She also participated in a panel discussion about AI, jobs, and ethics at an event in Denver hosted by the Littler Law Firm.

FLI in the News


VOX: 35 years ago today, one man saved us from world-ending nuclear war

FORBES: Let’s Talk About AI Ethics; We’re On A Deadline

NEW YORK POST: How artificial intelligence will change every aspect of our lives

DAILY MAIL: Man who ‘saved the world’: Russia’s Stanislav Petrov is FINALLY given award 35 years after he recognized US ‘nuke attack’ was a false alarm and refused to retaliate

BROOKINGS: The role of corporations in addressing AI’s ethical dilemmas

ENGINEERING 360: Future of Life Institute Grants Award to Late Soviet Officer for ‘Helping Avert WWIII’


If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.

Highlighted opportunity: The Centre for the Study of Existential Risk (CSER) invites applications for an Academic Programme Manager.
