
FLI March 2019 Newsletter

Published: 9 April 2019
Author: Revathi Kumar

Lethal Autonomous Weapons: Video, Editorial, & Global Health Open Letter

Video: Why We Should Ban Lethal Autonomous Weapons

Some of the world’s leading AI researchers came together in a short video to explain why they support a ban on lethal autonomous weapons. Yoshua Bengio, Meia Chita-Tegmark, Carla Gomes, Laura Nolan, Stuart Russell, Bart Selman, Toby Walsh, and Meredith Whittaker discuss why we can’t allow lethal autonomous weapons to be developed. Special thanks to Joseph Gordon-Levitt for narrating the video!

Editorial: It’s Not Too Late to Stop This New and Potentially Catastrophic Force


“Preventing harm is a key principle of all medical endeavour and an essential area of expertise for all healthcare professionals. The medical community has a history of successful advocacy for weapons bans, is well equipped to detail the humanitarian effects of weapon use, understands the dangers associated with automation, and is experienced in promoting prevention. As we continue to work towards the elimination of nuclear weapons, we must also support efforts to publicise the potentially catastrophic humanitarian consequences of autonomous weapons and help ensure that the full automation of lethal harm is prevented for ever.”

This editorial, co-authored by Emilia Javorsky of Scientists Against Inhumane Weapons, Ira Helfand of International Physicians for the Prevention of Nuclear War, and Max Tegmark of FLI, was published in the BMJ, a leading peer-reviewed medical journal.

Open Letter: From the Global Health Community


“As healthcare professionals, we believe that breakthroughs in science have tremendous potential to benefit society and should not be used to automate harm. We therefore call for an international ban on lethal autonomous weapons.”

Health professionals from around the world are expressing their support for a ban on lethal autonomous weapons by signing our open letter. Add your signature here.

FLI Podcast: Why Ban Lethal Autonomous Weapons?

Why are we so concerned about lethal autonomous weapons? Ariel spoke with four experts: a physician, a lawyer, and two human rights specialists. Each offered their most powerful arguments for why the world must ensure that algorithms are never allowed to make the decision to take a life. The episode was even recorded at the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.

We’ve compiled their arguments, along with many of our own, and now we want to turn the discussion over to you. We’ve set up a comments section on the podcast page, and we want to know: Which argument(s) do you find most compelling? Why?

From the UN Convention on Conventional Weapons

Women for the Future


This Women’s History Month, FLI has been celebrating with Women for the Future, a campaign to honor the women who’ve made it their job to create a better world for us all. The field of existential risk mitigation is largely male-dominated, so we wanted to emphasize the value and necessity of female voices in the field. We profiled 34 women we admire and got their takes on what they love (and don’t love) about their jobs, what advice they’d give women starting out in their fields, and what makes them hopeful for the future.

These women do all sorts of things. They are researchers, analysts, professors, directors, founders, students. One is a state senator; one is a professional poker player; two are recipients of the Nobel Peace Prize. They work on AI, climate change, robotics, disarmament, human rights, and more. What ultimately brings them together is a shared commitment to the future of humanity. Learn more about the campaign here.

More March Highlights

In the latest episode of our AI Alignment podcast series, host Lucas Perry is joined by Geoffrey Irving, a member of OpenAI’s AI safety team. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and as part of their scalability efforts (how to train/evolve systems to safely solve questions of increasing complexity). Irving discusses the properties of debate and its synergies with machine learning that may make it a powerful truth-seeking process on the path to beneficial AGI.

Though today’s AI systems can improve their own abilities in limited ways, any fundamental changes still require human input. But it is theoretically possible to create an AI system that is capable of true self-improvement, and many researchers believe this could be a path to artificial general intelligence (AGI). Ramana Kumar, an AGI safety researcher at DeepMind, explains what it means for an AI to truly self-improve, and why self-improving AI might be the key to developing AGI.

Self-improving AI could aid in the development of AGI, but it would also raise problems of its own. A self-improving AI system can be viewed as two distinct agents: the “parent” agent and the “child” agent into which the parent self-modifies. To ensure the safety of such a system, it is necessary to ensure the safety of every possible child agent that might originate from the parent.  Ramana Kumar breaks down this problem and discusses potential solutions.
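As a rough illustration of that framing, here is a minimal, purely hypothetical Python sketch; the Agent class, improve() step, and is_safe() check are placeholders invented for this example, not anything from the article or from DeepMind. The idea it captures is that the parent only adopts a proposed self-modification if it can first verify that the resulting child still satisfies the safety property.

# Hypothetical sketch of the parent/child framing described above; not code from the article.

def improve(policy):
    # Placeholder for whatever self-modification the parent proposes.
    return policy + 1

def is_safe(policy):
    # Placeholder for verifying the safety property the parent must preserve.
    return policy < 10

class Agent:
    def __init__(self, policy):
        self.policy = policy

    def self_improve(self):
        # Construct a candidate child, but only hand over control if the
        # child's safety can be established before the modification is made.
        child = Agent(improve(self.policy))
        return child if is_safe(child.policy) else self

parent = Agent(policy=0)
successor = parent.self_improve()  # a verified child, or the unchanged parent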

What We’ve Been Up to This Month


Ariel Conn participated in a panel and hosted a workshop at the Campaign to Stop Killer Robots campaigners’ meeting. She attended the UN CCW meeting on lethal autonomous weapons, where she presented FLI’s lethal autonomous weapons video (watch above!). She also presented a statement to the UN in favor of a ban on lethal autonomous weapons.

Anthony Aguirre, Ariel Conn, Max Tegmark, Meia Chita-Tegmark, Lucas Perry, and Tucker Davey all participated in the Augmented Intelligence Summit, an event co-sponsored by FLI.

FLI in the News


FORBES: The Growing Marketplace for AI Ethics

We’re new to Instagram! Please check out our profile and give us a follow: @futureoflifeinstitute
