FLI March 2019 Newsletter
Lethal Autonomous Weapons: Video, Editorial, & Global Health Open Letter
Video: Why We Should Ban Lethal Autonomous Weapons
Editorial: It’s Not Too Late to Stop This New and Potentially Catastrophic Force
“Preventing harm is a key principle of all medical endeavour and an essential area of expertise for all healthcare professionals. The medical community has a history of successful advocacy for weapons bans, is well equipped to detail the humanitarian effects of weapon use, understands the dangers associated with automation, and is experienced in promoting prevention. As we continue to work towards the elimination of nuclear weapons, we must also support efforts to publicise the potentially catastrophic humanitarian consequences of autonomous weapons and help ensure that the full automation of lethal harm is prevented for ever.”
This editorial, co-authored by Emilia Javorsky of Scientists Against Inhumane Weapons, Ira Helfand of International Physicians for the Prevention of Nuclear War, and Max Tegmark of FLI, was published in the BMJ, a leading peer-reviewed medical journal.
Open Letter: From the Global Health Community
“As healthcare professionals, we believe that breakthroughs in science have tremendous potential to benefit society and should not be used to automate harm. We therefore call for an international ban on lethal autonomous weapons.”
Health professionals from around the world are expressing their support for a ban on lethal autonomous weapons by signing our open letter. Add your signature here.
FLI Podcast: Why Ban Lethal Autonomous Weapons?
We’ve compiled the arguments for a ban, along with many of our own, and now we want to turn the discussion over to you. We’ve set up a comments section on the podcast page, and we want to know: Which argument(s) do you find most compelling? Why?
From the UN Convention on Conventional Weapons
Women for the Future
This Women’s History Month, FLI has been celebrating with Women for the Future, a campaign to honor the women who’ve made it their job to create a better world for us all. The field of existential risk mitigation is largely male-dominated, so we wanted to emphasize the value, and the necessity, of female voices in our industry. We profiled 34 women we admire, and got their takes on what they love (and don’t love) about their jobs, what advice they’d give women starting out in their fields, and what makes them hopeful for the future.
These women do all sorts of things. They are researchers, analysts, professors, directors, founders, students. One is a state senator; one is a professional poker player; two are recipients of the Nobel Peace Prize. They work on AI, climate change, robotics, disarmament, human rights, and more. What ultimately brings them together is a shared commitment to the future of humanity. Learn more about the campaign here.
More February Highlights
In the latest episode of our AI Alignment series, host Lucas Perry is joined by Geoffrey Irving, a member of OpenAI’s AI safety team. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and as part of their scalability efforts (how to train and evolve systems that safely solve questions of increasing complexity). Irving discusses properties of debate and synergies with machine learning that may make it a powerful truth-seeking process on the path to beneficial AGI.
Though today’s AI systems can improve their own abilities in limited ways, any fundamental changes still require human input. But it is theoretically possible to create an AI system that is capable of true self-improvement, and many researchers believe this could be a path to artificial general intelligence (AGI). Ramana Kumar, an AGI safety researcher at DeepMind, explains what it means for an AI to truly self-improve, and why self-improving AI might be the key to developing AGI.
Self-improving AI could aid in the development of AGI, but it would also raise problems of its own. A self-improving AI system can be viewed as two distinct agents: the “parent” agent and the “child” agent into which the parent self-modifies. In order to ensure the safety of such a system, it is necessary to ensure the safety of every possible child agent that might originate from the parent. Ramana Kumar breaks down this problem and discusses potential solutions.
What We’ve Been Up to This Month
Ariel Conn participated in a panel and hosted a workshop at the Campaign to Stop Killer Robots campaigners’ meeting. She attended the UN CCW meeting on lethal autonomous weapons, where she presented FLI’s lethal autonomous weapons video (watch above!). She also presented a statement to the UN in favor of a ban on lethal autonomous weapons.
Anthony Aguirre, Ariel Conn, Max Tegmark, Meia Chita-Tegmark, Lucas Perry, and Tucker Davey all participated in the Augmented Intelligence Summit, an event co-sponsored by FLI.
FLI in the News
FORBES: The Growing Marketplace for AI Ethics
We’re new to Instagram! Please check out our profile and give us a follow: @futureoflifeinstitute