
FLI August, 2020 Newsletter

Published
August 5, 2020
Author
Anna Yelizarova


Lethal Autonomous Weapons Systems, Nuclear Testing & More

New Resource: Lethal Autonomous Weapons Systems


Described as the third revolution in warfare after gunpowder and nuclear weapons, lethal autonomous weapons are systems that can identify, select, and engage a target without meaningful human control. Many semi-autonomous weapons in use today rely on autonomy for certain functions but maintain a communication link to a human who approves or makes critical decisions. In contrast, a fully autonomous system could be deployed without any established communication network and would independently respond to a changing environment and decide how to achieve its pre-programmed goals. The ethical, political, and legal debate underway centers on autonomy in the use of force and the decision to take a human life.

Lethal AWS may create a paradigm shift in how we wage war. They would allow highly lethal systems to be deployed on the battlefield that cannot be controlled or recalled once launched. Unlike any weapon seen before, they could also allow for the selective targeting of a particular group based on parameters like age, gender, ethnicity, or political leaning (if such information were available). Because lethal AWS would greatly decrease personnel costs and could be easy to obtain at low cost (as in the case of small drones), small groups of people could potentially inflict disproportionate harm, making lethal AWS a new class of weapons of mass destruction.

There is an important conversation underway about how to shape the development of this technology and where to draw the line in the use of lethal autonomy. Check out FLI’s new lethal autonomous weapons systems page for an overview of the issue and related resources.





Nuclear Testing


Video: Will More Nuclear Explosions Make Us Safer?

On August 6th and 9th, 1945, the United States dropped nuclear bombs on the Japanese cities of Hiroshima and Nagasaki. To this day, these remain the only uses of nuclear weapons in armed conflict. As we mark the 75th anniversary of the bombings this month, scientists are speaking up against the US administration’s interest in restarting nuclear testing. Watch here.


Open Letter: Uphold the Nuclear Weapons Test Moratorium

Scientists have come together to speak out against breaking the nuclear test moratorium in an open letter published in Science magazine. Read here.

AI Ethics


Podcast: Peter Railton on Moral Learning and Metaethics in AI Systems

From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics. Listen here.

FLI in the News
