
FLI August, 2020 Newsletter

Published
5 August, 2020
Author
Anna Yelizarova


Lethal Autonomous Weapons Systems, Nuclear Testing & More

New Resource: Lethal Autonomous Weapons Systems


Described as the third revolution in warfare after gunpowder and nuclear weapons, lethal autonomous weapons are systems that can identify, select and engage a target without meaningful human control. Many semi-autonomous weapons in use today rely on autonomy for certain parts of their system but maintain a communication link to a human who approves or makes decisions. In contrast, a fully autonomous system could be deployed without any established communication network and would independently respond to a changing environment and decide how to achieve its pre-programmed goals. The ethical, political and legal debate underway centres on autonomy in the use of force and in the decision to take a human life.

Lethal AWS may create a paradigm shift in how we wage war. They would allow highly lethal systems to be deployed on the battlefield that cannot be controlled or recalled once launched. Unlike any weapon seen before, they could also allow for the selective targeting of a particular group based on parameters like age, gender, ethnicity or political leaning (if such information were available). Because lethal AWS would greatly decrease personnel costs and could be obtained cheaply (as in the case of small drones), small groups of people could potentially inflict disproportionate harm, making lethal AWS a new class of weapon of mass destruction.

There is an important conversation underway about how to shape the development of this technology and where to draw the line in the use of lethal autonomy. Check out FLI’s new lethal autonomous weapons systems page for an overview of the issue, plus the following resources:





Nuclear Testing


Video: Will More Nuclear Explosions Make Us Safer?

On August 6th and 9th, 1945, the United States dropped nuclear bombs on the Japanese cities of Hiroshima and Nagasaki. To this day, these remain the only uses of nuclear weapons in armed conflict. As we mark the 75th anniversary of the bombings this month, scientists are speaking up against the US administration’s interest in restarting nuclear testing. Watch here.


Open Letter: Uphold the Nuclear Weapons Test Moratorium

Scientists have come together to speak out against breaking the nuclear test moratorium in an open letter published in Science magazine. Read here.

AI Ethics


Podcast: Peter Railton on Moral Learning and Metaethics in AI Systems

From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other people who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics. Listen here.

FLI in the News
