
FLI November 2016 Newsletter

Published
December 2, 2016
Author
admin


Liability and the Definitional Problem of Autonomous Weapons

If you haven’t already seen it, the FLI website has a page dedicated to the AI safety researchers, their publications, and their many other achievements. We’ve also been featuring their individual projects in articles and a podcast, with special attention to autonomous weapons this month. Peter Asaro and Heather Roff sat down with FLI’s Ariel Conn to discuss some of the biggest problems we currently face with respect to autonomous weapons. Specifically, who’s liable if something goes wrong, and how can we have a real discussion about the risks of weapons that aren’t well defined?



What, exactly, is an autonomous weapon? For the general public, the phrase is often used synonymously with killer robots and triggers images of the Terminator. But for the military, the definition of an autonomous weapons system, or AWS, is rather nebulous.
Research by Heather Roff.




Who Is Responsible for Autonomous Weapons?
By Tucker Davey

As artificial intelligence improves, governments may turn to autonomous weapons — like military robots — in order to gain the upper hand in armed conflict. How do we hold human beings accountable for the actions of autonomous systems? And how is justice served when the killer is essentially a computer?
Research by Peter Asaro.





Heather Roff and Peter Asaro talk about their work to understand and define autonomous weapons, the problems these weapons pose, and why the ethical issues they raise are so much more complicated than those of other AI systems.



Don’t forget to follow us on SoundCloud and iTunes!


What We’ve Been Up to This Month




Effective Altruism Global x Oxford

Victoria Krakovna participated in EAGx Oxford this month. She joined a panel discussion on the long-term future of AI alongside Demis Hassabis, Nate Soares, Toby Ord, and Murray Shanahan, and she also gave a talk about the story of FLI.


ICYMI: This Month’s Most Popular Articles




Complex AI Systems Explain Their Actions
By Sarah Marquart

As AI systems become more integrated into daily life, it is vital to find an efficient way to communicate with them. It is far more natural for a human to speak in plain language than in strings of code. Further, as the relationship between humans and robots grows, it will be necessary to engage in conversations, rather than just give orders.
Research by Manuela Veloso.






Cybersecurity and Machine Learning
By Tucker Davey

When it comes to cybersecurity, no nation can afford to slack off. If a nation’s defense systems cannot anticipate how an attacker will try to fool them, then an especially clever attack could expose military secrets or use disguised malware to cause major networks to crash.
But what if defense systems could learn to anticipate how attackers will try to fool them?
Research by Ben Rubinstein.





Insight from the Dalai Lama Applied to AI Ethics
By Ariel Conn

One of the primary objectives — if not the primary objective — of artificial intelligence is to improve life for all people. But an equally powerful motivator to create AI is to improve profits. These two goals can occasionally be at odds with each other. A solution may lie in the “need to be needed” expressed by the Dalai Lama in a New York Times Op-ed.





2,300 Scientists from All Fifty States Pen Open Letter to Incoming Trump Administration
By The Union of Concerned Scientists

More than 2,300 scientists from all fifty states, including 22 Nobel Prize recipients, released an open letter urging the Trump administration and Congress to set a high bar for integrity, transparency, and independence in using science to inform federal policies.


Note From FLI:


We’re looking for more writing volunteers. If you’re interested in writing research news articles for us, please let us know.
