
FLI Summer 2021 Newsletter

Published
28 September, 2021
Author
Eryk


FLI’s Take on the EU AI Act

How you understand risk may differ from how your neighbors understand it. But when threats appear, it’s critical for everyone to agree, and to act. That’s what’s driving our work on the European Union’s AI Act, described in a recent Wired article as “one of the first major policy initiatives worldwide focused on protecting people from harmful AI.”

The article references our work and priorities in the EU: with the very definition of “High Risk” under negotiation, we’re making the case that the threshold for what counts as “subliminal manipulation” should be lowered, and that the category should include addictive adtech, which contributes to misinformation, extremism, and, arguably, poor mental health.

The European Commission is the first major regulator in the world to propose a law on AI and will ultimately set policy for the EU’s 27 member states. FLI has submitted its feedback on this landmark act, which you can read here. Our top recommendations include:

  • Ban any and all AI manipulation that adversely impacts fundamental rights or seriously distorts human decision-making.
  • Ensure AI providers consider the social impact of their systems — because applications that do not violate individual rights may nonetheless have broader societal consequences.
  • Require a complete risk assessment of AI systems, rather than classifying entire systems by a single use. The current proposal, for example, would regulate an AI that assesses students’ performance, but would have nothing to say when that same AI offers biased recommendations in educational materials.


Taken together, our ten recommendations build on FLI’s foundational Asilomar Principles for AI governance.

Policy & Outreach Efforts

How do you prove that you’ve been harmed by an AI when you can’t access the data or algorithm that caused the harm? If a self-learning AI causes harm 11 years after the product was put on the market, should its producer be allowed to disavow liability? And can a car producer shift liability for an autonomous vehicle simply by burying a legal clause in lengthy terms and conditions?

FLI explored these and other questions in our response to the EU’s new consultation on AI liability. We argued that new rules are necessary to protect the rights of consumers and to encourage AI developers to make their products safer. You can download our full response here.


“Lethal Autonomous Weapons Exist; They Must Be Banned”

Following a recent UN report stating that autonomous weapons were deployed to kill Libyan National Army forces in 2020, Stuart Russell and FLI’s Max Tegmark, Emilia Javorsky and Anthony Aguirre co-authored an article in IEEE Spectrum calling for an immediate moratorium on the development, deployment, and use of lethal autonomous weapons.

Future of Life Institute Set to Launch $25 Million Grant Program for Existential Risk Reduction


FLI intends to launch its $25M grant program in the coming weeks! The program will focus on reducing existential risks: events that could cause the permanent collapse of civilization or even human extinction.

Watch this space!

New Podcast Episodes


Michael Klare on the Pentagon’s view of Climate Change and the Risk of State Collapse

The US military views climate change as a leading threat to national security, says Michael Klare. On this episode of the FLI Podcast, Klare, the Five College Professor of Peace & World Security Studies, discusses the Pentagon’s strategy for adapting to this emergent threat.

In the interview, Klare notes that climate change has already done “tremendous damage” to US military bases across the Gulf of Mexico. Later, he discusses how global warming is driving new humanitarian crises that the military must respond to. Also of interest: the military’s view of climate change as a “threat multiplier,” a complicating factor in the complex web of social, economic, and diplomatic tensions that could heighten the probability of armed conflict.


Avi Loeb on Oumuamua, Aliens, Space Archeology, Great Filters and Superstructures

Oumuamua, an object with seemingly unnatural properties, appeared from beyond our solar system in 2017. Its appearance raised questions – and controversial theories – about where it came from. In this episode of the FLI Podcast, Avi Loeb, Professor of Science at Harvard University, shares theories of Oumuamua’s origins and explains why science sometimes struggles to explore extraordinary events.

Loeb describes the common properties of space debris – “bricks left over from the construction project of the solar system” – and what he finds so unique about Oumuamua among these celestial objects. He shares why many mainstream theories don’t satisfy him, and recounts the history of scientists investigating challenging questions, dating back to the days of Copernicus.


A new preliminary report from the US Office of the Director of National Intelligence documented 144 cases of what it calls “unidentified aerial phenomena,” a new term for UFOs. In this bonus episode, host Lucas Perry continues his conversation with Avi Loeb, discussing the significance of this report and what it means for science and the search for extraterrestrial intelligence.

News & Reading

The Centre for the Study of Existential Risk is an interdisciplinary research centre at the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse.

They are seeking a Senior Research Associate / Academic Programme Manager to play a central role in the operation and delivery of research programmes, including the management of major research projects, line management of postdoctoral researchers, strategic planning, and fundraising.

For consideration, apply by 20 September.

How the U.S. Military can Fight the ‘Existential Threat’ of Climate Change

After the US Secretary of Defense called climate change “a profoundly destabilizing force for our world,” our recent podcast guest Michael Klare penned an op-ed in the LA Times. Klare calls on the Pentagon to outline specific actions that would lead to “far greater reductions in fossil fuel use and greenhouse gas emissions,” including allocating research funds to green technologies.

Rain Observed at the Summit of Greenland Ice Sheet for the First Time

Rain was reported in an area that has only seen temperatures above freezing three times in recorded history. The summit of the ice sheet sits 10,551 feet above sea level; rain falling there is warmer than the ice, creating conditions for meltwater to run off or refreeze.

A recent UN report suggested that sustained global warming beyond 2 degrees Celsius would lead to the total collapse of the ice sheet. The presence of rain could accelerate a melt already underway, eventually raising sea levels by as much as 23 feet.

FLI is a 501(c)(3) non-profit organisation, meaning donations are tax-deductible in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.

FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.
