
FLI October 2021 Newsletter

Published: December 10, 2021
Author: Georgiana Gilgallon


FLI Launches Fellowships in AI Existential Safety

We’re thrilled to announce the launch of the Vitalik Buterin Fellowships in AI Existential Safety for PhD and postdoctoral students. The Fellowships will support research that analyses the ways in which artificial intelligence could cause an existential catastrophe, as well as research that serves to reduce this risk.

We are also accepting applications for our AI Existential Safety Community. If you're an AI researcher, whether a professor or an early-career researcher, membership in the Community comes with a number of benefits.

Applications for the PhD Fellowships are due on 29 October 2021.
Applications for the Postdoctoral Fellowships are due on 5 November 2021.
Applications for the Community are ongoing.


This is a global programme: there are no geographic limitations on applicants or host universities.

Please help us spread the word, and if AI Existential Safety isn't for you, keep an eye out for the launch of other parts of our grants programme for existential risk reduction!

Policy & Outreach


FLI counsels against lethal autonomous weapons at the United Nations in Geneva

Emilia Javorsky and Mark Brakel represented FLI and the AI research community at last week's session of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons in Geneva. The key message they sought to communicate was that "the consensus amongst technologists is clear and resounding: We are opposed to autonomous weapons that target humans."

This was the penultimate meeting of the GGE ahead of the 6th Review Conference of the United Nations Convention on Certain Conventional Weapons in December, where states will have an opportunity to negotiate a legally binding instrument on lethal autonomous weapons.


There's a new kid on the block: POLITICO Europe covers FLI's EU advocacy efforts

A recent edition of POLITICO's AI Decoded newsletter included a feature on FLI's advocacy work on the European Commission's proposed AI Act. The newsletter covered two of our key recommendations for improving the Act: a ban on all forms of AI manipulation, and the addition of large language models to the Act's list of high-risk AI systems.

The Commission's current proposal permits manipulative AI systems insofar as they are unlikely to cause individuals physical or psychological harm, but we "can't think of a single case where subliminal manipulation would be a good thing." Large language models such as GPT-3 – also known as "foundation models" – should be classed as high-risk because they need to be screened for safety and security flaws, including biases, before they proceed to market.

Read the newsletter here. To read our position paper on the EU AI Act, click here. To learn more about the AI Act, visit the dedicated website we recently launched.


FLI engages on AI Risk Management Framework in the US

FLI continues to advise the National Institute of Standards and Technology (NIST) in its development of guidance on artificial intelligence, including the critically important AI Risk Management Framework. Our latest comments on the Framework, submitted this month, raised numerous policy issues, including the need for NIST to account for aggregate risks from low-probability, high-consequence effects of AI systems, and the need to proactively ensure the alignment of ever more powerful advanced or general AI systems.

“When the world actually solved an environmental crisis”


The 2021 Future of Life Award, as well as the story of the ozone hole and its journey to recovery through the Montreal Protocol, were covered by Vox's Future Perfect.

“The picture we’re left with by the fight to heal the ozone layer is that specific individuals played a huge role in changing humanity’s trajectory but they did that mostly by enabling public activism, international diplomacy, and collective action. In the fight to improve the world, we can’t do without individuals and we can’t do without coordination mechanisms. But we should keep in mind how much we can do when we have both.”

New Podcast Episodes


Filippa Lentzos on Global Catastrophic Biological Risks

Lucas Perry is joined by Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King’s College London, to discuss global catastrophic biological risks.

In particular, they delve into the most pressing issues in biosecurity, historical lab escapes, the ins and outs of gain-of-function research, and the benefits and risks of big data in the life sciences, including the role it can play in biodefence. Filippa also discusses lessons from COVID-19 and the role of governance in managing biological risk.


Susan Solomon and Stephen Andersen on Saving the Ozone Layer

In this special episode of the FLI Podcast, Lucas speaks with our 2021 Future of Life Award winners about what Stephen Andersen describes as “science and politics at its best” — the scientific research that revealed ozone depletion and the work that went into the Montreal Protocol, which steered humanity away from the chemical compounds that caused it.

Among other topics, Susan Solomon discusses the inquiries and discoveries that led her to study the atmosphere above the Antarctic, and Stephen describes how science and public pressure together moved industry faster than the speed of politics. To wrap up, the two apply lessons learnt to today's looming global threats, including climate change.

News & Reading


UK Government releases its National AI Strategy

The UK government has released its long-awaited National AI Strategy and given the field of catastrophic risk reduction cause for celebration: "The [UK] government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously…[It commits to working] with national security, defence and leading researchers to understand how to anticipate and prevent catastrophic risks."


“First there was gunpowder. Then nuclear weapons. Next: artificially intelligent weapons.”

“Autonomous weapons are the AI application that most clearly and deeply conflicts with our morals and threatens humanity.”

This compelling Atlantic article sets out the case against lethal autonomous weapons and outlines several of the proposed solutions for avoiding an "existential disaster", including the one advocated by FLI and the AI research community: a prohibition on weapons "searching for, deciding to engage, and obliterating another human life, completely without human involvement."
