
FLI November 2021 Newsletter

Published: 10 December, 2021
Author: Georgiana Gilgallon


Fellowships in AI Existential Safety Prove a Hit With Would-Be Researchers

The FLI Grantmakers are excited to have received almost 100 applications for our inaugural Fellowship programs, from applicants hailing from 18 different countries.

Applications for the PhD and Postdoctoral Vitalik Buterin Fellowships in AI Existential Safety are currently being reviewed by our external review panel. These Fellowships will support research that analyses the ways in which artificial intelligence could cause an existential catastrophe, as well as research that serves to reduce this risk.

We look forward to announcing the recipients of these Fellowships in 2022, as well as additional grants programs for existential risk reduction!

Policy & Outreach


Nominations Open for the Future of Life Award

We are pleased to announce that nominations are now open for the next Future of Life Award, a $50,000 prize given to under-recognized individuals who have helped make today’s world dramatically better than it might otherwise have been. You can nominate up to three candidates for the award, and if we decide to give the award to one of your nominees, you will receive a $3,000 prize from FLI for your contribution! Make your nomination here.

Previous recipients include Vasily Arkhipov, who in 1962 single-handedly prevented a global nuclear catastrophe, and Matthew Meselson, a driving force behind the 1972 Convention on the Prohibition of Biological Weapons.
The 2021 Future of Life Award was given to Joseph Farman, Susan Solomon and Stephen Andersen for their vital work in saving the Earth’s ozone layer.


FLI Orchestrates AI Researchers’ Letter in Major German Paper

On November 1st, Germany’s leading AI researchers urged politicians to take the lead on international regulation of autonomous weapons that target humans, in a letter published in the Frankfurter Allgemeine Zeitung. The message to German politicians was clear: the AI research community opposes these weapons. The letter received widespread coverage in Germany and further afield, including an interview on Germany’s main radio station, Deutschlandfunk Kultur.

An English translation of the letter, as well as the German original, can be found here.


U.S. Policy Team Continues Advocacy for AI Safety and Ethics at National Level

FLI was honored to support the nominations of several highly qualified individuals to serve on the National Artificial Intelligence Advisory Committee (NAIAC) in the United States. The Secretary of Commerce will select the members of this prestigious and important Advisory Committee, which advises the U.S. President and the National AI Initiative Office.

Our policy team continues to engage actively in the National Institute of Standards and Technology’s (NIST’s) development of a Risk Management Framework, including participating in official workshops and coordinating with other interested civil society organizations.

Finally, FLI commends the National Science Foundation for its recent solicitation for a new National AI Research Institute focused on trustworthy AI. In support of this and other initiatives, we continue to advocate for increased (and improved) spending on research and development focused on the safety and ethics of AI systems.

New Podcast Episodes


Rohin Shah on the State of AGI Safety Research in 2021

Rohin Shah, Research Scientist on DeepMind’s technical artificial general intelligence (AGI) safety team, joins Lucas Perry to discuss AI value alignment, how an AI researcher might decide whether to work on AI safety, and why we can’t rule out AI systems causing existential damage.

Shah also reflects on the difference between unipolar and multipolar scenarios, and on what he believes most impacts the future of life.


Future of Life Institute’s $25M Grants Program for Existential Risk Reduction

In this special episode of the FLI Podcast, Future of Life Institute President Max Tegmark and the FLI grants team, Andrea Berman and Daniel Filan, join Lucas to announce our game-changing $25M multi-year existential risk reduction grants program, which begins with PhD and postdoctoral fellowships for AI existential safety research.

Max speaks about how receiving a grant early in his career changed its course. Andrea and Daniel lay out the details of the fellowships, including the application deadlines, which closed during the month, and discuss future grant priorities in other research areas revolving around global catastrophic and existential risk.

News & Reading


A Sobering Month: New Uses of Drones, and Further Developments in Autonomous Weapons

Just last week, an attack by a “small explosive-laden drone” failed to kill Iraqi PM Mustafa al-Kadhimi. However, like the Maduro drone attack in 2018, it demonstrated just how easy it is for powerful new technology to fall into criminal hands. With added autonomy, such a drone would have been much harder to defend against, and it would have been harder to hold anyone accountable for the attack. Al-Kadhimi would likely have died, Baghdad’s tense situation would have erupted, and Iraq’s latest chance at democratic stability might have been over. The threat of improvised drone weaponry looms before us.

Earlier, in October, military hardware company Ghost Robotics debuted a robodog with a sniper rifle attached. Ghost Robotics later denied that the robot was fully autonomous, but it remains unclear just how far its autonomous capabilities extend. Leading AI researcher Toby Walsh said that he hoped the public outcry over the robodog would add ‘urgency to the ongoing discussions at the UN to regulate this space.’

Only weeks later, the Australian Army put in an order for the “Jaeger-C”, a bulletproof attack robot vehicle with anti-tank and anti-personnel capabilities. Forbes reported that “autonomous operation… means the Jaeger-C will work even if jammed”. It also means that the vehicle can go fully autonomous, with no human in control of its actions. The field of autonomous weaponry is advancing faster than legislation can hold it to account.


COP26 Discussions Highlight the Need for Technological Innovation-based Solutions

Discussions at the 2021 United Nations Climate Change Conference (COP26), especially in the second week of the conference, focused in part on the regulation of carbon dioxide emissions and on the use of carbon emissions trading to help nations meet their various pledges. Carbon trading, which gives nations permits to emit a certain amount of carbon dioxide and to sell their unused permits to other countries, has been criticised as a mere half-measure, or even a distraction, in the broader effort to tackle global warming. It is clear humanity will need a more impactful long-term solution than shaky pledges met only by trading. This WIRED piece argues that we will need maximum innovation if we’re to move from carbon positive to carbon neutral, and from there to carbon negative.


Stuart Russell Rallies British Media Around Need for AI Regulation

Professor Stuart Russell, Founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, and a world-leading AI researcher, is giving this year’s BBC Reith Lectures in the United Kingdom. In a series entitled Living with Artificial Intelligence, Russell will cover such topics as “AI in Warfare” and “AI in the Economy”. In the build-up to the lectures, he gave this interview to The Guardian, in which he stated that AI experts were “spooked” by recent developments in the field, and compared them to the development of the atom bomb.

Russell singled out military uses of AI as a particularly concerning area of development. He emphasised the threat from anti-personnel weapons: “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck and you could open the back and off they go and wipe out a whole city”. He hopes that the Reith Lectures will help get the public “involved in those choices” about the direction we take going forward, explaining, “it’s the public who will benefit or not”. In their follow-up opinion column, The Guardian made it clear that they endorse Russell’s message, declaring, “AI needs regulating before it’s too late”. The Reith Lectures will be broadcast on BBC Radio 4 and the BBC World Service on December 8th, and will then be available internationally online through BBC Sounds.

“These could very quickly become weapons of mass destruction, but they’d be much less expensive and harder to restrict than nuclear bombs.”


“Military dominance in the future won’t be decided just by the size of a nation’s army, but the quality of its algorithms”


This alarming Axios article lays out recent developments in what it describes as an “AI military race”: an arms race between the great powers, namely the U.S., China and Russia, to utilise cutting-edge AI not only for intelligence gathering and analysis, but also in the development of autonomous weapons. The article cites FLI President Max Tegmark as a proponent of banning these weapons (with the above quotation on their capability as WMDs), and, on the other side, Eric Schmidt’s National Security Commission on Artificial Intelligence (NSCAI), which rejects the notion of a ban.
