
FLI February 2022 Newsletter

Published
18 February, 2022
Author
Will Jones

Apply to join our team!


The Future of Life Institute continues to grow as an organisation. We are currently recruiting for five positions:

  • EU Policy Analyst: This role will analyse how changes in the Brussels policy landscape affect the governance of artificial intelligence, and recommend appropriate actions. This is a full-time job, based in Brussels. The first application deadline is 20th February (the end of this week!). Apply here.
  • Podcast Host and Director: This person will host the FLI podcast and supervise all aspects of producing, editing, releasing and promoting it. We are looking for someone who can formulate and implement a vision for further improving the podcast’s quality and growing its audience. This is a full-time, remote role. For more information and to apply, click here.
  • Editorial Manager: This person will be responsible for ideating, creating (alone or in collaboration with others), testing and promoting content related to FLI’s goals of reducing large-scale, extreme risks from transformative technologies, as well as steering the development and use of these technologies to benefit life. This is a full-time, remote role. For more information and to apply, click here.
  • Human Resources Manager: This person will help manage various aspects of FLI’s employee and contractor relationships. The role will be remote for the foreseeable future. A US or EU location is preferable and a potential willingness to relocate in the future is desirable. Apply here.
  • Operations Specialist: This person will help develop highly effective tools, workflows, and strategies for operations at FLI and other nonprofit organisations. Applications are for a standalone 6-month project but the role could evolve into a longer-term position at FLI or another nonprofit organisation. Work will be primarily remote but a Bay Area physical location is a mild plus. Find out more and apply here. 

The application deadline for the EU Policy Analyst role is 20th February (the end of this week!); the remaining positions have a deadline of 25th February, though applications for all of them are rolling until the positions are filled. This page – and this newsletter – will always keep you updated on the latest FLI openings.

Policy and Outreach Efforts


Need For Greater Protection Against AI Manipulation

In this EURACTIV column, Future of Life Institute Policy Researcher Risto Uuk joins EDRi and Amnesty International in calling for improvements to the EU Artificial Intelligence Act with regards to the manipulatory risks of AI.

‘The risks of manipulation from AI systems aren’t merely hypothetical’, Risto points out; ‘they already threaten individuals and communities and can lead to further harms if not adequately prepared for.’ He recommends two specific changes to the AI Act: first, that the definition of ‘manipulation’ be expanded beyond just ‘subliminal techniques’, because ‘most uses of AI will not be subliminal’ and these ‘consciously perceived’ stimuli are currently unregulated; second, that ‘societal harms’ be added to the list of harms regulated by Article 5 of the Act. That way, dangers that extend beyond the individual, such as damage to the democratic process or the exacerbation of inequality, can also be accounted for.

To learn more about the AI Act and its developments, visit our AI Act site, or subscribe to Risto’s EU AI Act newsletter.

New Podcast Episodes

What is ‘worldbuilding’ and why does it matter?

In this new Future of Life Institute Podcast episode, FLI Executive Vice President Anthony Aguirre and Project Coordinator Anna Yelizarova discuss all things related to the FLI Worldbuilding Contest. They explain the motivations behind the contest and the importance of worldbuilding more generally – stressing the need for positive visions of the future in a media world saturated with dystopia – before going on to the specific rules, submission requirements and relevant dates (as well as the prizes!) of our particular contest.

Contest submissions are due on 15th April 2022. If you’d like to attend a worldbuilding workshop or need help finding team members, visit this page. For those who simply want to explore the contest more deeply, and perhaps think about entering, you can’t go wrong with this site.

David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy

Philosopher and cognitive scientist David Chalmers spoke to Lucas Perry about his new book, Reality+: Virtual Worlds and the Problems of Philosophy. He lays out the main theses of the book: his belief that virtual reality is genuine reality, and what this means for metaphysics (that the objects in virtual reality are real objects) and for values (that you can lead a valuable life inside a virtual world). If you’re intrigued by these claims – perhaps you’d like to hear Chalmers justify them, or explore their broader implications – you can listen to the complete episode here.

Meanwhile, it’s a farewell to Lucas, who will be moving on from FLI very soon. We wish him all the best for the future!

News & Reading

Doomsday Clock Still Perilously Close to Midnight

On 20th January, the Bulletin of the Atomic Scientists announced the updated time on the Doomsday Clock. On the clock, the distance from midnight symbolises how far the Bulletin’s panel of scientists judge humanity to be from self-annihilation. Since 1947, the clock has urged nations to ‘reverse the hands’ – as they did with partial nuclear disarmament programmes up to 1991. In 2021, with climate change, pandemics, AI and a resurgent nuclear threat, the clock stood at just 100 seconds to midnight; this year it remains there. But as Hank Green pointed out on the video livestream accompanying the announcement, this constancy represents not so much stasis as a balance between new improvements and new risk developments.

Robodog is Back; The Onion Takes Aim

FLI readers will recall Ghost Robotics’ ‘robodog’ causing alarm among experts and the broader public back in October, when it debuted with a sniper rifle attached. Now, as The Verge reports, it has returned. Despite widespread condemnation of that debut, the ‘robodog’ marches on through US institutions, and is currently being tested by the Department of Homeland Security (DHS) on the Mexican border. The DHS describes it as remaining ‘unarmed’. But how long can we expect such an asset to remain so, in the turbulent frontiers of the southern US border? More importantly, FLI asks, how long must we wait until governments and international bodies recognise the pressing need for regulation?

Luckily, it appears that media voices further afield are finally beginning to point out the absurdity of these developments. The Onion ran a piece last week, lambasting in their characteristic way the arguments repeatedly wheeled out in defence of the robodog and its not-so-furry friends. Nothing quite like the blade of satire to cut through the facade of innocence.

Separating Narrative From Fact and Speculation From Science

Do you ever wish that news media were more reliable? Then you may enjoy two free resources that Anthony Aguirre and Max Tegmark have launched, bringing ideas from scientific truth-finding to the media ecosystem:

1) Improve the News is a news aggregator that reveals bias in articles. It uses machine learning to group articles about the same story, and to separate the facts (which all the articles agree on) from the various narratives. (A toy sketch of this kind of story-grouping appears after this list.)

2) Metaculus is a prediction site. Users submit their own forecasts for specified questions; then, following the scientific principle that trust is earned through successful past predictions, forecasts for facts relevant to news stories (which you’ll find linked in Improve the News as the “Nerd narrative”) are weighted toward the most successful community predictors. In this example, you’ll find a 17% chance of the control problem being solved before artificial general intelligence (AGI) is invented, based on the community prediction. (A toy sketch of this kind of track-record weighting also appears below.)
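
For the technically curious, here is a minimal sketch of what grouping articles by story might look like, using TF-IDF vectors and agglomerative clustering in Python. This is our own illustrative guess, not Improve the News’s actual pipeline; the sample headlines and the 0.8 distance threshold are invented for the example.

    # A toy sketch of grouping articles by story -- purely illustrative,
    # not Improve the News's actual pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import AgglomerativeClustering

    articles = [
        "EU lawmakers debate expanding the AI Act's definition of manipulation.",
        "Brussels considers broader manipulation rules in the AI Act.",
        "Satellite images show record Arctic sea ice decline this winter.",
    ]

    # Represent each article as a dense TF-IDF vector.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(articles).toarray()

    # Merge articles whose cosine distance falls below the threshold, so that
    # pieces covering the same story end up sharing a cluster label.
    labels = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=0.8,
        metric="cosine",
        linkage="average",
    ).fit_predict(vectors)

    for story in sorted(set(labels)):
        grouped = [a for a, l in zip(articles, labels) if l == story]
        print(f"Story {story}: {grouped}")

Here the first two headlines share vocabulary and land in one cluster, while the third stands alone; separating agreed facts from narratives would require a further step, such as comparing which claims recur across every article in a cluster.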
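
Similarly, a toy sketch of track-record weighting: forecasts from users with stronger past accuracy count for more in the aggregate. The numbers and the simple weighted average are invented for illustration; Metaculus’s real aggregation is more sophisticated.

    # A toy sketch of track-record-weighted forecasting -- an invented scheme,
    # not Metaculus's actual aggregation method.

    predictions = {"alice": 0.10, "bob": 0.25, "carol": 0.60}  # each user's P(event)
    track_record = {"alice": 0.9, "bob": 0.7, "carol": 0.2}    # past accuracy, 0..1

    # Weighted average: historically accurate forecasters move the aggregate more.
    total_weight = sum(track_record.values())
    aggregate = sum(
        p * track_record[user] for user, p in predictions.items()
    ) / total_weight

    print(f"Community forecast: {aggregate:.0%}")  # -> 21% with these toy numbers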

The Centre for the Study of Existential Risk is also hiring!

The Centre for the Study of Existential Risk (CSER) in Cambridge are looking for a Senior Research Associate to join their ‘AI: Futures and Responsibility’ programme, exploring AI risk scenarios, forecasting AI progress, and recommending safer AI governance. CSER invites applicants with expertise in the long-term impacts, risks, and governance of AI. This post offers ‘a unique opportunity to help lead an ambitious group of researchers who are highly motivated to have a real impact on the safe and beneficial development of AI’.

The application deadline is the 27th February. Find out more here.

FLI is a 501(c)(3) non-profit organisation, meaning donations to it are tax-deductible in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.

FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.
