FLI February 2022 Newsletter
Apply to join our team!
The Future of Life Institute continues to grow as an organisation. We are currently recruiting for five positions:
- EU Policy Analyst: This role will analyse how changes in the Brussels policy landscape affect the governance of artificial intelligence, and recommend appropriate actions. This is a full-time job, based in Brussels. The first application deadline is 20th February (the end of this week!). Apply here.
- Podcast Host and Director: This person will host the FLI podcast and supervise all aspects of producing, editing, releasing and promoting it. We are looking for someone who can formulate and implement a vision for further improving the podcast’s quality and growing its audience. This is a full-time, remote role. For more information and to apply, click here.
- Editorial Manager: This person will be responsible for ideating, creating (alone or in collaboration with others), testing and promoting content related to FLI’s goals of reducing large-scale, extreme risks from transformative technologies, as well as steering the development and use of these technologies to benefit life. This is a full-time, remote role. For more information, and to apply, click here.
- Human Resources Manager: This person will help manage various aspects of FLI’s employee and contractor relationships. The role will be remote for the foreseeable future. A US or EU location is preferable and a potential willingness to relocate in the future is desirable. Apply here.
- Operations Specialist: This person will help develop highly effective tools, workflows, and strategies for operations at FLI and other nonprofit organisations. Applications are for a standalone 6-month project but the role could evolve into a longer-term position at FLI or another nonprofit organisation. Work will be primarily remote but a Bay Area physical location is a mild plus. Find out more and apply here.
As noted above, the application deadline for the EU Policy Analyst role is 20th February (the end of this week!). The other positions have a deadline of 25th February, but all applications are rolling until the positions are filled. This page – and this newsletter – will always keep you updated on the latest FLI openings.
Policy and Outreach Efforts
In this EURACTIV column, Future of Life Institute Policy Researcher Risto Uuk joins EDRi and Amnesty International in calling for improvements to the EU Artificial Intelligence Act with regard to the manipulative risks of AI.
‘The risks of manipulation from AI systems aren’t merely hypothetical’, Risto points out; ‘they already threaten individuals and communities and can lead to further harms if not adequately prepared for.’ He recommends two specific changes to the AI Act: firstly, that the definition of ‘manipulation’ be expanded beyond just ‘subliminal techniques’, because ‘most uses of AI will not be subliminal’, and these ‘consciously perceived’ stimuli are currently unregulated; secondly, that ‘societal harms’ be added to the list of harms regulated by Article 5 of the Act. That way, dangers that extend beyond the individual, such as damage to the democratic process or the exacerbation of inequality, can also be accounted for.
New Podcast Episodes
In this new Future of Life Institute Podcast episode, FLI Executive Vice President Anthony Aguirre and Project Coordinator Anna Yelizarova discuss all things related to the FLI Worldbuilding Contest. They explain the motivations behind the contest and the importance of worldbuilding more generally – stressing the need for positive visions of the future, in a media world saturated with dystopia – before going on to the specific rules, submission requirements and relevant dates (as well as the prizes!) of our particular contest.
Contest submissions are due on 15th April 2022. If you’d like to attend a worldbuilding workshop or you need help finding team members, visit this page. For those simply wanting to look more deeply into the contest, and think about entering, you can’t go wrong with this site.
News & Reading
On 20th January, the Bulletin of the Atomic Scientists announced the updated time on the Doomsday Clock. On the clock, the distance from midnight symbolises how far the Bulletin’s panel of scientists judges humanity to be from self-annihilation. Since 1947, the clock has urged nations to ‘reverse the hands’ – as they did with partial nuclear disarmament programmes up to 1991. In 2021, with climate change, pandemics, AI and a resurgent nuclear threat, the clock stood at 100 seconds to midnight. This year we remain at 100 seconds. But as Hank Green pointed out on the video livestream accompanying the announcement, this constancy represents not so much stasis as a balance between new improvements and many new risk developments.
Robodog is Back; The Onion Takes Aim
FLI readers will recall Ghost Robotics’ ‘robodog’ causing alarm among experts and the broader public back in October, when it debuted with a sniper rifle attached. Now, as The Verge reports, it has returned. Despite widespread condemnation of that debut, the ‘robodog’ marches on through US institutions, and is now being tested by the Department of Homeland Security (DHS) on the Mexican border. The DHS describes it as remaining ‘unarmed’. But how long can we expect such an asset to remain so, in the turbulent frontiers of the southern US border? More importantly, FLI asks, how long do we have to wait until governments and international bodies realise the pressing need for regulation?
Luckily, it appears that media voices further afield are finally beginning to point out the absurdity of these developments. The Onion ran a piece last week, lambasting in their characteristic way the arguments repeatedly wheeled out in defence of the robodog and its not-so-furry friends. Nothing quite like the blade of satire to cut through the facade of innocence.
Separating Narrative From Fact and Speculation From Science
Do you ever wish that news media were more reliable? Then you may enjoy two free resources that Anthony Aguirre and Max Tegmark have launched, bringing ideas from scientific truth-finding to the media ecosystem:
1) Improve the News is a news aggregator that reveals bias in articles. It uses machine-learning to group articles about the same story, and separate the facts (that all articles agree on) from the various narratives.
The Centre for the Study of Existential Risk is also hiring!
The Centre for the Study of Existential Risk (CSER) in Cambridge is looking for a Senior Research Associate to join its ‘AI: Futures and Responsibility’ programme, exploring AI risk scenarios, forecasting AI progress, and recommending safer AI governance. CSER invites applicants with expertise in the long-term impacts, risks, and governance of AI. This post offers ‘a unique opportunity to help lead an ambitious group of researchers who are highly motivated to have a real impact on the safe and beneficial development of AI’.
The application deadline is 27th February. Find out more here.
FLI is a 501(c)(3) non-profit organisation, meaning donations are tax-deductible in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.
FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.