FLI May 2022 Newsletter

Published: June 1, 2022
Author: Will Jones


France backs major FLI position


Earlier this month, French officials announced their intention to regulate what Clothilde Goujard, writing in Politico Pro, called the ‘brain’ of AI systems. France, which currently holds the EU presidency, has proposed that the draft Artificial Intelligence Act be expanded to include ‘general purpose AI systems’ – large AI models that can perform a wide range of tasks and often provide the powerful core of more specific applications (see these links for some recent examples).

FLI has advocated for this change ever since the EU released its first draft of the AI Act last year. As we noted in Wired at the time, leaving general AI systems out of the Act’s scope would ‘allow increasingly transformative technologies to evade regulatory scrutiny.’ The current French text requires that builders of general purpose AI demonstrate appropriate levels of accuracy, robustness and cybersecurity, and maintain a risk management system.

Just prior to this development, FLI held a workshop on general purpose AI systems for EU parliamentary assistants, in partnership with Professor Lilian Edwards of the Ada Lovelace Institute. At the workshop, FLI’s Risto Uuk gave this presentation, explaining what general purpose AI systems are and why they should be regulated, and offering concrete proposals for how to achieve this. Yesterday’s Euractiv piece, co-written by Risto and Kris Shrishak, made the same arguments, this time with particular focus on why ‘The obligations of general purpose AI system providers should primarily fall on the developers’.

As Vox described in a recent article, large AI models often exhibit gender and racial bias. We believe the proposed regulatory change can help mitigate these current risks, and also engender a culture of safety as such systems grow increasingly powerful.

Other Policy and Outreach

FLI Worldbuilding Finalists Announced

The FLI Worldbuilding Contest reached a major milestone earlier this month: the twenty finalists were announced, along with five honourable mentions. The winner, chosen from among the finalists, will be announced next month. In the meantime, you can explore the selected worldbuilds at this site. Each submission has a feedback form on its page, and we would love to hear your reactions to these potential futures – your insights could even inform future FLI work. Your views will be shared with the FLI team, the judges and the worldbuild creators themselves. We can’t wait to hear which worlds you find plausible, and which futures you regard as aspirational; everyone will have a different opinion, and we want to hear as many as possible.

FLI Looking for a Social Media Manager

The Future of Life Institute has a new job opening for the role of Social Media Manager. This person will represent the organisation across a range of social channels and enhance our online presence. Working remotely and full-time as part of FLI’s growing Outreach Team, they will play a key role in revamping FLI’s communication strategy. More information on the role’s responsibilities, preferred qualifications, and how to apply can be found here.

The ‘Podcast Host and Director’ position also remains open on our rolling job applications board. This role involves hosting the podcast and supervising all aspects of its production, editing, publishing and promotion. We are looking for someone who can formulate and implement a vision for further improving the podcast’s quality and growing its audience – read more about the role and apply here.

Max Tegmark Talks AI, Democracy and Regulation in Estonia

On 12th May, Max Tegmark spoke at the e-Governance Conference in Tallinn, Estonia, which focused on ‘resilient and seamless governance’. First he gave a talk on ‘AI for Democracy’, advocating digital self-determination in place of digital colonialism. Max argued that while recent discussion tends to focus on the digitalisation of government services, attention needs to shift to e-democracy and the empowerment of voters. He then joined a panel conversation, ‘The Balancing Act of Regulating AI’, with Linnar Viik, an Estonian IT scientist, and Luukas Kristjan Ilves, Government Chief Information Officer of the Estonian Government.

News & Reading

Steps Towards Safety for the AI Superpowers

Writing in Foreign Policy, Ryan Fedasiuk argued that the United States and China must ‘take steps’ towards mitigating the escalatory risks posed by AI accidents. He noted that ‘Even with perfect info and ideal operating circumstances, AI systems break easily and perform in ways contrary to their intended function’. This is already true of ‘racially biased hiring decisions’; in AI weapons systems, it could be disastrous.

The piece also discussed the lack of trust between China and the US regarding the testing and evaluation of their military AI systems. To improve diplomatic negotiations around AI safety, the article recommended three steps for the two countries:

1. Clarify their current AI processes and principles.
2. ‘Formalize a mechanism for crisis communication’, including ‘direct channels between the White House and Chinese leadership compound in Zhongnanhai, as well as the Pentagon and Chinese Central Military Commission’.
3. Perhaps most pressingly, ‘Independently commit not to delegate nuclear launch authority to an AI system capable of online learning’ – ‘Under no circumstances should a decision to launch a nuclear weapon be made by a system… “learning” about the rules of engagement in real time’.

Above all, the piece emphasised that AI risks are too great for these two countries not to start building mutual trust. We would add that this lesson applies beyond these two states: all nations should focus on mitigating the catastrophic risks associated with AI.

Carlos Ignacio Gutierrez on the NIST Framework

David Matthews in Science Business reports on NIST’s plan to manage AI risk in the US. The Maryland-based National Institute of Standards and Technology (NIST) is urging companies to adopt its AI Risk Management Framework, released in March. As Matthews puts it, ‘US AI guidelines are everything the EU’s AI Act is not: voluntary, non-prescriptive and focused on changing the culture of tech companies.’

FLI Policy Researcher Carlos Ignacio Gutierrez was consulted by Matthews to explain some of the ways the framework could be improved. For instance, Gutierrez explains how the notion of ‘AI loyalty’ could help resolve potential conflicts of interest between users and manufacturers of AI systems. ‘There’s a lot of systems out there that are not transparent about whose incentives they’re aligned with,’ he notes; ‘The whole idea behind loyalty is that there’s transparency between where these incentives are aligned’. Matthews also points out the framework’s failure to address AGI directly. All the same, Gutierrez calls the framework a ‘first step’ to influence ‘not the law of land […] but the practice of the land’.

Improve the News Improves its Numbers

The news website founded by Max Tegmark to help you stay up to speed on FLI-relevant topics such as artificial intelligence and nuclear weapons doubled its newsletter subscriptions over the past month, and you can now listen to it as a daily news podcast on Apple and Spotify. Each day, a machine learning system reads roughly 5,000 articles from around 100 newspapers and works out which ones cover the same stories. Then, for each major story, an editorial team extracts both the key facts (on which all articles agree) and the key narratives (where the articles differ). The result includes a refreshingly broad range of narratives and sources, and promotes understanding rather than hate by presenting competing narratives compellingly rather than mockingly.
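Improve the News hasn’t published the details of its pipeline, but as a rough illustration of the story-grouping step, the minimal sketch below clusters a handful of headlines by the similarity of their TF-IDF text vectors. Everything here – the sample headlines, the 0.8 distance threshold, the choice of clustering method – is a hypothetical stand-in, not a description of the actual system.

```python
# Illustrative sketch only – Improve the News' real pipeline is not public.
# Groups a day's articles into "stories" by clustering TF-IDF text vectors.
# Requires scikit-learn >= 1.2 (for the `metric` parameter).
from collections import defaultdict

from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical sample headlines standing in for a day's ~5,000 articles.
articles = [
    "EU presidency proposes regulating general purpose AI systems",
    "France pushes to expand the draft AI Act to general purpose AI",
    "NIST urges companies to adopt its voluntary AI risk framework",
]

# Represent each article by a TF-IDF vector of its text.
vectors = TfidfVectorizer(stop_words="english").fit_transform(articles).toarray()

# Merge articles whose vectors are close in cosine distance; the 0.8
# threshold is an arbitrary placeholder, not a known ITN setting.
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
)
labels = clustering.fit_predict(vectors)

# Collect articles by story: items sharing a label cover the same story,
# ready for an editorial pass separating shared facts from differing narratives.
stories = defaultdict(list)
for label, headline in zip(labels, articles):
    stories[label].append(headline)

for story_id, headlines in sorted(stories.items()):
    print(f"Story {story_id}: {headlines}")
```

On this toy input, the two AI Act headlines land in one story and the NIST headline in another; the real system would run at far larger scale before handing each cluster to the human editorial team.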
