
FLI April 2022 Newsletter

Published: May 3, 2022
Author: Will Jones


FLI Worldbuilding Contest Enters Reviewing Stage


The Future of Life Institute Worldbuilding Contest reached its deadline on Friday, 15th April, meaning no further applications will be accepted. Review of all the applications has begun; the strongest entries will then be passed on to our panel of judges, who will decide the winners. So far, the FLI team, led by project coordinator Anna Yelizarova, couldn’t be happier with the number and range of responses, and looks forward to sharing some of the amazing ideas on display once the judges have selected the winners.

Earlier in April, Anna and Anthony Aguirre went on the Foresight Institute Podcast to discuss the contest and their own hopeful visions for the future of our world; listen here.

Tune in next month to see the aspirational futures which teams from all over the world have imagined, designed and visualised!

Policy


FLI Submits Feedback on NIST AI Risk Framework

This month, FLI submitted its comments on the first draft of the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST). FLI believes that the evolving capabilities of AI methods and applications make it necessary for the US government to provide the stakeholders developing and/or deploying these technologies with a soft-law alternative that harmonizes the assessment of their impact on individuals and groups. Our feedback therefore focuses on issues such as catastrophic and unacceptable risks, the loyalty of AI systems, the documentation of risk calculus, and the risk management of general-purpose systems, among others.

Stay Informed on the EU AI Act

FLI’s EU Policy Researcher Risto Uuk continues his biweekly AI Act Newsletter, offering a steady stream of updates and analysis on the proposed EU AI law. Especially now that Melissa Heikkilä is no longer writing her excellent POLITICO AI: Decoded letter, this newsletter is ideal for anyone trying to stay up to date with the latest on this vital, evolving piece of AI legislation. Read the newest edition of the newsletter here, and subscribe to stay in the loop in the future. Meanwhile, check out the dedicated website for everything you need to know about the Act itself, and catch up with all the developments so far.

News & Reading

Can Policy Keep Up with Rapid AI Developments?

This piece in The Wall Street Journal explains how certain cities around the world – London, New York and Barcelona, among others – are ‘taking the lead’ in particular aspects of AI regulation, in an effort to mitigate problems such as algorithms ‘adding to human-driven bias in hiring, policing and other areas’. Between feedback systems on algorithms, audits to remove algorithmic bias, communities empowered to demand and check that ‘technology meets agreed-upon specifications’, and cooperation on agreed principles, there is perhaps cause for optimism in the world of urban AI regulation.

Yet even as some European cities demonstrate appropriate caution around algorithmic risk, WIRED here reports a significant expansion in the application of facial recognition systems across the continent, most notably in law enforcement. And just the other week, two high-profile developments in machine learning proved once again how rapidly AI is progressing. First came OpenAI’s DALL-E 2, which creates original images to match text prompts; one OpenAI representative quoted in this MIT Technology Review article about DALL-E 2 called it a ‘crucial step’ towards general intelligence. The same week, Google AI’s new language model, PaLM, as covered here, showed ‘impressive contextual understanding’, solving reasoning problems and even explaining jokes. Whether policy is moving fast enough, even in select cities, to prepare for these developments remains debatable.

According to this TechCrunch article, it’s a firm no: ‘Our societies are not prepared for AI — politically, legally or ethically. Nor is the world prepared for how AI will transform geopolitics and the ethics of international relations.’ Rather than controlling these hazardous shifts, we appear to have greatly misplaced trust in AI systems, as Melissa Heikkilä explained in her final AI: Decoded letter for POLITICO, with regard to the Dutch government algorithm scandal.


New-Age Weaponry in War, and in Law

This Los Angeles Times op-ed is just one of many such pieces assessing the new-age weaponry on display in the Russo-Ukrainian war. It is not only drones: AI is also being used to gather intelligence on troop movements, and data-driven virtual warfare is ramping up too. On stopping an AI arms race, the LA Times urges, ‘There’s still time to act — and we should.’ Meanwhile, openDemocracy expressed concern ‘that for the sake of efficiency, battlefield decisions with lethal consequences are likely to be increasingly “blackboxed” – taken by a machine whose working and decisions are opaque even to its operator.’ This was in reference not only to lethal autonomous drones, but also to US company Clearview AI offering facial recognition technology to Ukraine ‘for identifying enemy soldiers’. The same company had already proved immensely controversial at home; of this alarming progression, The Seattle Times wrote, ‘Despite opposition from lawmakers, regulators, privacy advocates and the websites it scrapes for data, Clearview has continued to rack up new contracts with police departments and other government agencies.’

Improve the News Launches Daily News Podcast

Max Tegmark and the team at Improve the News have launched a daily news podcast for busy people wishing to rise above controversies and bias; you can listen on Apple and Amazon. Each day, machine learning reads about 5,000 articles from about 100 newspapers and figures out which ones are about the same stories. For each major story, an editorial team then extracts both the key facts (that all articles agree on) and the key narratives (where the articles differ). The app, and now the podcast, offer a broad range of narratives and sources, aiming to promote understanding rather than hate by presenting competing narratives alongside one another without favoring any of them.
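For readers curious what that story-grouping step might look like in practice, here is a minimal, hypothetical sketch in Python: it clusters a handful of toy headlines by TF-IDF similarity so that articles covering the same event end up together. Improve the News has not published its pipeline, so the libraries, distance threshold and example texts below are illustrative assumptions only.

```python
# Hypothetical sketch only: group articles that appear to cover the same story
# by clustering their TF-IDF representations. This is NOT Improve the News' code;
# the vectorizer, clustering method, threshold and sample texts are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

articles = [
    "Central bank raises interest rates to fight rising inflation.",
    "Interest rates go up again as the bank moves to tame inflation.",
    "New model generates detailed images from short text prompts.",
    "Lab unveils system that turns text prompts into realistic images.",
]

# Represent each article as a TF-IDF vector (stop words removed), then merge
# articles whose vectors lie close together; the threshold is purely illustrative.
vectors = TfidfVectorizer(stop_words="english").fit_transform(articles).toarray()
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.25, linkage="average"
).fit_predict(vectors)

# Print each inferred "story" with the articles assigned to it.
for story in sorted(set(labels)):
    print(f"Story {story}:")
    for text, label in zip(articles, labels):
        if label == story:
            print("  -", text)
```

A real pipeline operating on thousands of articles a day would presumably use stronger text representations and deduplication, but the underlying idea of grouping articles by similarity before editors compare their narratives is the same.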

