
FLI June 2022 Newsletter

Published: 1 July, 2022
Author: Will Jones


Worldbuilding Winners Announced

Today, we are delighted to announce the winners of the FLI Worldbuilding Contest. Check out our full list and winners video. It’s been an extensive screening, reviewing and judging process to whittle the hundreds of applications down, and finally to rank them. With so many good ideas and so much careful thought packed into these worldbuilds, picking an overall winner was no mean feat. But without further ado, here is the list of prize-winners. Click on any of them to see the full application and find out more about the wonderful worldbuilders!

[List of prize-winning entries, from 1st to 4th place, with links to each full application]
Remember, you can still give your feedback on all of the finalist worldbuilds. There’s an abundance of good ideas in all of the shortlisted entries, not just the top-ranked winners, and we’d love to know what you think of them.

In the meantime, all that remains is for us to say a thank you to ‘Barrow-Motion’ for producing such inspiring imagery for the contest, and finally, a huge congratulations to Mako and all the FLI Worldbuilding Contest Winners! Your future visions have really inspired us, and we hope they keep inspiring others.

Let’s Keep the Positive Futures Coming!

It turns out Hollywood writers are also excited about getting audiences hyped for the future, rather than just depressing them all the time. On June 8th, FLI Executive Vice-President Anthony Aguirre and Director of Autonomous Weapons Advocacy Emilia Javorsky (pictured alongside producer and screenwriter Keith Eisner; photography by Michael Lynn Jones) attended a live panel event called ‘From Slaughterbots to Utopias’, hosted by Hollywood, Health and Society (HHS), exploring new visions for the future and the risks and benefits of AI-driven technologies. You can see the livestream here. HHS is now accepting submissions for its ‘Blue Sky Scriptwriting Contest’, in partnership with FLI and the Writers Guild of America East.

For FLI, this is very much a follow-up initiative to the Worldbuilding Contest (see above), in that we’re looking for scripts set in an aspirational future: one the writers believe most people would actually like to live in, and one which is achievable from where we are now. As HHS put it, ‘Instead of the standard dystopian hellscape, think environmental sustainability, a fair and equitable social system, and a world without war’. The brief, in short, is to write a script for the pilot episode of a new TV series set in this plausible, positive future world, which ‘along the way’ explains to viewers how we got there. Find out more about this exciting contest here.

Other Policy and Outreach

Good Autonomous News from The Hague

Last week, Dutch Foreign Minister Wopke Hoekstra and Defence Minister Kajsa Ollongren set out the new Dutch government position on autonomous weapons regulation, accepting many of a key advisory committee’s recommendations, including backing a prohibition on fully autonomous weapons.

The Dutch government wishes to keep partially autonomous weapons systems, e.g. in missile defence, where humans can’t react fast enough. But it supports the development of new legal rules on autonomous weapons systems, for example in the form of a new protocol to the CCW. The ministers said the main goal here is to preserve human judgment when deploying a weapon system; only that way can the weapons abide by existing international law. This applies, they added, to any use of deadly force, including in law enforcement.

More expansively, the Dutch government wants military uses of AI to be placed higher on the global agenda. And the concept of ‘explainable AI’ will now be the starting point for all Dutch AI policy, with partially autonomous weapons only an especially urgent sub-field. New Zealand Disarmament Minister Phil Twyford called this ‘welcome news’, saying he looked forward to working with the Netherlands on new legally binding rules on autonomous weapons.

States Parties to Nuclear Ban Treaty Meet in Vienna

Last week, Georgiana Gilgallon, Director of Communications, and William Jones, Editorial Manager, represented FLI as civil society members at ‘Nuclear Ban Week’ in Vienna, with a view to renewing partnerships with other civil society organisations in the nuclear disarmament and non-proliferation space. The week began with the Nuclear Ban Forum, put on by ICAN (the International Campaign to Abolish Nuclear Weapons), continued with the Vienna Conference on the Humanitarian Impact of Nuclear Weapons (arranged by the Austrian government), and culminated in the First Meeting of States Parties to the Treaty on the Prohibition of Nuclear Weapons (TPNW) at the UN in Vienna, chaired by President-designate Alexander Kmentt.

Despite a very different geopolitical landscape from the one in which the Treaty was adopted in 2017, the states parties held fast to their condemnation of all nuclear threats, and their historic intent to rid the world of these tremendously risky weapons. They adopted a new Declaration and Action Plan, committing, among other things, to the future appointment of a scientific and technical advisory board and to the treaty’s positive obligations addressing the harms of nuclear weapons use and testing.

News & Reading

The Economist Podcast on Foundation Models

Earlier in June, The Economist released a podcast on recent AI developments, specifically on foundation models, or ‘large language models’. For those unsure what these are, this is a great summary. But it goes deeper into some of the issues, too. First, Economist Science Correspondent Alok Jha explains how foundation models are turning AI ‘into a service that can be injected into any area of human activity and profoundly change it’. Ludwig Siegele notes that these models can do unexpected things, and outlines the race to build ever bigger models.

Kate Crawford outlines how the ‘tiny handful of tech companies’ powering these models has led to a ‘concentration of power and meaning-making into fewer and fewer hands’. ‘Bias’, she says, doesn’t cover the problems arising out of this… It ‘goes beyond discriminatory language or images… to the very heart of whose stories are being told here’ – of ‘who gets to decide’. The Economist then followed up with a piece on the foundation model race and where it might take us. Oren Etzioni, of the Allen Institute for AI, estimates here that ‘80% of AI research is now focused on foundation models’, with Google, Meta, Tesla, Microsoft and Chinese developers all getting in on the act.

FLI’s Mark Brakel Explains AI Risks on Al Jazeera

Sandra Gathmann covers much of the basic ground on AI, its pressing risks, and whether we stand a chance of mitigating them, in this Al Jazeera English ‘Start Here’ introductory video. FLI’s Director of European Policy, Mark Brakel, was consulted, and features in the video explaining some of AI’s most pressing dangers, such as autonomous conflict escalation. Mark illustrates one of the dangers of autonomous weapons with this scenario: ‘Imagine in the Taiwan Strait for example having a large group of American drones facing off Chinese drones, and they’re trained on classified data, how these two swarms of drones interact, and whether they might accidentally fire at one another because they mistake maybe some light reflecting off a drone for an attack. That could mean that you accidentally end up in a war, or even a nuclear conflict.’ Joanna Bryson, founding member of the Centre for Digital Governance, and futurist and author Martin Ford also contribute to the video.

FLI is a 501(c)(3) non-profit organisation, meaning donations are tax-deductible in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.
FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.
