The Future of Life Institute
Presents

Superintelligence Imagined

Creative Contest on the Risks of Superintelligence

A contest for the best creative educational materials on superintelligence, its associated risks, and the implications of this technology for our world.
5 prizes at $10,000 each | Free entry
Apply by 31 August 2024

Contest Notifications

Be notified about critical contest updates, timeline reminders, and contest events such as workshops and co-working sessions.

Contest Brief

Introduction

Several of the largest tech companies, including Meta and OpenAI, have recently re-affirmed their commitment to building AI that is more capable than humans across a range of domains — and they are investing huge resources toward this goal.

Surveys suggest that the public is concerned about AI, especially when it threatens to replace human decision-making. However, public discourse rarely touches on the basic implications of superintelligence: if we create a class of agents that are more capable than humans in every way, humanity should expect to lose control by default.

A system of this kind would pose a significant risk of human disempowerment, and potentially our extinction. Experts in AI have warned the public about the dangers of a race to superintelligence, signed a statement declaring that AI poses an extinction risk, and even called for a pause on frontier AI development.

We define a superintelligent AI system as any system that displays performance beyond even the most expert of humans, across a very wide range of cognitive tasks.

A superintelligent system might consist of a single very powerful system, or of many less-powerful systems working in concert to outperform humans. Such a system may be developed from scratch using novel AI training methods, or might emerge from a less-intelligent AI system that is able to improve itself (recursive self-improvement), leading to an intelligence explosion.

Such a system may be coming soon: AI researchers think there is a 10% chance that a system of this kind will be developed by 2027 (Note: Prediction is for “High-Level Machine Intelligence” (HLMI), see 3.2.1 on page 4).

The Future of Life Institute is running this contest to generate submissions that explore what superintelligence is, the dangers and risks of developing such a technology, and the resulting implications and consequences for our economy, society, and world.

Your submission

Your submission should help your chosen audience to answer the following fundamental question:

What is superintelligence, and how might it threaten humanity?

We want to see bold, ambitious, creative, and informative explorations of superintelligence. Submissions should serve as effective tools for prompting people to reflect on the trajectory of AI development, while remaining grounded in existing research and ideas about this technology. Finally, they should have the potential to travel far and impact a large audience. Our hope is that submissions will reach audiences who have not yet been exposed to these ideas, educate them about superintelligent AI systems, and prompt them to think seriously about the technology's impacts on the things that matter to them.

Mediums

Any textual, visual, auditory, or spatial format (e.g. video, animation, installation, infographic, videogame, website), including multimedia, is eligible. The format can be physical or digital, but the final submission must be hosted online and submitted digitally.

Successful submissions might include videos, animations, and infographics. We would also love to see more creative mediums like videogames, graphic novels, works of fiction, digital exhibitions, and others, as long as they meet our judging criteria.

Prizes

Prizes will be awarded to the five applications that score most highly against our judging criteria, as voted by our judges. We may also publish some ‘Honorable mentions’ on our contest website — these submissions will not receive prize money.

Entry to the contest is free.

Recommended reading

You might wish to consult these existing works on the topic of superintelligence:

Introduction to Superintelligence

These resources provide an introduction to the topic of superintelligence with varying degrees of depth:

The risks of Superintelligence, and how to mitigate them

The following resources provide arguments for why superintelligence might pose risks to humanity, and what we could do to mitigate those risks:

Arguments against the risks of Superintelligence

Not everybody agrees that superintelligence will pose a risk to humanity. Even if you share our view that it does, your target audience may be sceptical — so you may wish to address some of these counter-arguments in your artwork:

This short list is taken from Michael Nielsen’s blog post.

Hosting events

If you would like to host an event or workshop for participants to produce submissions for this contest, you are welcome to do so. Please contact us to ensure that your event structure is consistent with our eligibility criteria, and if you would like any other additional support.

Your Submission

Your submission must consist of:

  • Creative material (must be digitised, available at a URL)
  • Summary of your artwork (2-3 sentences describing your submission)
  • Team details: Member names, emails, locations, roles in the submission

See the application form for a full list of fields you will need to complete. You can save a draft of your submission, and invite others to collaborate on a single submission.

Contest Timeline

31 May 2024
Open for submissions
31 August 2024
Submissions deadline
October 2024
Judging completed
October 2024
Winners announced

The submission materials will remain the Intellectual Property (IP) of the applicant. By submitting an application, the applicant grants FLI a worldwide, irrevocable, royalty-free license to host and disseminate the materials. This will likely involve publication on the FLI website and inclusion in social media posts.

Applicants must not publish their submission materials prior to the announcement of the contest winners, except for the purposes of making the submission available to our judges. After winners are announced, all applicants can publish their submission materials in any channels they like — in fact, we encourage them to do so.

Winning submissions and Honorable mentions will be published on our contest webpage, shared on our social media channels, and distributed to our audience so that they can be seen and enjoyed. These materials will remain on our site indefinitely.

Materials that were published prior to the contest submission deadline are not eligible for this contest. This contest is intended to generate new materials.

Applicants are allowed to submit as many applications as they like, but can only receive a single prize. Applicants can only work in a single team, but they can make submissions both as a team and as individuals.

Teams must divide the winning money between all team members.

After the winners are announced, submissions featured on our website must be made available to our users at no charge. Therefore you may wish to produce a free demo of your submission if you would like to charge money for the full version elsewhere.

Join the community

We can make bigger and better things as a team. If you would like to form or join a team, please join our digital community space to network with other applicants, find people with complementary skills, share project ideas, and collaborate.


Judges

Judges are still being secured and will be announced partway through the contest.

Criteria

Submissions will be assessed according to the following criteria:

Effective at conveying the risks of superintelligence

Ultimately we want to see submissions that can increase awareness and engagement on the risks of superintelligence. Therefore it is crucial that your artwork communicates these risks in a way that is effective for your intended audience.

For example, this might involve carefully researching your audience, testing a prototype, and integrating their feedback.

We consider this criterion to be the most important — entries that score poorly on it are very unlikely to succeed.

Scientific accuracy and plausibility

We are looking for entries which present their ideas in a way that is consistent with the science of AI and the associated risks, and/or describe scenarios that are plausible in the real world. If your artwork presents the risks in a way that is unrealistic, it is unlikely to motivate an audience to take the issue seriously.

For example, you might wish to study some of the existing literature on smarter-than-human AI/superintelligence in order to ground your artwork in existing ideas and frameworks, or provide your artwork with some technical details.

Accessibility and portability

We want your submissions to be seen and enjoyed by a large population of people. We will consider it a bonus if your submission can be widely distributed without losing its effectiveness. Consider choosing a medium or format that can be enjoyed by many different audiences in many contexts — without losing sight of your intended core audience.

For example, you might choose to produce an e-book rather than a printed book, design a mobile-ready version of your website, or provide your artwork in multiple languages.

FAQs

Who is eligible to enter?

  • Submissions must be provided in English (or at least provide an English version).
  • Entrants can be from anywhere in the world (though to receive prize money they must be located somewhere we can send them money).
  • Entrants can be of any age.

How are winners chosen, and what do they receive?

Five winning entries will be decided by our panel of judges based on the three judging criteria. Each will be awarded a prize of $10,000 USD.

Teams must divide the winning money between all team members.

We may also award some 'Honorable mentions' to submissions which did not win a prize but deserve to be seen. These submissions will not receive any prize money but will be displayed on our site alongside the winning submissions.

Can we enter as a team?

Yes — though teams larger than 5 must contact taylor@futureoflife.org for approval. Each participant can only apply within a single team, but they are entitled to apply both as a team and as an individual.

Can we use AI tools to produce our submission?

Yes, if it improves the quality of your final work — but you should be strategic when deciding whether elements of your artwork are human-made or AI-generated. Also, you must disclose how and where AI-generated content was used in your artwork.

Contact

If you have any questions about the contest, please don't hesitate to contact us.

Contest lead

Questions about the contest brief, eligibility, and judging criteria.
taylor@futureoflife.org

Contest operations

Questions about the application form and prize payouts.
grants@futureoflife.org
