
Digital Media Accelerator

The Digital Media Accelerator supports digital content from creators raising awareness and understanding of ongoing AI developments and issues.

With AI companies racing unchecked to build increasingly powerful AI - including artificial general intelligence (AGI), despite having no way to control it - the rest of humanity is hurtling towards a dangerous and uncertain future. Top AI experts agree that mitigating the risk of extinction from AI should be a global priority - with AI company leaders themselves explicitly acknowledging the dangers of the technology they're building. However, it is not too late to change trajectory. We can bring about a secure, prosperous future with controllable, reliable Tool AI - systems intentionally built to empower humans rather than replace us - if we're willing to take decisive action now.

However, public awareness and understanding remain limited. The FLI Digital Media Accelerator program aims to help creators produce content, grow their channels, and spread the word to new audiences. We're looking to support creators who can explain complex AI issues - such as AGI implications, control problems, misaligned goals, or Big Tech power concentration - in ways that their audiences can understand and relate to. By supporting a variety of content formats across different platforms, we want to spark broader conversations about the risks of developing ever more powerful AI, including AGI.

We are keen to support creators who:

  • Already have a following and are interested in raising awareness about AI safety and risk; and
  • Have compelling ideas for impactful pieces of content that bring AI safety/risk awareness to new audiences.

We are especially eager to support creators interested in discussing AI safety on an ongoing basis, incorporating it into their regular output.

Please see below for examples of content we may be interested in funding:

  • A series of YouTube explainer videos about AI safety topics
  • A channel, such as a TikTok account, dedicated to daily AI news
  • A Substack or newsletter dedicated to explaining new AI safety research papers
  • An existing account with a specific focus (e.g. education) creating a short series exploring labor impacts of advanced AGI
  • An existing tech updates channel sharing more about AI risk
  • A podcast series for the general listener on progress towards AGI and its implications

Please reach out to maggie@futureoflife.org if you have any questions.

Application form

Our work

Other projects in this area

We work on a range of projects across a few key areas. See some of our other projects in this area:

Superintelligence Imagined Creative Contest

A contest for the best creative educational materials on superintelligence, its associated risks, and the implications of this technology for our world. Five prizes of $10,000 each.

The Elders Letter on Existential Threats

The Elders, the Future of Life Institute and a diverse range of preeminent public figures are calling on world leaders to urgently address the ongoing harms and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
