Stand with humanity.
Receive action alerts
Let others know

Find the right action for you
I’m new to AI risks
I’m concerned about powerful AI
I’m ready to be an AI safety advocate
New to AI Risks?
Key facts about AI today

AI capabilities are improving very fast
In just the last couple of years, AI models have learned to impersonate humans, generate life-like audio, images, and videos, beat the world’s best coders in competitions, and perform independent research.
Every few months, AI systems unlock new capabilities – and the rate of improvement is accelerating.

AI companies are building AIs to replace humankind
The world’s largest tech companies (Google, Amazon, Microsoft, Facebook) have stated that they are trying to build “AGI”: an AI system that can do all the tasks a human can do. Their ultimate goal is NOT to build AI products for you, but for your employer: AIs that can replace you.
Throughout history, automation has repeatedly caused people to lose their jobs... What’s new with AI is the speed and scale at which this will happen — and the potential for humans to lose control.

Experts are sounding the alarm
Thousands of experts, including leaders of AI companies, have warned about the massive risks of powerful AI systems — from bioweapons, to infrastructure attacks, to mass unemployment, even extinction. Countless public statements, open letters, and resignations indicate that these risks are very serious.
Despite this, policymakers are repeatedly bending to the will of big tech lobbyists and delaying even light, common-sense regulations on AI developers.
If you’re human, you’re already on the team.
Join us and represent #teamhuman. Since 2015, people like you have helped us fight for a human future, one in which humanity as a whole is empowered by AI tools to fulfill its potential, rather than replaced by artificial intelligence.
Here are just a few examples of how we've worked with people around the world to keep the future human:
- More than 33,000 individuals, including business leaders, community organizers, families, and concerned citizens, signed the ‘Pause Giant AI Experiments’ open letter calling for a pause on the development of more powerful AI systems. The letter sparked a global discussion on AI safety and ethics.
- Over 1,000 people have participated in workshops, courses, and hackathons to build positive visions of the future with AI. These visions are shaping the narrative around which technologies we should build, and which we should not.
- The world’s religious communities are awakening to the potential benefits, and also the great risks, of powerful AI. They are making their voices heard, and providing moral leadership on the development of emerging technology.
- When the fate of a 2024 California bill to protect people from AI risks was hanging in the balance, thousands of individuals (including youth leaders, parents, academics, AI experts, and even former AI lab employees) posted their stories on social media and signed open letters to demonstrate their support.
- More than 100 million people have watched and shared ‘Slaughterbots’, our viral video series on autonomous weapons, which policymakers still refer to today.
Your voice matters; be ready to use it. Our Action Alerts enable you to take concrete action at exactly the moment it matters most.
We must keep control over the future of our world; we must stop the development of superhuman AI.
Receive action alerts
Subscribe to the FLI newsletter
Start the conversation that matters.

Assets for social media
Concerned about powerful AI?
Getting to the heart of the AI storm.
“AI-powered bots are going to destroy the world.”

Are we close to an intelligence explosion?

Could we switch off a dangerous AI?

Why You Should Care About AI Agents
Our plan to keep humans in control.

Show them what AI is capable of today.

Convincing deepfakes are generated with just a few seconds of audio

DEMO: Voter Turnout Manipulation Using AI

An entire film generated by AI. How realistic do you think it is?

China's slaughterbots show WW3 would kill us all.
Our recommended reads

AI 2027

A Narrow Path

The Compendium
I’m ready to be an AI safety advocate.
Put pressure on companies to excel in safety.

Go in-depth while you’re on the move.



Add your voice to the list of concerned citizens.
Open letter calling on world leaders to show long-view leadership on existential threats
Pause Giant AI Experiments: An Open Letter
Asilomar AI Principles
Participate in our programs and initiatives
Our projects
Digital Media Accelerator
Give to the cause.
We’ve hardly made a dent in our list of project ideas. Donations enable us to grow as an organization and execute more of our plans.
Visit Our work to read more about the work we have done so far and the types of projects your donations would help support. Find out everything you need to know about donating on our dedicated page:

Back to the top
Did we miss something you need?
This webpage was recently overhauled to offer more, and more relevant, opportunities for action.
Let us know if you have any feedback or comments. We're particularly eager to hear:
- If there is something you were looking for, but couldn't find.
- Ideas for resources you would find helpful.
- Things you especially liked or disliked about this page.