The risks we focus on
We are currently concerned with three major risks, all of which hinge on the development, use, and governance of transformative technologies. We focus our efforts on guiding the impacts of these technologies.
From recommender algorithms to chatbots to self-driving cars, AI is changing our lives. As the impact of this technology grows, so will the risks.
From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.
Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever, and new research reveals that their impacts would be even worse than previously thought.
UAV Kargu autonomous drones at the campus of OSTIM Technopark in Ankara, Turkey - June 2020.
How we are addressing these issues
There are many potential levers of change for steering the development and use of transformative technologies. We target a range of these levers to increase our chances of success.
We perform policy advocacy in the United States, the European Union, and the United Nations.
Our Policy work
We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our Outreach work
We provide grants to individuals and organisations working on projects that further our mission.
Our Grant Programs
We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our Events
What we're working on
Read about some of our current featured projects:
2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.
The Elders, the Future of Life Institute and a diverse range of preeminent public figures are calling on world leaders to urgently address the ongoing harms and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
We are opening two new funding opportunities to support research into the ways that artificial intelligence can be harnessed safely to make the world a better place.
Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control, and communications.
Our key recommendations include broadening the Act’s scope to regulate general-purpose systems and extending the definition of prohibited manipulation to cover any manipulative technique, as well as manipulation that causes societal harm.
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.
Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).
The Future of Life Institute accepted entries from teams across the globe to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
On 1-2 November 2023, the United Kingdom convened the first-ever global government meeting focused on AI safety. In the run-up to the summit, FLI produced and published a document outlining key recommendations.
View all projects
Regular updates about the technologies shaping our world
Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world. Each edition includes a summary of major developments in our cause areas and key updates on the work we do. Subscribe to our newsletter to receive these highlights at the end of each month.
Deepfakes are dominating headlines (with much more disruption expected), the Doomsday Clock has been set for 2024, AI governance updates, and more.
February 2, 2024
A provisional agreement is reached on the EU AI Act, highlights from the past year, and more.
December 22, 2023
Defending the EU AI Act against Big Tech lobbying, the 2023 Future of Life Award winners, our new partnership on hardware-backed AI governance, and more.
December 4, 2023
Read previous editions
The most recent posts we have published:
Leveraging corporate criminal liability under the Violence Against Women Directive to safeguard against pornographic deepfake exploitation.
February 22, 2024
The Elders, Future of Life Institute and a diverse range of co-signatories call on decision-makers to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
February 14, 2024
Our Futures Program, launched in 2023, aims to guide humanity towards the beneficial outcomes made possible by transformative technologies. This year, as […]
February 14, 2024
View all posts
The most recent policy papers we have published:
FLI Response to OMB: Request for Comments on AI Governance, Innovation, and Risk Management
FLI Response to NIST: Request for Information on NIST’s Assignments under the AI Executive Order
FLI Response to Bureau of Industry and Security (BIS): Request for Comments on Implementation of Additional Export Controls
Response to CISA Request for Information on Secure by Design AI Software
View all policy papers
Future of Life Institute Podcast
The most recent podcast episodes we have released:
View all episodes