FLI Open Letters
We believe that scientists need to make their voices heard on matters of emerging technologies and their risks. The Future of Life Institute has facilitated this dialogue through numerous open letters over the years.
Add your signature to our most important published open letters:
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
March 22, 2023
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine.
June 6, 2018
The Asilomar AI Principles, coordinated by FLI and developed at the Beneficial AI 2017 conference, are one of the earliest and most influential sets of AI governance principles.
August 11, 2017
Autonomous weapons select and engage targets without human intervention. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
February 9, 2016
There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
October 28, 2015
All our other open letters
Here are all of the open letters we have published:
The abhorrent Ukraine war has the potential to escalate into an all-out NATO-Russia nuclear conflict that would be the greatest catastrophe in human history. More must be done to prevent such escalation.
October 18, 2022
The emergence of artificial intelligence (AI) promises dramatic changes in our economic and social structures as well as everyday life […]
June 14, 2020
Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.
March 13, 2019
The following statement was read on the floor of the United Nations during the August 2018 CCW meeting, in which […]
September 4, 2018
Nuclear arms are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought.
June 19, 2018
As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel […]
August 20, 2017
View the Autonomous Weapons Open Letter for AI & Robotics Researchers.
February 9, 2016
View the Research Priorities for Robust and Beneficial AI Open Letter.
February 4, 2016
An open letter by a team of economists about AI’s future impact on the economy. It includes specific policy suggestions […]
January 25, 2016
Inspired by our Puerto Rico AI conference and open letter, a team of economists and business leaders have now launched […]
June 19, 2015