View the Research Priorities for Robust and Beneficial AI Open Letter.
Signatories
How does verification work?
Verified signatures are those which we have taken one or more extra steps to confirm as legitimate:
• Direct contact - We have been in direct contact with this person to verify that they have signed the letter.
• Declaration URL - This person has made a public declaration of signing the open letter which can be viewed online.
All published signatures, whether marked ‘verified’ or not, pass through several checks: email verification, spam and duplicate filtering, and review by a member of our data vetting team.
Related posts
If you enjoyed this, you might also like:
Say No to the Federal Block on AI Safeguards (Closed)
We must halt the Big Tech attempt to undermine AI safeguards.
6 June, 2025

Open letter calling on world leaders to show long-view leadership on existential threats (2,672 signatures)
The Elders, Future of Life Institute and a diverse range of co-signatories call on decision-makers to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
14 February, 2024

AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats (Closed)
This joint open letter by Encode Justice and the Future of Life Institute calls for the implementation of three concrete US policies in order to address current and future harms of AI.
25 October, 2023

Pause Giant AI Experiments: An Open Letter (31,810 signatures)
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
22 March, 2023