Frequently Asked Questions
- AI Open Letter: An open letter on maximizing the societal benefits of AI (Jan 11, 2015).
- Digital Economy Open Letter: An open letter by a team of economists about AI's future impact on the economy. It includes specific policy suggestions to ensure positive economic impact. (Jun 4, 2015)
- Autonomous Weapons Open Letter: An open letter from AI and robotics researchers urging a ban on offensive autonomous weapons beyond meaningful human control (Jul 28, 2015).
- Asilomar AI Principles: A set of principles to guide beneficial AI development (Jan 30, 2017).
- Open Letter to the United Nations Convention on Certain Conventional Weapons: An open letter from leaders of AI and robotics companies calling for a UN ban on lethal autonomous weapons (Aug 20, 2017).
- Pledge Against Lethal Autonomous Weapons: Companies and individuals pledge that they will not participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.
- 2018 Statement to the UN on Behalf of LAWS Open Letter Signatories: This statement was read on the floor of the United Nations during the August 2018 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons.
- 2019 Statement to the United Nations in Support of a Ban on LAWS: This statement was read on the floor of the United Nations during the March 2019 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons.
- Elon Musk donates $10M to keep AI beneficial: Press release for Elon Musk's donation and grants program (Jan 15, 2015).
- Request for proposals, FAQ, Timeline: Materials for the 2015 international grants competition - now closed (Jan 22, 2015).
- New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release about FLI grant program results (Jul 1, 2015).
- 2015 Grants Recommended for Funding: FLI grant awardees listing (Jul 1, 2015).
- AI Safety Research: Profiles of AI safety researchers funded by FLI (Oct 4, 2016).
- Elon Musk donates $2M to keep AGI beneficial: Press release for the second round of AI safety grants, which will focus on AGI safety (Jul 25, 2018).
- Research priorities for robust and beneficial AI: A summary of the research areas covered by our grants program.
- A survey of research questions for robust and beneficial AI: A collection of example projects and research questions within each area.
Conferences and workshops
- The Future of AI: Opportunities and Challenges: FLI's first AI conference, San Juan, Puerto Rico (Jan 2, 2015).
- Algorithms Among Us - The Societal Impacts of Machine Learning: NIPS symposium co-organized and sponsored by FLI (Dec 10, 2015).
- AI, Ethics and Society: AAAI workshop co-organized by FLI, with grant winners presenting (Feb 13, 2016).
- Reliable Machine Learning in the Wild: ICML workshop sponsored by FLI (Jun 23, 2016).
- Interpretable Machine Learning for Complex Systems: NIPS workshop sponsored by FLI (Dec 9, 2016).
- Beneficial AI (BAI 2017) conference and workshop: Sequel to our Puerto Rico conference, hosted in Asilomar, California (Jan 6, 2017).
- Ethics of Value Alignment Workshop (Dec 10, 2017).
- Beneficial AGI conference and workshop: Follow-up to our previous Puerto Rico and Asilomar conferences, San Juan, Puerto Rico (Jan 2, 2019).
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. Since its founding in 2014, FLI has worked to steer the development of transformative technologies towards benefiting life and away from extreme large-scale risks. Find out more about our mission or explore our work.