Multistakeholder Engagement for Safe and Prosperous AI
Full title: Engagement and Education of the Public and Specific Stakeholder Groups to Deliver Safe and Prosperous AI Futures
In summary, this grant program:
- Seeks to fund projects that either:
- Educate and engage specific stakeholder groups on AI issues; or
- Deliver grassroots outreach and organization with the public.
- Projects funded are likely to be in the $100k-$500k range, and the total value of the grant program will be up to $5M.
- Projects can be funded for up to three years in duration. Multi-year grant applications are welcome, but if approved, only the first year will be officially awarded and paid out initially, with future payments contingent on satisfactory progress.
- Individuals, groups or entities working in academic and other non-profit institutions are eligible (if you are not affiliated with a non-profit institution, there are many fiscal sponsors that can help administer your grant).
- Anybody can apply, but applications must be in English.
- There is an open call for Letters of Intent (LOI). The deadline for these LOIs is Tuesday, February 4, 2025 at 11:59pm Eastern Time. Each LOI must include:
- A 200-word project summary;
- A 200-word impact statement;
- A 200-word statement on track record.
- Applicants will be informed by Tuesday, February 18, 2025 if they are invited to submit a full proposal.
See below for our project priorities and full eligibility criteria. Please direct any questions about this grant program to grants@futureoflife.org.
I. Summary
According to leading experts and the CEOs of AI corporations, capability advancements over the next five years will see the creation of incredibly powerful systems that outpace humans at virtually every task. It is almost certain that these systems will completely transform our society, our economy, and our very lives, for better or worse. Polls repeatedly reveal that industry, different communities, civil society, and the general public are rightly alarmed at the scale and speed of this transformation, and are anxious to ensure that corporations develop and deploy controllable and reliable AI systems that are designed to address specific problems and improve lives everywhere.
To date, the conversation around AI and policy has not had sufficient input from all the segments of society set to be impacted. If we are to create a future where AI serves everyone, not just a select few, discourse and decision-making urgently need participation from those affected, and we must do more to intentionally bring them to the table and make their voices heard. To help address this, the Future of Life Institute is launching a new grants program of up to $5 million to support projects that meaningfully engage, educate, and activate key stakeholder groups, as well as the general public, to help realize secure and prosperous futures with AI.
We are looking to make grants that focus on either:
- Working to educate and engage different specific stakeholder groups, bringing them to the table in the fight for positive AI futures.
- Public outreach and organization on AI issues at a grassroots level, helping members of the public make their voices heard.
For proposals in either of these “streams”, we are looking to fund projects that grow and deepen the movement to deliver positive futures with AI. These are futures where advanced AI systems benefit everyone as tools, rather than operating as unpredictable and uncontrollable systems. For more information, and to assess alignment, see Our Position on AI.
II. FLI on Movement Building
The future of AI has profound consequences for everyone on earth, and the world’s different communities should have a say in its continued development, especially when the stakes are so high. While recent years have seen incredible progress in driving awareness of the staggering implications of the technology for our economies, our societies, and our continued existence, the debate around advanced AI has remained something of a closed shop, limited to a handful of tech executives, policymakers and think tanks. While the enormous impacts on different stakeholder groups are acknowledged, little has been done to work with them individually, share relevant information on technological advancements, surface group-specific concerns, and provide them with opportunities for regional, national and international engagement. This massively harms the quality of discourse, hinders meaningful transparency and understanding of the technology and its effects, renders the AI discourse increasingly elitist and opaque, and hamstrings the creation of a safe, secure, and prosperous future with AI.
This can also cause lawmakers and other decision-makers to underestimate AI concerns within their constituencies, deterring them from taking the urgent action necessary to keep pace with AI’s dizzying rate of development. While many of them intellectually understand the many risks presented by an out-of-control race to develop increasingly powerful general-purpose AI, they are not convinced of widespread concern, and do not feel sufficient pressure to take action—especially in the face of enormous Big Tech influence and lobbying. Similarly, while there is substantial evidence of widespread concern within the general public around AI’s rapid advancement and increasing integration into their daily lives, a lack of grassroots education, engagement and organization causes this concern to be underestimated and underappreciated by lawmakers and other decision-makers.
The engagement and activation of key stakeholder groups, and the grassroots education and organization of the general public, would be instrumental in changing this.
Guidelines:
- We would like to see proposals that seek to engage and organize support for AI safety at a grassroots level, surfacing growing concerns and creating high-impact opportunities/platforms for mobilization.
- We have a strong preference for proposals that incorporate professional organizers, with the experience and expertise necessary to apply strategy/learnings from other successful grassroots campaigns.
- We would like to see proposals that seek to educate, engage and collaborate with specific stakeholder groups in efforts to deliver safe and beneficial AI. These groups could be demographic, professional, socioeconomic, religious, etc.
- Proposals should be evidence-based, drawing on learnings from other movement-building efforts to demonstrate grounds for success.
- Proposals should explain their approach, engagement and activation strategy, and why they are appropriate for the group in question. We have a strong preference for working with individuals/organizations who have considerable experience working with or for said groups.
- We will actively prioritize bipartisanship and inclusivity in our grant-making process, making sure to consider proposals that work to engage people, groups and organizations from across the political spectrum.
- Proposals should include clear timelines and measurable metrics for success.
III. Possible Projects
Projects will fit this call if they are broadly consistent with the vision and scope put forth above. Examples of potential projects include:
- Proposals from within key stakeholder groups (e.g. associations, unions) that aim to educate members, explore and identify group-specific concerns/impacts, and encourage action from leadership.
- Projects which seek to work with specific political groups on AI issues, exploring ongoing and forecasted developments within the context of different positions, priorities and worldviews, as well as subsequent activation.
- Experienced organizers applying lessons from other successful campaigns to surface widespread anxieties about advancing AI, coordinate and grow public engagement/activism, and catalyze a movement that is impossible to ignore.
- Evidence-based efforts to deliver large-scale protests around imminent AI threats and lawmaker inaction, with clear plans, timelines, and reasons to believe the project will be successful.
- Projects which look to support and supercharge the growth of existing, nascent AI safety movements within specific groups or constituencies.
- Organizations or associations representing specific stakeholder groups (e.g. religious groups) who want to educate and engage their members on issues related to AI safety.
- New platforms or communications channels which seek to facilitate and coordinate safety engagement from the general public or specific constituencies, e.g. webinars, event series.
- Public education and awareness campaigns which look to spotlight stakeholder-specific AI issues, meet groups “where they are”, and drive AI safety engagement.
- Local mobilization efforts, which look to amplify geographically-specific AI anxieties and coordinate concerned individuals into highly-visible movements.
- Highly-targeted and compelling social media campaigns or online protest movements which look to engage influencers and millions of users in the fight for safe and secure AI.
- The development of digital tools that can help to educate and engage specific stakeholder groups or the general public on AI safety.
Examples of projects which are unlikely to receive funding:
- Projects which are not sufficiently specific in their movement-building approach or tactics (e.g., why that group? why that particular organization strategy or form of grassroots activism?). Proposals for “general support” of existing organizations are unlikely to succeed.
- Projects which do not include sufficient expertise or experience in the proposed means of engagement, organization, or activation.
- Projects which are not grounded in evidence and do not present compelling and robust explanations for why they will succeed.
- Proposals which lack timelines for success and tangible objectives.
- Projects which are dismissive of the enormous implications and risks presented by increasingly powerful and uncontrollable AI systems.
- While we are eager to consider projects that seek to engage identifiable political constituencies, we are unlikely to support projects which are overly partisan in nature.
IV. Evaluation Criteria & Project Eligibility
Individual grants of between $100k and $500k, up to $5M in total, will be available to recipients in non-profit institutions, civil society organizations, industry associations, and academia for projects of up to three years’ duration. The number of grants awarded will depend on the number of promising applications. Applications will be subject to a competitive process of external and confidential expert peer review. Renewal funding is possible and contingent on submitting timely reports demonstrating satisfactory progress.
Proposals will be evaluated according to their relevance and expected impact.
Recipients could choose to allocate the funding in myriad ways, including:
- Recruitment of movement-building and community organization professionals to supercharge efforts.
- Providing support to existing organizations within/representing stakeholder groups to support AI safety education and engagement.
- Coordinating and organizing highly-visible public demonstrations on AI safety issues (online or in-person).
- Funding for specific new initiatives or even new organizations committed to educating and activating specific constituencies on AI safety issues.
- Planning and executing events, social media campaigns and other communications activities to engage groups and the general public.
- Conducting and disseminating research into the impacts of AI on specific communities to educate and activate them.
- Creating tools and channels for the engagement/activation of different groups, or to facilitate widespread activism.
V. Timeline and Application Process
There is an open call for Letters of Intent (LOI). The deadline for these LOIs is Tuesday, February 4, 2025 at 11:59pm Eastern Time.
Applicants will be informed by Tuesday, February 18, 2025 if they are invited to submit a full proposal. If invited, full proposals will be due Tuesday, March 11, 2025 at 11:59pm Eastern Time.
We intend to announce our selected applications by the end of April 2025. It will then take some time to process payments, so please do not apply for work that starts before June 1, 2025.
Information Required for Letter of Intent:
- Contact information of the applicant;
- A project summary not exceeding 200 words, explaining the work;
- An impact statement not exceeding 200 words detailing the project’s anticipated impact for driving grassroots activation and stakeholder engagement in AI safety;
- A statement on track record, not exceeding 200 words, explaining previous work, research and qualifications relevant to the proposed project.
Information Required for Full Proposal:
- Name of tax-exempt entity anticipated to receive the grant;
- Contact information for the organization;
- A detailed description of the proposed project. The proposal should be at most 8 single-spaced pages, using 12-point Times Roman font or equivalent, including figures and captions, but not including a reference list, which should be appended, with no length limit. Larger financial requests are likely to require more detail;
- A detailed budget over the life of the award. We anticipate funding projects in the $100k-$500k range. The budget must include justification and utilization distribution (drafted by or reviewed by the applicant’s institution’s grant officer or equivalent). Please make sure your budget includes administrative overhead if required by your institution (15% is the maximum allowable overhead; see below);
- Curricula Vitae for all senior project personnel.
LOIs and full proposals will undergo a competitive process of external and confidential expert peer review, and will be evaluated according to the criteria described above.
VI. Background on FLI
The Future of Life Institute (FLI) is an independent non-profit, established in 2014, that works to steer transformative technology towards benefiting life and away from extreme large-scale risks. The present request for proposals is part of FLI’s Outreach Program.
FAQ
1. Who is eligible to apply?
Individuals, groups or entities working in academic and other non-profit institutions are eligible. Grant awards are sent to the applicant’s institution, and the institution’s administration is responsible for disbursing the awards. Specifically at universities, when submitting your application, please make sure to list the appropriate grant administrator that we should contact at your institution.
If you are not affiliated with a non-profit institution, there are many fiscal sponsors that can help administer your grant.
2. Can international applicants apply?
Yes, applications are welcomed from any country. If a grant to an international organization is approved, we will seek an equivalency determination before proceeding with payment. Your institution will be responsible for furnishing the requested information during the due diligence process. Our grants manager will work with selected applicants on the details.
3. Can I submit an application in a language other than English?
All proposals must be in English. Since our grant program has an international focus, we will not penalize applications by people who do not speak English as their first language. We will encourage the review panel to be accommodating of language differences when reviewing applications.
4. What is the overhead rate?
The highest allowed overhead rate is 15%.
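For illustration only: under the common convention that overhead is charged as a percentage of direct costs (the exact basis varies by institution, so please confirm with your grant officer or equivalent), a budget at the maximum rate would work out as follows, using hypothetical figures:

\[
\text{overhead} \le 0.15 \times \text{direct costs}, \qquad \text{e.g. } 0.15 \times \$200{,}000 = \$30{,}000,
\]

for a total request of $230,000, which falls within the anticipated $100k-$500k range.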
5. How will payments be made?
FLI may make the grant directly, or utilize one of its donor advised funds or other funding partners. Our grants manager can work with selected applicants on the details.
6. Will you approve multi-year grants?
Multi-year grant applications are welcome, but if approved, only the first year will be officially awarded and paid out initially, with future payments contingent on satisfactory progress.
7. How many grants will you make?
We anticipate awarding up to $5M in grants; the actual total and number of grants will depend on the quality of the applications.