Request for Proposals on religious projects tackling the challenges posed by the AGI race
See below for our project priorities and full eligibility criteria. Please direct any questions about this grant program to grants@futureoflife.org.
I. Executive Summary
The major artificial intelligence corporations are racing to develop systems that not only exceed humans at particular cognitive tasks, but are also general-purpose, superhuman at most tasks, and, crucially, autonomous enough to erode meaningful human control and replace human labor. This race is soaking up unprecedented amounts of capital, data and energy, and concentrating power in ever fewer hands; fewer and fewer people are getting a say in what the future will look like. Meanwhile, as new capability benchmarks and levels of agency are reached, myriad new risks to humanity are introduced, for which the world is almost entirely unprepared. The need has never been greater for wiser leadership to draw moral red lines and avert peril. Religious leaders can provide this in unique ways. Equally, different groups with deeply held and fundamentally distinctive worldviews must get moving now to ensure that they can continue to pursue and protect their own particular visions of the good into the future. The time for religious groups to step up is now.
Initiatives supported by this program might involve academic research, policy efforts, public awareness campaigns, detailed surveying and polling of religious opinion on AI, the creation of resources and projects to equip leaders, workshops, debates, forums or other convenings of academics and faith leaders at local, national and even regional levels. Responding to the speed of AI developments, FLI will look to support fast-moving initiatives whose early results can be seen within a year.
We are looking to support projects that focus on either:
Working to educate and engage specific religious groups, bringing them to the table in the fight for a positive AI future.
Public outreach and organization on AI issues at a religious grassroots level, helping these communities to make their voices heard and protect their members and values.
For proposals in either of these two “streams”, we are looking to fund projects that grow and deepen the movement to deliver positive futures with AI. These are futures where advanced AI systems benefit everyone as tools, rather than as unpredictable and uncontrollable systems designed to replace people wholesale.
Among other things, we require project proposals to identify moral principles in the form of red lines that must not be crossed. It is obviously up to each group to decide what their red lines are, but if there are some future uses of AI that you consider totally unacceptable, we encourage you to mention these in your proposal and then make the corresponding red lines more specific and nuanced as part of your proposed project. For example, the May 2025 evangelical leaders’ letter to the President of the United States stated ‘we believe we should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control’; likewise, the recent Fraternity appeal to Pope Leo XIV proposed several red lines.
For more information, and to assess alignment, see Our Position on AI.
II. Background on Religion and AI
There are a growing number of initiatives from religious communities addressing the many theological, moral, political and social implications and challenges of AI. These efforts tend to sit at one of two ends of a spectrum running from local to global concerns: either individual leaders at the community level answering specific pastoral questions around the use of available AI tools, or leaders of denominations starting to issue much-needed statements. These statements begin to fill the gap of moral leadership, for both religious and secular society, left in the wake of the corporate race to build powerful technologies with little or no regard for the human consequences. Recent examples, alongside the two mentioned above, include Pope Leo himself addressing AI’s challenges, the American Security Fund’s Joint Statement on AI Ethics, or faith leadership responses to recent policy debates. In between, there have been academic efforts to grapple with the philosophical implications of artificial intelligence. Religious scholarship has only just begun seriously to respond to AI that threatens to replace and undermine human activity and thought, which raises profound questions such as what it means to be human, what the purpose of our existence ought to be, or to what end we should orient our societies. These are far from new questions for religious scholarship, but the answers must be newly and rigorously reapplied to the present and growing challenges of our moment.
There remains a need to join the dots between abstract reflections and the day-to-day developments and deployments of destabilising new technology in the real world – as well as how religious communities might decide to engage more actively with the changes being wrought upon the world by growing AI capabilities. Given the speed of capabilities advancements in the race towards AGI, it is a significant challenge for ecclesial structures, faith alliances, organizations, institutions and governing bodies to stay abreast of the latest developments, risks and challenges posed by the technology, let alone face them proactively and draw red lines around AI developments deemed morally unacceptable – above all, to avoid systems that are uncontrollable. Religious leaders (pastors, priests, rabbis, imams and others) remain largely under-informed and unequipped to meet the needs, and represent the newly applied beliefs, of their communities, from a position not merely of reactive panic but of awareness and authority.
New initiatives are required for religious communities to stand a chance of developing their own doctrinally informed positions on the development and use of increasingly capable AI that seeks to replace human labor, thought, and perhaps humanity itself. Religious communities have an immense opportunity to ground the conversation on frontier AI in deeper religious wisdom, as well as stand up for the perspectives of their own traditions, rather than caving in to pressure.
Guidelines:
The following are some of the concerns religious initiatives may do well to address, both as research and conference topics and, more importantly, as the focus of active initiatives:
Goals: How are religions to reckon with the philosophies and plans of the CEOs and thought-leaders of Silicon Valley? This includes the rejection of traditional morality; the desire for ultimate power, especially power concentrated in the hands of an unrepresentative secular elite; the aim to create an AI god or make gods of men with AI power; the notion of merging humans with machines; the potential for advanced AI to lock in particular value paradigms – namely the technocratic paradigm – and the implications this has for future religious communities; the ideologies of figures who express no concern and even excitement about humans being removed and replaced by a successor species; what this might mean for religious engagement with AI more broadly, and especially the pursuit of artificial general intelligence or superintelligence.
Replacement / Extinction: How should religious communities respond to large-scale risks, including the loss of control to AI systems and even our potential extinction? What do religious traditions have to say about handing over control of vital decision-making, in both military and civilian domains, to machines that lack a soul, a moral compass, accountability and human intuition?
Religious Liberty: AI systems are starting to affect all aspects of our lives, influencing vital decisions, how we manage our relationships, how we live our lives, how we define who we are – and how we explore, develop and live out our religious convictions. Power over these systems is concentrated in small, largely secular groups in the US and China. How can communities with profoundly different worldviews, cultures, traditions, beliefs, practices and observances co-exist and flourish in the age of manipulative AI controlled, if at all, by a centralized elite? How can people of faith preserve the liberty to manage their communities according to their religious convictions, even or especially when that includes the choice to ‘opt out’ or reject particular technological applications? And, looking ahead, what should religious liberty look like in a world transformed by AI?
Representation of views: Gaining a better sense of what different religious communities think about AI and its growing dominance over their lives, including the race to create AI gods, and then working to ensure these views are heard and taken into account in AI development, deployment and governance.
Labor: AI may prove to be the first technological revolution more transformative than the industrial revolution, and could cause massive levels of economic, social and political disruption. What are the moral consequences of depriving individuals of the dignity of work? What effect will mass unemployment have on religious liberty, family life, and communities of faith? Are there tasks that groups believe should always remain in human hands? How might religious communities protect the dignity of the worker in public policy endeavors?
Positive futures: What future world do we actually want to work towards? Rather than passively accepting the way things are going, the products we are offered and the risks posed by them, how can religious groups be more proactive in imagining positive outcomes and working towards them (be that through tech adoption, development, governance or otherwise)? How can these futures safeguard the primacy of humanity over machines?
Protection of the vulnerable: AI companions, chatbots and AI agents are becoming increasingly manipulative, with superhuman persuasive abilities (harnessing personal data and precise algorithmic adaptation), preying on young people especially – as well as the weak and the lonely. What role can religious life play in combatting the disturbing trends of AI manipulating and isolating the vulnerable? What red lines must be drawn around AI capabilities to protect human agency and dignity from manipulation? How can religious communities hold companies accountable and encourage policy responses, while also developing more resilient and appealing embodied communities to meet people’s real social needs?
Communities: AI companions are furthermore seeking to replace human relationships and isolate the vulnerable from communities, taking away even more human agency and control from individuals all over the world. It is often noted, even by atheists, that religions have proved to be resilient and adaptive builders and cultivators of long-lasting communities. In the modern age of digital relationships and widespread social isolation, how can religious services and meetings provide reliable ways for people to build deep and meaningful human relationships? How can such appealing roots and bonds be preserved and, where changes brought by AI demand it (and communities agree to it), remolded in the years to come?
III. Possible Projects
Examples of FLI-supported religious projects:
The following have been supported within FLI’s Religions Initiative, either as bespoke grants, contracts, or within previous grants programs, and are thus useful examples of the kind of religious projects FLI envisages within the scope of this new program:
- The Buddhism & AI Initiative
- Organized Intelligence – Latter-Day Saints Perspectives on AI
- The Nigerian Religious Coalition on AI
- Elevating the Church Committee (The Interdenominational Church Council on AI)
- IST and AI and Faith launch Religious Voices and Responsible AI Initiative
These projects vary immensely in their religious and cultural context, in their scope, and in aspects of their approach, but share an eagerness to bring one or more religious communities into AI discourse, and an urgent appreciation of the emerging threats introduced by the corporate race to AGI and ASI. All, finally, share a rapid pace of work, with, as mentioned, observable progress within the first year of funding.
IV. Evaluation Criteria & Project Eligibility
Individual grants of between $30k and $300k, up to $1.5M in total, will be available to recipients in non-profit organizations and institutions for projects of up to three years’ duration. The number of grants will depend on the number of promising applications. These applications will be subject to a competitive process of external and confidential expert peer review. Renewal funding is possible and contingent on submitting timely reports demonstrating satisfactory progress.
Due to the rapid development of AI, proposals will be prioritized based on the ability to demonstrate impact within one year. All projects will be evaluated for their overall relevance and expected impact. Proposals should include red lines as described in the Executive Summary.
Recipients can choose to allocate the funding in myriad ways, including:
Recruitment of movement-building and community organization professionals to supercharge efforts.
Providing support to existing religious organizations within, or representing, stakeholder groups to advance AI safety education and engagement.
Funding for specific new religious initiatives or even new religious organizations committed to educating and activating specific religious communities on AI issues.
Planning and executing events, social media campaigns and other communications activities to engage particular religious leaders and communities.
Conducting and disseminating research into the impacts of AI on specific religious communities to educate and activate them.
Conducting research on the questions outlined above, whether around religious concerns with AI developments or specific religious hopes for AI futures.
Creating tools and channels for the engagement/activation of different groups, or to facilitate widespread activism.
Recipients may not use this funding for lobbying, as defined in US law, or to support political candidates; nor can they employ it for the advocacy of positions unrelated to AI.
V. Timeline and Application Process
There is an open request for proposals. The deadline for these proposals is Monday, 2 February, 2026 at 11:59pm Eastern Time.
We plan to make all funding decisions and notify applicants in May 2026.
Please apply here.
Information Required for Proposal:
Contact information of the applicant;
A project summary not exceeding 200 words, explaining the proposed work;
An impact statement not exceeding 200 words detailing the project’s anticipated impact in supporting religious leadership and engagement in providing moral leadership on AI and steering towards wiser futures;
A statement on track record, not exceeding 200 words, explaining previous work, research and qualifications relevant to the proposed project;
Name of tax-exempt entity anticipated to receive the grant;
The religious and, where appropriate, denominational focus of the proposed initiative;
Contact information for the organization;
A detailed description of the proposed project. The proposal should be at most 8 single-spaced pages, using 12-point Times Roman font or equivalent, including figures and captions, but not including a reference list, which should be appended, with no length limit. This project description needs to explain clearly what concrete things will actually be done, when, and who is responsible for what. Larger financial requests are likely to require more detail;
A detailed budget over the life of the award. We anticipate funding projects in the $30k–$300k range. The budget must include justification and utilization distribution (drafted by or reviewed by the applicant’s institution’s grant officer or equivalent). Please make sure your budget includes administrative overhead if required by your institution (15% is the maximum allowable overhead; see below);
Curricula vitae for all senior project personnel.
Proposals will undergo a competitive process of confidential expert peer review, evaluated according to the criteria described above.
VI. Background on FLI
The Future of Life Institute (FLI) is an independent non-profit, established in 2014, that works to steer transformative technology towards benefiting life and away from extreme large-scale risks. The present request for proposals is part of FLI’s Outreach Program.
FAQ
1. Who is eligible to apply?
Teams or individuals affiliated with universities, seminaries, theological colleges, recognized religious institutions, denominationally-associated organizations, publications, convenors, faith charities, or as part of religious projects within other non-profits. The scope of these applicants’ focus could be as narrow as, say, a particular Christian denomination within one American state, or as broad as, for example, Muslim engagement across the Gulf States.
Individuals, groups or entities working in academic and other non-profit institutions are eligible. Grant awards are sent to the applicant’s institution, and the institution’s administration is responsible for disbursing the awards. Specifically at universities, when submitting your application, please make sure to list the appropriate grant administrator that we should contact at your institution.
If you are not affiliated with a non-profit institution, there are many fiscal sponsors that can help administer your grant.
2. Can international applicants apply?
Yes, applications are welcomed from any country. If a grant to an international organization is approved, to proceed with payment we will seek an equivalency determination and confirm that no sanctions or other legal restrictions preclude granting the funds. Your institution will be responsible for furnishing the requested information during the due diligence process. Our grants manager will work with selected applicants on the details.
3. Can I submit an application in a language other than English?
All proposals must be in English. Since our grant program has an international focus, we will not penalize applications from people who do not speak English as their first language, so long as the proposal is clear and intelligible. We will encourage the review panel to be accommodating of language differences when reviewing applications.
4. What is the overhead rate?
The highest allowed overhead rate is 15%.
5. How will payments be made?
FLI may make the grant directly, or utilize one of its donor advised funds or other funding partners. Our grants manager can work with selected applicants on the details.
6. Will you approve multi-year grants?
Multi-year grant applications are welcome, but if approved, only the first year will be officially awarded and paid out initially, with future payments at FLI’s discretion, contingent on satisfactory progress.
7. How many grants will you make?
We anticipate awarding up to $1.5M in grants; the actual total and number of grants will depend on the quality of the applications, as well as on the scope of the recipient projects. Aware of the wide range of funding needs among religious communities keen to act on this topic, we are open to a range of requested amounts. Requests could span from $30,000 up to $300,000; we anticipate most will fall somewhere around $100,000–$150,000.