
Our Position on AI

We oppose developing AI that poses large-scale risks to humanity, including via power concentration, and favor AI built to solve real human problems. We believe frontier AI is currently being developed in an unsafe and unaccountable manner.
Image: Our executive director Anthony Aguirre opening the 2024 Existential Hope hackathon at the Foresight Institute.
Published: May 27, 2024

The Future of Life Institute’s mission is to steer transformative technology towards benefiting life and away from extreme large-scale risks.

Summary of our position

FLI’s stance on advanced artificial intelligence is:

  • FLI opposes developing or deploying AI technologies that pose large-scale and extreme risk to humanity. This includes societal risks such as AI-triggered political chaos and epistemic collapse, physical risks such as AI-enabled biological, nuclear, or cyber catastrophic risks, and existential risks such as loss of control of or to future superhuman AI systems. Instead, FLI favors development of safe AI with robust, effective institutions to govern it.
  • FLI opposes extreme power concentration. This includes governments using AI for Orwellian surveillance and control, giant AI corporations creating stifling near-monopolies that amass extreme economic, social, and political power, or future AI being empowered at the expense of humanity. FLI favors democratic decision processes, many-participant competitive markets, and human empowerment.
  • FLI believes that frontier AI is currently being developed in an unsafe, unaccountable, and ungoverned manner en route to smarter-than-human general-purpose AI (a.k.a. artificial superintelligence or superhuman AGI) that is likely to be unpredictable, uncontrollable, unalignable and ungovernable by existing institutions. Therefore FLI supports a pause on frontier AI experiments, and a moratorium on developing artificial superintelligence for at least 15 years. This should remain in place until it can be developed with safety guarantees commensurate with its risks, in pursuit of widespread benefit, and after a meaningful participatory process of global deliberation and consent.
  • FLI favors human-empowering AI built for solving real human problems rather than for power-seeking. The vast majority of what people want from AI, including prosperity, health advances, scientific breakthroughs, turbocharged custom education, and drudgery-saving automation, can be achieved without runaway general AI capabilities. Rather than making people economically obsolete and replacing them with machines, AI should help them flourish; rather than empowering and enriching a handful of political leaders, moguls, or machines, AI should empower and enrich humanity.

Below we elaborate on these positions, and indicate past and present work by FLI aligned with them.

Extreme Risk

FLI opposes developing or deploying AI technologies that pose large-scale and extreme risk to humanity or society. Instead, FLI favors development of safe AI and robust, effective institutions to govern it.

AI can bring about incredible benefits for humanity. However, it also has the potential to cause widespread harm to people or society if sufficient steps are not taken to make it safe. Devastating AI-enabled risks, whether arising by accident or from misuse, include biological and chemical attacks, escalation of nuclear confrontation, widespread political chaos and epistemic collapse, extreme power concentration, economic disempowerment, loss of control to runaway artificial superintelligence, and more. Despite acknowledging these dangers, the AI corporations developing and deploying these systems continue to do so at a reckless and increasing pace, often reportedly treating safety as an afterthought.

It has become clear that regulatory intervention is critical to safeguard our shared future. It is the only mechanism able to deliver the transparency, oversight, and safeguards necessary to prevent escalating harms, mitigate catastrophic risks, and ensure AI’s benefits are shared by all. At the same time, far greater investment in technical safety solutions is also needed.

In line with these goals, FLI advocates for effective and meaningful governance of AI, including the implementation of robust licensing, auditing, oversight and liability regimes. To achieve this, we work with and advise lawmakers in the United States, European Union, United Nations and beyond. Effective AI governance also requires empowered institutions, and FLI works with key stakeholders to help design, staff, and support them. Alongside these regulatory efforts, FLI funds technical research to improve the quantitative safety of advanced AI systems, and executes educational campaigns designed to drive awareness of risks and necessary solutions.

Notable examples of our work on this topic include:

  • Advising on and advocating for critical AI governance frameworks and legislation, which includes strengthening the EU AI Act through the inclusion of general-purpose systems, calling for meaningful oversight, licensing requirements and safety standards for AI development in the U.S., and providing AI governance recommendations to the UN.
  • Funding technical research into robust and quantitative AI safety solutions, including the first-ever peer-reviewed grants program aimed at ensuring AI remains safe, ethical and beneficial, the FLI existential safety community, the Buterin safety research fellows, our ongoing exploration of secure hardware-backed AI governance tools with Mithril Security, and the Quantitative AI Safety Initiative.
  • Convening and consulting with global experts and decision-makers on the approaches and considerations necessary to realize inspirational futures and avoid large-scale risks. This includes convening the first large multistakeholder AI safety conference in 2015, coordinating and developing the Asilomar AI Principles, one of the first and most influential sets of AI governance principles, at our 2017 Beneficial AI Conference, and our global partnership with The Elders highlighting the need for “long-view leadership” to tackle existential threats.
  • Educating the public and policymakers about the large-scale physical risks resulting from AI, such as AI-enabled chemical, biological, radiological and nuclear attacks. These risks include the proliferation of lethal autonomous weapons and the integration of AI into nuclear command and control, which we helped highlight with our 2017 open letter from researchers and our viral Slaughterbots and Artificial Escalation film series.
  • Working with civil society, industry leaders, academic experts, thought-leaders, and lawmakers to mitigate escalating and widespread harms from ungoverned AI, including our ongoing campaign to Ban Deepfakes.

Power Concentration

FLI opposes extreme power concentration. FLI favors democratic decision processes, many-participant competitive markets, and human empowerment.

Ungoverned and accelerating technological development has the potential to concentrate vast amounts of power within a handful of advantaged groups, organizations and individuals. This would have disastrous consequences, and nowhere is this more true than with artificial intelligence.

Governments could weaponize AI’s incredible capabilities to wield Orwellian levels of surveillance and societal control. We have also seen first-hand with social media how AI can be utilized to manipulate public discourse, and access to this kind of information control could be supercharged by advanced AI, leading to paralyzing “truth decay” and the effective collapse of meaningful democratic participation.

Giant AI corporations could create stifling monopolies, granting them vast amounts of economic, social and political power that could surpass elected governments. As a small group of companies hoards power and capital at the expense of jobs and market competition, entire industries and large populations will become increasingly dependent on them – with no satisfactory guarantees that benefits will be shared by all. Both of these scenarios could quickly become irreversible: once AI concentrates sufficient cross-domain power within a specific group, it would lock in control by a small ruling party or handful of corporations.

AI powerful enough to command large parts of the political, social, and financial economy is unfortunately also powerful enough to do so on its own. Uncontrolled artificial superintelligences could rapidly take over existing systems and amass increasing amounts of power and resources to achieve their objectives, at the expense of human wellbeing and control, quickly bringing about near-total human disempowerment or even extinction.

AI-driven extreme-power concentration represents a catastrophic threat, and therefore FLI works actively to bolster democratic institutions and decision-making processes, establish and reinforce competitive markets with a multitude of participants, and empower individuals.

Notable examples of our work on this topic include:

  • Consistently and forcefully pushing for AI regulation and oversight, in the face of fierce opposition from Big Tech lobbying efforts. An example of this was our successful push to include foundation models in the scope of the EU AI Act, followed by our push for the Act’s adoption.
  • Collaborating on initiatives to improve epistemic infrastructure, such as the Improve The News Foundation and Metaculus, which aim to facilitate and democratize access to reliable information about the world despite narrative-control efforts by powerful corporations, governments and special-interest groups.
  • Supporting and funding efforts that help create democratic mechanisms and increase inclusive participation in the alignment and control of AI systems, such as the Collective Intelligence Project and the AI Objectives Institute.
  • Developing institutions and mechanisms for fairly and justly distributing vast amounts of capital that will be accrued through advanced AI development and deployment, to combat the devastating effects of extreme resource concentration – most notably our ongoing Windfall Trust project.
  • A new grants program (coming soon) of up to $5M to support projects that directly address the dangers of power concentration, and initiatives to bolster existing and new (and perhaps AI-leveraged) democratization mechanisms.

Pausing Advanced AI

FLI supports a pause on frontier AI experiments, and a moratorium on artificial superintelligence.

The development of AI systems that surpass human expert capability at general tasks carries enormous risk. At best, such systems would constitute a “second species” of intelligence that would disempower huge swathes of humanity as labor, planning, culture generation, and likely much decision-making are delegated to machines. At worst, these systems could deceive us, develop and pursue their own objectives, circumvent safeguards, exponentially self-improve, self-replicate, and displace humanity. Superintelligent systems are likely to be unpredictable, uncontrollable, unalignable and ungovernable by existing institutions. Through misalignment or misuse, they represent an existential threat to humanity, whether via terminal disempowerment or literal extinction.

Despite this widely acknowledged threat, frontier AI is currently being developed in an unsafe, unaccountable, and ungoverned manner. Each new training run constitutes an experiment, as the capabilities and risks of the next generation of systems cannot be predicted. This is unacceptable, especially in light of polls repeatedly revealing widespread alarm at the pace and recklessness of advanced AI development. FLI has called for, and continues to support, a pause on such new training runs, until adequate oversight, safety, and regulatory mechanisms are in place.

The goal of these experiments, as stated by many AI corporations, is the development of superhuman AGI. With this in mind, FLI supports a moratorium on developing such superhuman AGI or superintelligent systems for at least 15 years, and until their development can proceed with satisfactory and provable safety guarantees commensurate with their enormous risks.

This development should also be accompanied by a meaningful participatory process of global deliberation and consent. The risks of these systems are borne by everyone, and therefore we must work to ensure a diversity of global voices are included in guiding their development. Our shared future should not be decided by a handful of tech billionaires.

Notable examples of our work on this topic include:

  • In March 2023, FLI published an open letter calling for a six-month pause on giant AI experiments, which was signed by over 30,000 individuals including many of the world’s foremost experts, technologists, and industry leaders. It made global headlines and sounded the alarm on the out-of-control race to develop increasingly powerful and risky systems, which in turn kick-started regulatory conversations around the world.
  • FLI advises and works with the organizers of key AI policy convenings, such as the international series of AI Safety Summits, launched in the UK in November 2023, to help ensure that discussions include the risks of racing to develop increasingly powerful and risky advanced systems without sufficient safeguards. This includes the publication of our detailed comparison of different competing AI safety proposals. We’ve highlighted the weaknesses of “Responsible Scaling” approaches, which merely call for a voluntary self-pause under intense competitive pressure, after danger is manifest. As an alternative, FLI has put forth a safety standards policy that enforces requirements for safety before scaling.
  • Creating and disseminating media that illustrate artificial superintelligence and its implications, in order to drive public awareness and proportionate policy responses, including our communications and advertising campaign “Before it Controls Us” ahead of the UK AI Safety Summit.
  • Producing and publishing materials that explain the risks of developing artificial superintelligence, the need for verifiable security measures, and the benefits of instead developing controllable narrow and general-purpose systems. This includes the book Life 3.0 by FLI President Max Tegmark and Close the Gates to an Inhuman Future by FLI Executive Director Anthony Aguirre.

Human Empowerment

FLI favors human-empowering AI built to solve real human problems.

AI can deliver incredible benefits, solve intractable global problems, and realize inspirational futures. However, the vast majority of these benefits do not require artificial superintelligence and the massive risks that come with it.

As we have already seen, existing general-purpose AI systems, when empowered by specialized training, have the potential to fundamentally and beneficially transform health, science, productivity and more.

This transformative empowerment can start today. We already have the capability to leverage AI to research cures for many diseases, provide individual tutors for every child on the planet, work toward clean energy solutions, and realize countless innovations. However, to achieve this we must intentionally invest in developing AI to solve specific problems and achieve identifiable goals (the Sustainable Development Goals, for example).

However, AI investment to date has been largely focused on power-seeking, not problem-solving: developing very general systems that can replace people across as many sectors as possible, rather than being driven by particular needs. Despite vague predictions about utopian futures, there has been relatively little investment by frontier AI corporations in delivering identifiable benefits or solving specific problems. They continue to promise future abundance created by AI, but history provides little confidence that such abundance will be shared. Instead, labor market forecasts and diverging living standards are stoking growing alarm and widespread discontent: ever more individuals find themselves economically replaced as a small group of companies hoards wealth at the expense of jobs.

We should be using AIs to help humanity flourish, not to replace and disempower people. Highly capable AI systems have not been created by the ingenuity of a handful of AI corporation leaders, but by a diverse group of researchers (educated with mostly public funds) creating systems that were trained on the collective sum of humanity’s knowledge, creativity, and experience. We believe that the opportunities and benefits of AI should be open to and widely shared by humanity, not consolidated into the hands of a few.

Notable examples of our work on this topic include:

 

Artificial intelligence is one of our key cause areas – see our cause area profile for an introduction. Read about our mission to learn why we focus on these issues, or discover our work on these issues. Find out more about us, our team, history and finances.

