
Can we rely on information sharing?

We have examined the Terms of Use of major General-Purpose AI system developers and found that they fail to provide assurances about the quality, reliability, and accuracy of their products or services.
Published:
October 26, 2023
Author:
Future of Life Institute


The argument for applying AI Act obligations only at the end of the value chain is that regulation will propagate back up it. If an EU small and medium-sized enterprise (SME) has to meet safety standards under the EU AI Act, it will only want to buy general-purpose AI systems (GPAIS) from companies that provide enough information and guarantees to assure it that the final product will be safe. Currently, however, our research demonstrates that general-purpose AI developers do not voluntarily provide such assurance to their clients.

We have examined the Terms of Use of major GPAIS developers and found that they fail to provide downstream deployers with any legally enforceable assurances about the quality, reliability, and accuracy of their products or services.

Table 1: Mapping Terms of Use conditions from major general-purpose AI developers which would apply to downstream companies.

Terms examined: OpenAI (Terms of Use), Meta (AIs Terms of Service), Google (API Terms and Terms of Service), Anthropic (Terms of Service), Inflection AI (Terms of Service).

- Services are provided “as is”, meaning the user agrees to receive the product or service in its present condition, faults included – even those not immediately apparent. (all five)
- Warranties, including those of quality, reliability, or accuracy, are disclaimed. (all five)
- The developer is not liable for most types of damages, including indirect, consequential, special, and exemplary damages. (all five)
- Liability is limited to $200 (or less) or the price paid by the buyer. (four of five)
- The developer is indemnified against claims arising from the user’s use of their models, only if the user has breached the developer’s terms. (four of five)
- The developer is indemnified against claims arising from the user’s content or data as used with the developer’s APIs. (two of five)
- The developer is indemnified against any claims arising from the use of their models. (one of five)

Note: In some jurisdictions, consumer protection laws are strong and will prohibit the disclaimer of implied warranties or certain types of damages, but this is less likely for business-to-business transactions.

All five companies examined include strict clauses disclaiming all warranties about their products, both express and implied, and stating that their products are provided “as is”. This means the buyer accepts the product in its current state, with any defects or issues, and without any guarantee of its performance.

Furthermore, all five GPAIS developers have inserted clauses into their terms stating that they will not be liable for most types of damages. OpenAI, for example, states that it will not be liable “for any indirect, incidental, special, consequential or exemplary damages … even if [OpenAI has] been advised of the possibility of such damages”. On top of this, the developers cap their liability at a maximum of $200 (or less), or the price the business paid for their products or services.

In fact, many even include indemnity clauses, meaning that under certain circumstances the downstream deployer must compensate the GPAIS developer for liabilities the developer incurs if a claim is brought against it. Anthropic has the most far-reaching indemnity clause: businesses accessing its models through the API must indemnify Anthropic against essentially any claim related to that access, even if the business did not breach Anthropic’s Terms of Service.

Given the asymmetry of power between GPAIS developers and downstream deployers, who tend to be SMEs, the latter will probably lack the negotiating power to alter these contractual terms. As a result, these clauses place an insurmountable due diligence burden on companies that are likely unaware of the level of risk they take on by using these GPAIS products.

This content was first published at futureoflife.org on October 26, 2023.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
