
Can we rely on information sharing?

We have examined the Terms of Use of major General-Purpose AI system developers and found that they fail to provide assurances about the quality, reliability, and accuracy of their products or services.
Published: October 26, 2023
Author: Future of Life Institute


The argument for applying AI Act obligations only at the end of the value chain is that regulation will propagate back up the chain. If an EU small and medium-sized enterprise (SME) has to meet safety standards under the EU AI Act, it will only want to buy general-purpose AI systems (GPAIS) from companies that provide enough information and guarantees to assure it that the final product will be safe. Our research demonstrates, however, that general-purpose AI developers do not currently provide such assurances to their clients voluntarily.

We have examined the Terms of Use of major GPAIS developers and found that they fail to provide downstream deployers with any legally enforceable assurances about the quality, reliability, and accuracy of their products or services.

Table 1: Terms of Use conditions from major general-purpose AI developers that apply to downstream companies.


Developers and documents examined: OpenAI (Terms of Use), Meta (AIs Terms of Service), Google (API Terms, Terms of Service), Anthropic (Terms of Service), Inflection AI (Terms of Service).

| Terms of Use condition | Developers whose terms include it |
| --- | --- |
| Services are provided "as is", meaning the user agrees to receive the product or service in its present condition, faults included, even those not immediately apparent. | All five |
| Warranties, including those of quality, reliability, or accuracy, are disclaimed. | All five |
| The developer is not liable for most types of damages, including indirect, consequential, special, and exemplary damages. | All five |
| Liability is limited to $200 (or less) or the price paid by the buyer. | Four of the five |
| The developer is indemnified against claims arising from the user's use of their models, but only if the user has breached the developer's terms. | Four of the five |
| The developer is indemnified against claims arising from the user's content or data as used with the developer's APIs. | Two of the five |
| The developer is indemnified against any claims arising from the use of their models. | One (Anthropic) |

Note: In some jurisdictions, strong consumer protection laws prohibit the disclaimer of implied warranties or the exclusion of certain types of damages, but such protections are less likely to apply to business-to-business transactions.

All five companies examined have strict clauses disclaiming any warranties about their products, both express and implied, and stating that their products are provided "as is". This means that the buyer accepts the product in its current state, with any potential defects or issues, and without any guarantee of its performance.

Furthermore, all five GPAIS developers have inserted clauses into their terms stating that they will not be liable for most types of damages. OpenAI, for example, states that it will not be liable "for any indirect, incidental, special, consequential or exemplary damages ... even if [OpenAI has] been advised of the possibility of such damages". On top of this, four of the five cap their liability at $200 (or less) or whatever the business paid for the product or service.

In fact, many even include indemnity clauses, meaning that under certain circumstances the downstream deployer will have to compensate the GPAIS developer for certain liabilities if a claim is brought against the developer. Anthropic, which has the most far-reaching indemnity clause, requires that businesses accessing its models through APIs indemnify it against essentially any claim related to that access, even if the business has not breached Anthropic's Terms of Service.

Given the asymmetry of power between GPAIS developers and downstream deployers, who tend to be SMEs, the latter will probably lack the negotiating power to alter these contractual terms. As a result, these clauses place an insurmountable due diligence burden on companies that are likely unaware of the level of risk they are taking on by using these GPAIS products.


