Can we rely on information sharing?
The argument for applying AI Act obligations only at the end of the value chain is that regulation will propagate back up the chain. If EU small and medium-sized enterprises (SMEs) have to meet safety standards under the EU AI Act, they will only want to buy general-purpose AI systems (GPAIS) from companies that provide enough information and guarantees to assure them that the final product will be safe. Currently, however, our research demonstrates that general-purpose AI developers do not voluntarily provide such assurance to their clients.
We have examined the Terms of Use of major GPAIS developers and found that they fail to provide downstream deployers with any legally enforceable assurances about the quality, reliability, and accuracy of their products or services.
Table 1: Mapping Terms of Use conditions from major general-purpose AI developers which would apply to downstream companies.
| Condition | OpenAI Terms of Use | Meta AI's Terms of Service | Google API Terms, Terms of Service | Anthropic Terms of Service | Inflection AI Terms of Service |
|---|---|---|---|---|---|
| Services are provided “as is”, meaning the user agrees to receive the product or service in its present condition, faults included – even those not immediately apparent. | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Warranties, including those of quality, reliability, or accuracy, are disclaimed. | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| The developer is not liable for most types of damages, including indirect, consequential, special, and exemplary damages. | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Liability is limited to $200 (or less) or the price paid by the buyer. | ✔️ | ✔️ | ✔️ | ✔️ | |
| The developer is indemnified against claims arising from the user’s use of their models, only if the user has breached the developer’s terms. | ✔️ | ✔️ | ✔️ | | ✔️ |
| The developer is indemnified against claims arising from the user’s content or data as used with the developer’s APIs. | ✔️ | ✔️ | | | |
| The developer is indemnified against any claims arising from the use of their models. | | | | ✔️ | |
Note: In some jurisdictions, strong consumer protection laws will prohibit disclaiming implied warranties or excluding certain types of damages, but this is less likely for business-to-business transactions.
All five companies examined have strict clauses disclaiming any warranties about their products (both express and implied) and stating that their products are provided “as is”. This means that the buyer is accepting the product in its current state, with any potential defects or issues, and without any guarantee of its performance.
Furthermore, all five GPAIS developers inserted clauses into their terms stating that they would not be liable for most types of damages. For example, OpenAI states that they will not be liable “for any indirect, incidental, special, consequential or exemplary damages … even if [OpenAI has] been advised of the possibility of such damages”. In addition, most of them limit their liability to a maximum of $200 (or less) or whatever the business paid for their products or services.
In fact, many even include indemnity clauses, meaning that under certain circumstances the downstream deployer will have to compensate the GPAIS developer for liabilities arising from claims brought against the developer. Anthropic, which has the most far-reaching indemnity clause, requires that businesses accessing their models through APIs indemnify them against essentially any claim related to that access, even if the business did not breach Anthropic’s Terms of Service.
Given the asymmetry of power between GPAIS developers and downstream deployers, who tend to be SMEs, the latter will probably lack the negotiating power to alter these contractual terms. As a result, these clauses place an insurmountable due diligence burden on companies that are likely unaware of the level of risk they are taking on by using these GPAIS products.
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014.