
Protect the EU AI Act

A last-ditch assault on the EU AI Act jeopardises one of the legislation's most important functions: preventing our most powerful AI models from causing widespread harm to society.
Published: November 22, 2023
Author: Future of Life Institute
AI-generated image from Stability.AI with outpainting from DALL-E.


As the White House takes steps to target powerful foundation models and the UK convenes experts to research their potential risks, Germany, France, and Italy have proposed exempting foundation models from regulation entirely – presumably to protect European companies like Aleph Alpha and Mistral AI from what they claim is overregulation. This approach is problematic for several reasons.

AI is not like other products

Firstly, the argument that no other product is regulated at the model level – rather than at the user-facing system level – is unconvincing. Companies such as OpenAI charge for access to their models and very much treat them as products. What’s more, few other products can give people instructions for making malware, building weapons, or propagating pathogens; this alone merits regulation.

General-purpose AI has been compared to a hammer because nothing in the design of a hammer can prevent users from harming others with it. Arguing on similar grounds, gun rights advocates contend that ‘guns don’t kill people, people kill people’. People are indeed flawed, and they are an essential contributor to any harm caused. However, regulatory restrictions on the original development and further distribution of a technology can reduce its destructive capacity and lethality regardless of how it is used, even if it falls into the wrong hands.

Downstream AI system developers and deployers will need to conduct use-case-specific risk mitigation. However, data and design choices made at the model level fundamentally shape safety and performance throughout the lifecycle. Application developers can reduce the risk of factual mistakes, but if the underlying model were more accurate and robust, its subsequent applications would be significantly more reliable and trustworthy. And if the initial training data contains inherent biases, discriminatory outputs will increase irrespective of what product developers do.

Foundation models are the bedrock of the AI revolution, so it is reasonable that their providers – who seek to supply their models to others for commercial benefit – should govern their training data and test their models for cybersecurity, interpretability and predictability, safeguards which simply cannot be implemented at the system level alone. Mandatory internal and external model-level testing, such as red teaming, is essential to verify capabilities and limitations and to determine whether a model is suitable for supply in the Single Market.

Foundation models are a single point of failure: their flaws will have far-reaching consequences across society that will be impossible to trace and mitigate if the burden is dumped on downstream system providers. Disproportionately burdening application developers does not incentivise foundation model providers to design adequate safety controls – a point the European Digital SME Alliance has rightfully raised on behalf of 45,000 enterprises. Without hard law, major providers will kick the can down the road to those with inevitably less knowledge of the underlying capabilities and risks of the model.

Codes of conduct are unenforceable

Secondly, codes of conduct – the favoured option of those advocating for foundation models to fall outside the scope of the AI rules – are mere guidelines, lacking the legal force to compel companies to act in the broader public interest.

Even if adopted, codes can be selectively interpreted by companies, which can cherry-pick the rules they prefer, causing fragmentation and insufficient consumer protection across the Union. As these models will be foundational to innumerable downstream applications across the economy and society, codes of conduct will do nothing to increase trust in, or uptake of, beneficial and innovative AI.

Codes of conduct offer no clear means of detecting and remedying infringements. This creates a culture of complacency among foundation model developers, as well as increased uncertainty for developers building on top of their models. Amid growing market concentration and diminishing consumer choice, why should they care if there is ultimately no consequence for wrongdoing? Users and downstream developers alike will be unable to avoid their products anyway, much like large digital platforms.

The voluntary nature of codes allows companies to simply ignore them. The European Commission was predictably powerless to prevent X (formerly Twitter) from exiting the Code of Practice on Disinformation. Self-regulation outsources democratic decisions to private power, whose voluntary – not mandatory – compliance alone cannot protect the public.

Model cards bring nothing new to the table  

Finally, the suggested model cards, introduced by Google researchers in 2019, are not a new concept and are already widely used in the market. Adding them to the AI Act as the solution for advanced AI changes nothing. One significant limitation of model cards lies in their subjective nature: they rely on developers’ own assessments, without third-party assurance. While model cards can provide information about training data, they cannot substitute for thorough model testing and validation by independent experts. Simply documenting potential biases within a self-regulatory framework does not effectively mitigate them.

In this context, the European Parliament’s proposed technical documentation, which would be required from foundation model providers, is a comprehensive solution. The Parliament mandates many more details than model cards, including the provider’s name, contact information, trade name, data sources, model capabilities and limitations, foreseeable risks, mitigation measures, training resources, model performance on benchmarks, testing and optimisation results, market presence in Member States, and an optional URL. This approach ensures thorough and standardised disclosures, reducing fragmentation while fostering transparency and accountability.

Protect the EU AI Act from irrelevance 

Exempting foundation models from regulation is a dangerous misstep. No other product can autonomously deceive users. Controls must begin upstream, not downstream. Voluntary codes of conduct and model cards are weak substitutes for mandatory regulation, and risk rendering the AI Act a paper tiger. Sacrificing the AI Act’s ambition of safeguarding 450 million people from well-known AI hazards – and, with it, of ensuring trust and uptake – would upset its original equilibrium, especially when existing proposals already balance innovation and safety effectively. Despite pioneering AI regulation internationally, Europe now risks lagging behind the US, which could end up setting global safety standards through American norms at the frontier of this emerging and disruptive technology.

This content was first published at futureoflife.org on November 22, 2023.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
