Disrupting the Deepfake Pipeline in Europe

Leveraging corporate criminal liability under the Violence Against Women Directive to safeguard against pornographic deepfake exploitation.
Published: February 22, 2024
Author: Alexandra Tsalidis

Today it is easier than ever to create exploitative deepfakes depicting women in a sexual manner without their consent. The recently negotiated EU directive on combating violence against women could finally bring justice to victims by holding AI model developers criminally accountable.

Deepfakes are AI-generated voices, images, or videos produced without the consent of the person depicted. The most common type of deepfake, comprising at least 96% of instances, is pornographic, and women and girls make up 99% of its victims. Many of these victims remain unaware for months that they have been the subject of a deepfake, during which time the content garners thousands, sometimes millions, of views.

Given the widespread popularity of deepfake-generating AI systems, the most effective approach to counter deepfakes is for governments to institute comprehensive bans at every stage of production and distribution. Mere criminalization of deepfake production and sharing is insufficient; accountability must extend to the developers, model providers, service providers, and compute providers involved in the process.

Nevertheless, creating a sexually explicit deepfake is not necessarily illegal in Europe. Under Article 52, the final text of the EU AI Act imposes only transparency obligations on providers and users of certain AI systems and general-purpose AI models. Such disclosure obligations do little to mitigate the harms of pornographic deepfakes, given that in most cases the content is consumed in full knowledge that it is not truthful. For the same reason, the defamation laws of most EU Member States tend to be equally unhelpful to victims.

The forthcoming directive on combating violence against women could change that. On February 6, 2024, legislators reached a political agreement on rules aimed at combating gender-based violence and protecting its victims. The Directive specifically addresses deepfakes, describing them as the non-consensual production, manipulation, or alteration of material which makes it appear as though another person is engaged in sexual activities. The content must “appreciably” resemble an existing person and “falsely appear to others to be authentic or truthful” (Recital 19).

Publishing deepfakes would be considered a criminal offence under Article 7, as that would constitute using information and communication technologies to make sexually explicit content accessible to the public without the consent of those involved. This offence applies only if the conduct is likely to cause serious harm.

At the same time, aiding, abetting, or inciting the commission of an Article 7 offence would itself be a criminal offence under Article 11. Providers of AI systems that generate sexual deepfakes may therefore be captured by the directive, since they would be directly enabling the commission of an Article 7 offence. Given that many sites openly advertise their models' deepfake capabilities, and that the training data is usually replete with sexually explicit content, it is difficult to argue that developers and providers play an insignificant or auxiliary role in the commission of the crime.

How Article 11 is interpreted could be a crucial first step toward dismantling the pipeline that fuels sexual exploitation through deepfakes: the broadest reading would subject developers to corporate criminal liability.

One important hurdle is that corporate criminal liability does not apply uniformly across Europe: some Member States recognize corporations as entities capable of committing crimes, while others do not. Nevertheless, applying Article 11 in even some jurisdictions would be a tremendous step towards stopping the mass production of sexual deepfakes. After all, under Article 14, jurisdiction is established on the basis of territory, nationality, and residence.

The directive also briefly addresses the role of hosting and intermediary platforms. Recital 40 empowers Member States to order hosting service providers to remove or disable access to material violating Article 7, encouraging cooperation and self-regulation through a code of conduct. While this may be an acceptable level of responsibility for intermediaries, self-regulation is entirely inappropriate for providers who constitute the active and deliberate source of downstream harm.

The final plenary vote is scheduled for April. Whether this directive can protect women and girls from exploitation through harmful deepfakes hinges on whether the companies commercializing that exploitation are also held criminally liable.
