
Disrupting the Deepfake Pipeline in Europe

Leveraging corporate criminal liability under the Violence Against Women Directive to safeguard against pornographic deepfake exploitation.
Published: February 22, 2024
Author: Alexandra Tsalidis
Image credit: Adapted from Mark J Sebastian, CC BY-SA 2.0, via Wikimedia Commons


Today, it is easier than ever to create exploitative deepfakes depicting women in a sexual manner without their consent. The recently negotiated EU directive on combating violence against women could finally bring justice for victims by holding AI model developers criminally accountable.

Deepfakes are AI-generated voices, images, or videos produced without the consent of the person depicted. The most common type, comprising at least 96% of all deepfakes, is pornographic, and women and girls make up 99% of victims. Many victims remain unaware that they have been the subject of a deepfake for months after the fact, during which the content garners thousands, sometimes millions, of views.

Given the widespread popularity of deepfake-generating AI systems, the most effective approach to counter deepfakes is for governments to institute comprehensive bans at every stage of production and distribution. Mere criminalization of deepfake production and sharing is insufficient; accountability must extend to the developers, model providers, service providers, and compute providers involved in the process.

Nevertheless, it is not necessarily illegal to create a sexually explicit deepfake in Europe. The final text of the EU AI Act imposes only transparency obligations on providers and users of certain AI systems and general-purpose AI models under Article 52. Such disclosure obligations do little to mitigate the harms of pornographic deepfakes, since in the majority of cases the content is consumed with full understanding that it is not truthful. For the same reason, the defamation laws of most EU Member States tend to be equally unhelpful for victims.

The forthcoming directive on combating violence against women could change that. On February 6, 2024, legislators reached a political agreement on rules aimed at combating gender-based violence and protecting its victims. The Directive specifically addresses deepfakes, describing them as the non-consensual production, manipulation, or alteration of material which makes it appear as though another person is engaged in sexual activities. The content must “appreciably” resemble an existing person and “falsely appear to others to be authentic or truthful” (Recital 19).

Publishing deepfakes would be considered a criminal offence under Article 7, as that would constitute using information and communication technologies to make sexually explicit content accessible to the public without the consent of those involved. This offence applies only if the conduct is likely to cause serious harm.

At the same time, aiding, abetting, or inciting the commission of an Article 7 offence would also be a criminal offence under Article 11. As such, providers of AI systems which generate sexual deepfakes may be captured by the directive, since they would be directly enabling the commission of an Article 7 offence. Given that many sites openly advertise their models' deepfake capabilities and that the training data is usually replete with sexually explicit content, it is difficult to argue that developers and providers play an insignificant or auxiliary role in the commission of the crime.

The interpretation of Article 11 could be a crucial first step for dismantling the pipeline which fuels sexual exploitation through deepfakes. The broadest reading of Article 11 would imply that developers are subject to corporate criminal liability.

One important hurdle is that corporate criminal liability does not apply uniformly across Europe: some Member States recognize corporations as entities capable of committing crimes, while others do not. Nevertheless, the application of Article 11 in at least some jurisdictions would be a tremendous step towards stopping the mass production of sexual deepfakes. After all, jurisdiction is established based on territory, nationality, and residence under Article 14.

The directive also briefly addresses the role of hosting and intermediary platforms. Recital 40 empowers Member States to order hosting service providers to remove or disable access to material violating Article 7, encouraging cooperation and self-regulation through a code of conduct. While this may be an acceptable level of responsibility for intermediaries, self-regulation is entirely inappropriate for providers who constitute the active and deliberate source of downstream harm.

The final plenary vote is scheduled for April. Whether this directive can protect women and girls from exploitation through harmful deepfakes hinges on whether the companies commercializing that exploitation are also held criminally liable.

This content was first published at futureoflife.org on February 22, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

Poll Shows Broad Popularity of CA SB1047 to Regulate AI

A new poll from the AI Policy Institute shows broad and overwhelming support for SB1047, a bill to evaluate the risk of catastrophic harm posed by AI models.
23 July, 2024

FLI Praises AI Whistleblowers While Calling for Stronger Protections and Regulation 

We need to strengthen current whistleblower protections. Lawmakers should act immediately to pass legal measures that provide the protection these individuals deserve.
16 July, 2024

Future of Life Institute Announces 16 Grants for Problem-Solving AI

Announcing the 16 recipients of our newest grants program supporting research on how AI can be safely harnessed to solve specific, intractable problems facing humanity around the world.
11 July, 2024

Evaluation of Deepfakes Proposals in Congress

How do the leading US legislative proposals on the issue of deepfakes compare?
31 May, 2024

Some of our projects

See some of the projects we are working on in this area:

Combatting Deepfakes

2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

AI Safety Summits

Governments are increasingly cooperating to ensure AI Safety. FLI supports and encourages these efforts.
