
Manifesto for the 2024-2029 European Commission

Ahead of the 2024 European elections, FLI has published its recommendations for the next European Commission mandate. Our manifesto sets out key areas of focus for the Commission as it implements the AI Act, including giving the AI Office the resources it needs to flourish and rebooting the AI Liability Directive.

Author(s): Policy Team
Date published: 13 March, 2024
Last updated: 28 March, 2024

The Future of Life Institute (FLI) is an independent nonprofit organisation with the goal of reducing large-scale risks and steering transformative technologies to benefit humanity, with a particular focus on artificial intelligence. Since its founding ten years ago, FLI has taken a leading role in advancing key disciplines such as AI governance, AI safety, and trustworthy and responsible AI, and is widely considered to be among the first civil society actors focused on these issues. FLI was responsible for convening the first major conference on AI safety in Puerto Rico in 2015, and for publishing the Asilomar AI Principles, one of the earliest and most influential frameworks for the governance of artificial intelligence, in 2017. FLI is the UN Secretary-General’s designated civil society organisation for recommendations on the governance of AI and has played a central role in deliberations regarding the EU AI Act’s treatment of risks from AI. FLI has also worked actively within the United States on legislation and executive directives concerning AI. Members of our team have contributed extensive feedback to the development of the NIST AI Risk Management Framework, testified at Senate AI Insight Forums, participated in the UK AI Safety Summit, and connected leading experts in the policy and technical domains to policymakers across the US government.


Europe must lead the way on innovating trustworthy AI

Policy recommendations for the next EU mandate

The rapid evolution of technology, particularly in artificial intelligence (AI), plays a pivotal role in shaping today’s Europe.

As AI capabilities continue to advance at an accelerated pace, the imperative to address the associated dangers becomes increasingly urgent. Europe’s future security is intricately linked to the formulation and implementation of measures that effectively mitigate the risks posed by AI technologies.

Myopic policies that fail to anticipate the potentially catastrophic risks posed by AI must be replaced with strategies that effectively counter emerging risks. Europe must continue to lead the way on AI governance, as it has repeatedly shown that its digital policies create global ripple effects. It must seize this opportunity to ensure that the deployment of AI aligns with ethical considerations and prioritises the safety of individuals and societies.

Key Recommendations

  1. Ensure that the AI Office is robust and has the ability to perform the tasks it has been set.
  2. Reboot the AI Liability directive to safeguard against unchecked risks and ensure accountability.
  3. Actively involve civil society organisations in the drafting of the Codes of Practice.
  4. Issue clear, concise, and implementable AI Act guidance.
  5. Proactively foster international collaboration.
  6. Build relationships with national competent authorities and ensure seamless collaboration on enforcement.
  7. Secure the future of AI regulation by addressing the AI Office funding challenge.

The AI Act is a done deal. Now it’s time to implement it.

With the historic adoption of the AI Act, the world’s first comprehensive hard-law regulation of AI, the focus will shift to its effective implementation and enforcement. This also necessitates renewed attention to complementary legislation, particularly the AI Liability Directive (AILD), to establish a holistic regulatory framework and solidify the EU’s position as a global leader. Prioritising the following areas will ensure that the shared goal of trustworthy, innovative, and safe AI is achieved:

i. Ensure that the AI Office is robust and has the ability to perform the tasks it has been set.

To ensure the robustness and efficacy of the AI Office within the European Commission, a series of strategic recommendations should be implemented. First, offering competitive salaries is essential to attract and retain top talent: adequate remuneration not only motivates technical experts who would otherwise be drawn to industry, but also reflects the value placed on their expertise. Moreover, appointing leaders with a deep understanding of AI technologies and the risks they pose is crucial; such leaders can articulate the mission and objectives of the AI Office and garner support and engagement from stakeholders within and outside the Commission.

Additionally, facilitating secondments from industry and civil society organisations, as the UK AI Safety Institute has done, can bring diverse perspectives and experiences to the AI Office, particularly in a context of limited resources. Temporary exchanges of personnel allow for knowledge transfer and collaboration, enriching the office’s monitoring and enforcement capabilities.

Furthermore, seamless collaboration between governance and technical teams, supported by effective leadership, operations, and human resources management, is paramount. Mirroring the range of roles and salaries made available by entities like the UK AI Safety Institute, the AI Office must provide sufficient incentives to attract experts who will further the Office’s goals, as prescribed by the AI Act.

ii. Reboot the AI Liability Directive to safeguard against unchecked risks and ensure accountability.

As the EU moves past the elections, it is necessary to resume work on the AI Liability Directive (AILD). The explosive growth of AI across manufacturing, healthcare, finance, agriculture, and beyond demands a robust legal framework that provides victims with recourse for damages caused by AI, thereby incentivising responsible development and deployment. Fragmentation across the Union, resulting from disparate national AI liability regimes, leaves citizens vulnerable under less protective liability approaches at the national level. It also creates legal uncertainty that hinders European competitiveness and inhibits start-ups from scaling across national markets.

The AILD would enable customers, both businesses and citizens, to understand which AI providers are reliable, creating an environment of trust that facilitates uptake. By establishing clear rules for different risk profiles – from strict liability for general-purpose AI (GPAI) models with systemic risk to fault-based liability for others – we can foster fairness and accountability within the AI ecosystem. Because frontier GPAI systems have the most advanced capabilities, they present a diverse range of potential and sometimes unpredictable harms, leading to informational asymmetries that disempower potential claimants. Moreover, the necessary level of care and the acceptable level of risk may be too difficult for the judiciary to determine given how rapidly the most capable GPAI systems are evolving.

Re-engaging with the Directive reaffirms the EU’s position as a global leader in AI regulation, complementing the AI Act and the Product Liability Directive (PLD) to create a holistic governance framework. The implementation of harmonised compensatory measures, covering both immaterial and societal damages, ensures uniform protection for victims throughout the EU. By addressing liability comprehensively and fairly, the AI Liability Directive can unlock the immense potential of AI for good while mitigating its risks. This is not just about regulating technology, but about shaping a future where AI empowers humanity, guided by principles of responsibility, trust, and the protection of individuals and society.

See FLI’s position paper on the proposed AI Liability Directive.

iii. Actively involve civil society organisations in the drafting of Codes of Practice.

It is essential for the Commission to actively involve civil society groups in the formulation of the Codes of Practice, as provided for by Article 52e(3) and Recital 60s of the AI Act. Both are ambivalent about civil society’s role, stating that civil society “may support the process” with the AI Office, which can consult civil society “where appropriate”. Collaborating with civil society organisations on the drafting of the Codes of Practice is crucial to ensure that the guidelines reflect the state of the art and consider a diverse array of perspectives. More importantly, the Codes of Practice will be relied upon until standards are developed, a process that is itself far from concluded. It is therefore crucial that the Codes of Practice accurately reflect the neutral spirit of the AI Act and are not co-opted by industry in an effort to dilute its obligations under the Act.

Civil society groups also often possess valuable expertise and insights, representing the interests of the wider public and offering unique viewpoints on the technical, economic, and social dimensions of various provisions. Inclusion of these stakeholders not only enhances the comprehensiveness and credibility of the Codes of Practice, but also fosters a more inclusive and democratic decision-making process. By tapping into the wealth of knowledge within civil society, the Commission can create a regulatory framework that is not only technically robust but also aligned with European values, reinforcing the commitment to responsible and accountable AI development within the EU.

iv. Issue clear, concise, and implementable AI Act guidance.

Another key goal for the new Commission and the AI Office is to commit to issuing timely, concise, and implementable guidance on AI Act obligations. Drawing on lessons learned from the implementation of past regulations such as the GDPR, where extensive guidance documents became cumbersome and challenging even for experts, the focus should be on creating guidance that is clear, accessible, and practical.

Article 3(2)(c) of the Commission Decision establishing the AI Office highlights its role in assisting the Commission in preparing guidance for the practical implementation of forthcoming regulations. This collaboration should prioritise the development of streamlined guidance that demystifies the complexities of specific duties, especially with regard to GPAI models with systemic risk. Clear guidance removes ambiguities in the text that could otherwise be exploited. It also makes providers’ duties, such as those of high-risk AI system developers, comprehensible, especially for SME developers with limited access to legal advice. The Commission should view guidance as an opportunity to start building lines of communication with SMEs, including start-ups and deployers.

For example, Article 62 of the AI Act centres on serious incident reporting and calls on the Commission to issue guidance on reporting such incidents. The effectiveness of Article 62 depends in large part on the comprehensiveness of the guidance the Commission provides.

v. Proactively foster international collaboration.

As the new Commission assumes its role, it is critical that it empowers the AI Office to spearhead international collaboration on AI safety. In accordance with Article 7 of the Commission Decision establishing the AI Office, which highlights its role in “advocating the responsible stewardship of AI and promoting the Union approach to trustworthy AI”, it is essential for the Commission to ensure that the AI Office takes a leadership position in fostering global partnerships. The upcoming AI safety summit in South Korea in May 2024 and the subsequent one in France in 2025 present opportune platforms for the EU to actively engage with other jurisdictions. When third countries take legislative inspiration from the EU, the AI Office can steer international governance according to the principles it has established through the AI Act.

Given the cross-border nature of AI, and for the purpose of establishing legal certainty for businesses, the AI Office should strive to work closely with foreign AI safety agencies, such as the recently established AI Safety Institutes in the US, UK, and Japan. Additionally, it must play a pivotal role in the implementation of global agreements on AI rules. In doing so, the EU can position itself as a driving force in shaping international standards for AI safety, reinforcing the Union’s commitment to responsible innovation on the global stage.

vi. Build relationships with national competent authorities and ensure seamless collaboration on enforcement.

In line with Article 59 of the AI Act, we urge the new Commission to closely monitor the designation of national competent authorities and foster a collaborative relationship with them for robust enforcement of the AI Act. The Commission should expend political capital to nudge Member States to abide by the 12-month deadline for designating their notifying and market surveillance authorities. While these national competent authorities will operate independently, the AI Office should maintain a publicly accessible list of single points of contact and begin building channels for collaboration.

To ensure effective enforcement of the AI Act’s pivotal provisions, Member States must equip their national competent authorities with adequate technical, financial, and human resources, especially personnel with expertise in AI technologies, data protection, cybersecurity, and legal requirements. Given the uneven distribution of resources across Member States, some will likely require more guidance and support from the Commission and the AI Office. It is crucial that the AI Board use its powers to facilitate the exchange of experience among national competent authorities, ensuring that differences in competence and resource availability do not impede incident monitoring.

vii. Secure the future of AI regulation by addressing the AI Office funding challenge.

Establishing the AI Office as mandated by the AI Act is crucial for effective governance and enforcement. However, concerns arise regarding the proposed funding through reallocation from the Digital Europe Programme, originally geared towards cybersecurity and supercomputing. This approach risks diverting resources from existing priorities while potentially falling short of the AI Office’s needs. Moreover, the absence of dedicated funding within the current Multiannual Financial Framework (MFF, 2021-2027) makes a proactive solution all the more necessary.

While the new governance and enforcement structure makes costs difficult to predict, established authorities like the European Data Protection Supervisor (EDPS) offer valuable benchmarks. In 2024 alone, the EDPS has a budget of €24.33 million and employs 89 staff members. Another relevant benchmark is the European Medicines Agency (EMA), with 897 employees and a 2024 budget of €478.5 million (of which €34.8 million comes from the EU budget). The AI Office would require financial resources comparable to other EU agencies, as well as an additional budget stream for the compute resources needed to evaluate powerful models. Recent reports suggest a budget of €12.76 million once the AI Office is fully developed in 2025, an amount that would fall short of securing the proper governance and enforcement of the AI Act. We therefore urge the Commission to take immediate action and:

  • Guarantee adequate funding for the AI Office until the next MFF comes into effect. This interim measure should ensure the Office can begin its critical work without resource constraints.
  • Negotiate a dedicated budget line within the MFF 2028-2034. This aligns with the strategic importance of the AI Office and prevents reliance on reallocations that could compromise other programmes.

Investing in the AI Office is not just a budgetary decision; it’s an investment in a robust regulatory framework for responsible AI development. By ensuring adequate funding, the Commission can empower the AI Office to effectively oversee the AI Act, safeguard public trust, and enable Europe to remain at the forefront of responsible AI governance.

Published by the Future of Life Institute on 13 March, 2024
