AI Policy – European Union

Europe has a robust AI industry, and countries within the EU have continued to emphasize the importance of joining forces and presenting a unified “European AI Alliance”. This is seen as a means to increase competitiveness with countries like the United States and China, and to ensure respect for “European values”. In April 2018, 25 European countries signed a Declaration of Cooperation on Artificial Intelligence. Although some of these countries also have national AI initiatives, they emphasized the importance of working together to enhance research and deployment while dealing collectively with social, economic, ethical and legal questions.

In March 2018, the European Commission established a High-Level Expert Group to gather expert input and develop guidelines for AI ethics, building upon a statement by the European Group on Ethics in Science and New Technologies. The High-Level Expert Group published its ethics guidelines for trustworthy AI in April 2019.

On April 25, 2018, the European Commission put out a communication that outlines the European approach to AI as resting on three pillars: encouraging uptake while staying ahead of technological developments; preparing for socio-economic changes; and ensuring an appropriate ethical and legal framework. This informed the European Commission’s overarching approach to AI, which focuses simultaneously on AI’s ability to boost the EU’s research and industrial capacity and on ensuring that AI works in the service of European citizens. In April 2019, the Commission published another communication, “Building Trust in Human Centric Artificial Intelligence”.

One component of the “European AI Alliance” is the Digital Single Market strategy, adopted in May 2015 to enhance digital opportunities for people and businesses throughout Europe. It has since been updated in various ways, for example to further the availability of data throughout the EU and to establish an EU Cybersecurity Agency and a European certification scheme for digital products. On June 6, 2018, the European Commission proposed an updated Digital Europe programme with an investment of €9.2 billion, aligning the next long-term EU budget (2021–2027) with growing digital challenges. Of this, €2.5 billion is planned to help spread AI across the European economy and society, building on the European approach to AI presented on April 25, 2018. The Digital Europe programme will increase AI investments in research and innovation under Horizon Europe and expand access to AI for public authorities and businesses, for example by developing ‘European libraries’ of algorithms and industrial data spaces for AI in Digital Innovation Hubs that would be accessible to all.

In January 2017, the European Parliament’s Committee on Legal Affairs (JURI) adopted a report on Civil Law Rules on Robotics. Based on the report, the committee held a public consultation on the future of robotics and artificial intelligence, aiming to stimulate debate and seek views on how to address the ethical, economic, legal and social issues raised by developments in robotics and AI. The results and a summary of this consultation were made available in October 2017. Meanwhile, based on the recommendations in the report, the European Parliament voted on a resolution in February 2017 to regulate the development of artificial intelligence and robotics across the European Union. The Joint Declaration on the EU’s legislative priorities for 2018–19 additionally named data protection, digital rights, and ethical standards in artificial intelligence and robotics as priorities.

In May 2018, the General Data Protection Regulation (GDPR) – a wide-ranging regulation intended to strengthen and unify data protection for all individuals within the EU – went into effect. GDPR was approved by the European Parliament on April 14, 2016 and replaces the Data Protection Directive 95/46/EC. It extends the scope of EU data protection law to all foreign companies processing the data of EU residents. GDPR has implications for AI for several reasons, including that it requires a degree of explainability, which can be challenging with “black box” AI systems. In the context of automated decision-making, Article 22 requires that “the controller must allow for a human intervention and the right for individuals to express their point of view, to obtain further information about the decision that has been reached on the basis of this automated processing, and the right to contest this decision.”

Additional Links and Resources

[return to AI policy home page]