HOW DOES THE EUROPEAN UNION (EU) UNDERSTAND ETHICAL AND UNETHICAL ARTIFICIAL INTELLIGENCE? NORMATIVE AI POLICY IN THEORY AND PRACTICE

Abstract: 

'Ethical AI' has become a catchphrase, yet scholars and engineers disagree on what precisely an "ethical Artificial Intelligence (AI) system" is (Bryson & Kime, 1998). While global industry proposes its own interpretation by means of voluntary guidelines, European policymakers counter with "ethical AI for Europe". But does this commonly used phrase make "ethical AI" an inherently European concept? Put differently, what is European about the EU's approach to AI regulation?

The lack of literature and recent policy developments call for in-depth research on the EU's political intentions in regulating AI. More generally, the interplay between EU-specific geographical, political, cultural and normative elements and EU technology governance is insufficiently assessed: it remains unclear how supranational norms and values contribute to the EU's understanding of 'good' AI governance. Both policymakers and scientists would therefore benefit from understanding how the EU envisages "good governance of AI" in light of its normative and value frameworks. This paper addresses these gaps by defining the normative concepts of "ethical AI" and "unethical AI" in light of the EU's political agenda. It then assesses two facial recognition application case studies and explains how the EU's value frameworks contribute to the "European" way of regulating AI. In doing so, the paper aims to strike a cross-disciplinary balance between political science, science and technology studies (STS) and normative ethics. It opens up the discussion of ethics from a policy and technology perspective and contributes more broadly to the development of a more systemic European technology policymaking agenda.

Mair et al. (2019) find that values and norms, rooted in national-cultural predispositions, are central to understanding political processes. These elements are, however, not properly understood and therefore insufficiently consulted in policymaking. Two interrelated issues follow. First, norms and values are insufficiently investigated scientifically in relation to (supra)national governance approaches. Second, it remains poorly understood how guiding normative and value frameworks shape political decision-making processes in the EU. This opens up the following research questions:

Which normative and value frameworks delineate the understanding of “ethical AI” and “unethical AI” in the EU? How do AI applications contribute to the EU’s policy definition of “ethical AI” and “unethical AI”?

The theoretical part of the paper assesses data protection literature and demonstrates that privacy-safeguarding values notably determine European technology policymaking. In the empirical part, policy analysis and two cross-sectoral case studies describe "ethical AI" and "unethical AI" in detail: first, the key norms and values encoded in the EU's AI policy documents; and second, how these normative elements are weighted against each other. The paper then links these findings to two facial recognition system case studies and assesses how "unethical AI" systems contribute to the policy vision of "ethical AI". Significant differences between the current value framework and actually deployed AI systems are expected. This paper therefore argues for further research on the guiding values in EU technology policymaking.