Configuring the AI user: Policy, deployment and design

Abstract: 

AI and data science are at the forefront of policy developments in Europe, the US, China and worldwide, with the potential for significant economic and social benefits as well as competitive advantage. Recent policy communications have begun to focus on the risks and societal impacts of AI in order to balance the potential for innovation with the protection of citizen rights. For example, the American AI Initiative aims at ‘protecting American values’ to ensure AI technologies are understandable, trustworthy, robust, and safe (www.ai.gov, 2020), while the EU White Paper on AI aims at a ‘European approach to AI’ based on trustworthy, safe and reliable AI services (EU 2020). The OECD and G20 have also published similar documents outlining recommended principles for ‘responsible stewardship of AI’. Each of these policy statements refers to ‘human-centred’ values and ‘multi-stakeholder’ engagement in policymaking, suggesting a focus on the people who will use these technologies. Yet, despite the long history of user-centred technology development, particularly in the field of human-computer interaction, the ‘user’ remains opaque within AI policy discourse.

This paper explores how the ‘user’ is configured in AI policy, and whether and how this entity differs from the stakeholder, citizen or ‘human’ in discourses on AI systems and their impacts. Through analysis of the four key policy documents shaping ‘trustworthy’ AI worldwide (US, EU, OECD, G20), it explores various configurations of the ‘user’ and the implications of these for policy and governance efforts. For example, according to the EU White Paper on AI, “the lifecycle of an AI system [includes] the developer, the deployer (the person who uses an AI-equipped product or service) and potentially others (producer, distributor or importer, service provider, professional or private user)”. This distinction between deployer/user and ‘private user’ could create challenges for effective governance of communication platforms. Meanwhile, the American AI Initiative distinguishes between ‘users and communities’ of autonomous vehicles, ‘human users’ of AI research output and ‘end users’ of Department of Defense AI capabilities, further complicating the potential for effective stewardship in contexts where all three could be present.

According to Grint & Woolgar (1997), assessing the impact of technology requires us to “separate the technology from some social group in the service of assessing the effects of one upon another”. However, the ‘user’ of AI is frequently neutralised by the autonomy of the system, complicating the kind of ‘boundary work’ (ibid.) required for policy and governance. AI policy documents may create expectations of trustworthy, even ‘ethical’ AI which can be used to reassure both governments and users (Kerr et al., forthcoming), rather than systematically considering and including users in AI design and development from the outset. This study contends that user-centred methods and techniques, developed over decades in HCI studies, are not considered in AI research and policy, possibly because the ‘design’ process for AI is no longer the central focus: abstract AI design is complete, and policy is now focused on deployment. The question is where, when and for whom.