Discriminating algorithms: Who codes for whom?

Abstract: 

Over the last few years, numerous technological devices and software-based services have proved to be sexist, racist, or discriminatory in other ways: “Image recognition technologies miscategorize black faces, […] chatbots easily adopt racist and misogynistic language […], and Uber’s facial recognition doesn’t work for trans drivers” (Myers West et al. 2019: 6) – these are only a few examples.

Software itself need not be inherently sexist or discriminatory. However, algorithms depend on datasets that can themselves be discriminatory, or that are selected by software developers on the basis of beliefs and opinions that may be biased, especially considering that software production is a distinctly male-dominated field (Myers West et al. 2019: 5ff.). Furthermore, algorithms learn from and adapt to the information that (biased) users feed back to them. Against this background, it is understandable that several researchers have called attention to discrimination in the tech industry:

“The diversity problem is […] about gender, race, and most fundamentally, about power. It affects how AI companies work, what products get built, who they are designed to serve, and who benefits from their development.” (Myers West et al. 2019: 5)
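The mechanism sketched above can be illustrated with a minimal, hypothetical example (not drawn from the study itself; all names and numbers are invented): a toy “hiring” model that merely learns outcome frequencies from skewed historical data and, without any explicitly coded rule about gender, reproduces that skew in its recommendations.

```python
# Minimal, hypothetical sketch: a toy model that learns outcome
# frequencies from (invented) skewed historical hiring data and then
# reproduces that skew in its recommendations. No rule about gender is
# explicitly coded; the bias enters entirely through the training data.

from collections import Counter

# Invented, skewed training data: (gender, was_hired)
history = [("m", True)] * 80 + [("m", False)] * 20 \
        + [("f", True)] * 20 + [("f", False)] * 80

def train(records):
    """Learn, per group, how often the positive outcome occurred."""
    outcomes = Counter((group, hired) for group, hired in records)
    totals = Counter(group for group, _ in records)
    return {group: outcomes[(group, True)] / totals[group] for group in totals}

def recommend(model, group, threshold=0.5):
    """Recommend hiring whenever the learned rate for a group exceeds the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                  # {'m': 0.8, 'f': 0.2}
print(recommend(model, "m"))  # True  -- the historical skew becomes the decision
print(recommend(model, "f"))  # False
```

The point of the sketch is simply that the discriminatory outcome emerges from the data and the developers’ modelling choices, not from any line of code that names a protected category.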

Considering that we are living in a deeply mediatized and connected world (Hepp 2016), we need to acknowledge that software engineers play a significant role in shaping how we perceive the world; they “make decisions that shape the way in which society functions, in turn helping shape our social futures” (Bialski 2019). The discriminatory, often intersectional outcomes that coding can produce have already drawn the attention of several researchers (e.g. Neerukonda/Chaudhuri 2018), but more research is needed on the making of algorithms itself. I want to present a preliminary study for my dissertation that examines, from an intersectional perspective, how software is designed.

As Gillespie puts it: “Information systems are always swarming with people; we just can’t always see them” (Gillespie 2016: 26). To focus on the making of algorithms, I chose an ethnographic approach, accompanying software engineers in different companies in their everyday work. I combine data from observations in the field with semi-structured interviews in order to gain insight into their personal backgrounds, daily working routines, opinions and values. Ethnographic research on algorithmic systems is “crucial […] to explore how the systemic and the ad hoc coexist and are managed within them” (Gillespie 2016: 27). I want to ask why and how certain code is written, and in what way this correlates with the categorization of people, which may result in discriminatory software.

Literature

Bialski, Paula (2019): Mediale Teilhabe. Research project (Abstract). https://mediaandparticipation.com/team/paula-bialski/

Gillespie, Tarleton (2016): Algorithm. In: Peters, Benjamin (ed.): Digital Keywords. A vocabulary of information society and culture. Princeton/Oxford: Princeton UP.

Hepp, Andreas (2016): Kommunikations- und Medienwissenschaft in datengetriebenen Zeiten. In: Publizistik 61: 225-246.

Myers West, Sarah/Whittaker, Meredith/Crawford, Kate (2019): Discriminating Systems. Gender, Race, and Power in AI. https://ainowinstitute.org/discriminatingsystems.html

Neerukonda, Mounika/Chaudhuri, Bidisha (2018): Are technologies (gender-)neutral?: Politics and policies of digital technologies. In: ASCI Journal of Management 47(1): 32-44.