Bot visibility and authenticity: Building mediated conversations for inclusion

Abstract: 

Bots are automated applications that create and distribute content across social media platforms (Ford & Hutchinson, 2019). Lokot and Diakopoulos (2015, p. 683) suggest that bots within the news environment are "automated accounts that participate in news and information dissemination on social networking platforms". Regardless of the social or information arena in which bots operate, they contribute to our algorithmic lifestyles: "the sorting, classifying, and hierarchizing of people, places, objects and ideas" (Striphas, 2015, p. 395). In this frame, bots are not merely conversation starters but relationship creators between users across their information platforms, with the potential to significantly skew a discourse by affecting content visibility. As one example within this emerging field of literature, Liu (2019) found that one-third of user-generated content on Twitter is bot-created to manipulate consumer sentiment about brands, often in the form of rich-media tweets (e.g. videos and images). Alternatively, bots can potentially strengthen social media conversations through inclusiveness, respect and reciprocity (Rathnayake & Suthers, 2019). Findings such as these strengthen calls to incorporate more user/creator influence in automated media production processes, for example through the human/machine relationships that bots demonstrate. As Jones (2015) argues, positive relationship building requires a stronger relationship between humans and bots: one that enables bots to operate in useful ways without compounding our current communication dilemmas, such as disinformation, the rise of hate speech and the production of banal content.

Developing new and open methods that combine the humanities, social science and data science to understand the proliferation of social bots is crucial for scholars across the communication field. It is especially important for those researching cultural production, news and journalism, and for scholars concerned with the role that information and communication technologies play in the human communication process.

This paper performs four functions. First, we present a new methodological approach to identifying bots in social media, locating automated conversation within the macrostructure of a national or language-based Twittersphere. Second, we compare the Twitterspheres of Australia and Germany to identify the areas where bot automation is most likely to occur. Third, we map and measure the impact this automation has on particular Twitter arenas, for example by comparing German and Australian YouTuber and marketing conversations. Finally, we highlight potential areas and practices where bot automation across social media can be strengthened to support social-good conversation grounded in inclusiveness, respect and reciprocity.