Computational propaganda – friend or foe to South African political party brands

Abstract: 

Due to the ideological nature of politics and the norm of online aggression against political figures, such figures have begun to fall victim to a form of junk news called “computational propaganda.” Computational propaganda is the use of algorithms and automation to distribute misleading information via social media. Instances of this form of political manipulation have affected democratic processes in many countries. Evidence includes electoral influence through automated disinformation in Brazil (Arnaudo, 2017), France (Farand, 2017) and Germany (Neudert, 2017); the Cambridge Analytica project in the United States (Greenfield, 2018); and, in South Africa, the White Monopoly Capital campaign orchestrated by the disgraced UK-based public relations agency Bell Pottinger (ANCIR, 2017).

Political parties have invested in digital marketing and social media since Barack Obama’s successful presidential social media campaign in 2008 (Aaker & Chang, 2009), and such investment has continued to grow. With the advent of a ‘post-truth’ world (BBC, 2017), the challenge political parties face is to design or regulate social media in ways that reduce electoral interference. In light of the growing influence of computational propaganda and the growing number of social media users, it is unclear how computational propaganda manifests in South African politics. It is also unclear how social media experts view questions of engagement and algorithmic filtering, freedom of speech, and censorship in the South African political context; these questions are the focus of this research.

Research question: How do South African social media experts understand computational propaganda in the South African political context?

The research is grounded in critical modernism (Habermas; Gramsci), which attends to how the distribution of power is altered and to listening to excluded and marginalized voices. While the nature of social media embodies this alteration of power relations and knowledge systems among individuals and institutions, computational propaganda could be a hindrance to it.

This research took an exploratory approach, using eight semi-structured interviews with social media managers in South Africa, followed by an interview with a South African political party representative to contextualize the findings. Participants were selected through purposive, non-probability sampling, specifically snowball sampling.

Using thematic analysis (Flick, 2015), the study focused on bots as the most easily identifiable form of computational propaganda. In South African politics, bots manifest in a multitude of ways and for a variety of purposes: bots used to leak news, agenda-based bots that reply to specific subject matter, and extensive sock-puppet networks popularized during the “white monopoly capital” disinformation campaign. Because social media platforms monetize engagement through advertising, and because engagement is driven primarily by algorithms, it is difficult for them to curb computational propaganda. Regulating the creation and spread of disinformation poses a problem for social media platforms and experts, especially where users curate and distribute fake news not out of malicious intent but out of belief. Increased legislation may lead companies to over-regulate their platforms, thereby limiting space for debate, art, politics, and other forms of expression. Such regulation could unintentionally galvanize unwitting arbiters of disinformation, who believe they are the victims of hidden agendas.