From Insult to Hate Speech: Mapping Offensive Language in German User Comments on Immigration

2021, Vol. 9 (1), pp. 171–180
Author(s): Sünje Paasch-Colberg, Christian Strippel, Joachim Trebbe, Martin Emmer

In recent debates on offensive language in participatory online spaces, the term ‘hate speech’ has become especially prominent. Originating from a legal context, the term usually refers to violent threats or expressions of prejudice against particular groups on the basis of race, religion, or sexual orientation. However, due to its explicit reference to the emotion of hate, it is also used more colloquially as a general label for any kind of negative expression. This ambiguity leads to misunderstandings in discussions about hate speech and challenges its identification. To meet this challenge, this article provides a modularized framework to differentiate various forms of hate speech and offensive language. On the basis of this framework, we present a text annotation study of 5,031 user comments on the topic of immigration and refuge posted in March 2019 on three German news sites, four Facebook pages, 13 YouTube channels, and one right-wing blog. An in-depth analysis of these comments identifies various types of hate speech and offensive language targeting immigrants and refugees. By exploring typical combinations of labeled attributes, we empirically map the variety of offensive language in the subject area ranging from insults to calls for hate crimes, going beyond the common ‘hate/no-hate’ dichotomy found in similar studies. The results are discussed with a focus on the grey area between hate speech and offensive language.
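The abstract does not spell out the framework's individual modules; purely as an illustration of how a modularized labelling scheme differs from a single hate/no-hate flag, the Python sketch below separates the target of a comment from the form of offence and any call to action. All module and label names in the sketch are assumptions made for illustration, not the authors' actual categories.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: the module and label names below are assumptions,
# not the annotation scheme actually used in the study.
@dataclass
class CommentAnnotation:
    comment_id: str
    target_group: Optional[str] = None    # e.g. "refugees", "muslims", "journalists"
    offence_form: Optional[str] = None    # e.g. "insult", "negative stereotype", "dehumanization"
    call_to_action: Optional[str] = None  # e.g. "exclusion", "violence"; None if absent

    def is_offensive(self) -> bool:
        """Offensive if at least one module carries a label, rather than a binary hate/no-hate flag."""
        return self.offence_form is not None or self.call_to_action is not None


# Typical combinations of labels can then be counted across a corpus,
# e.g. how often "refugees" co-occurs with "negative stereotype".
example = CommentAnnotation("c-001", target_group="refugees",
                            offence_form="negative stereotype")
print(example.is_offensive())  # True
```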

2021
Author(s): Sünje Paasch-Colberg, Joachim Trebbe, Christian Strippel, Martin Emmer

In the past decade, the public discourse on immigration in Germany has been strongly affected by right-wing populist, racist, and Islamophobic positions. This becomes evident especially in the comment sections of news websites and social media platforms, where user discussions often escalate and trigger hate comments against refugees and immigrants and also against journalists, politicians, and other groups. In view of the threatening consequences such sentiments can have for groups who are targeted by right-wing extremist violence, we take a closer look into such user discussions to gain detailed insights into the various forms of hate speech and offensive language against these groups. Using a modularized framework that goes beyond the common “hate/no-hate” dichotomy in the field, we conducted a structured text annotation of 5,031 user comments posted on German news websites and social media in March 2019. Most of the hate speech we found was directed against refugees and immigrants, while other groups were mostly exposed to various forms of offensive language. In comments containing hate speech, refugees and Muslims were frequently stereotyped as criminals, whereas extreme forms of hate speech, such as calls for violence, were rare in our data. These findings are discussed with a focus on their potential consequences for public discourse on immigration in Germany.


Author(s): Svenja Schäfer, Michael Sülflow, Liane Reiners

Abstract. Previous research indicates that user comments serve as exemplars and thus have an effect on perceived public opinion. Moreover, they also shape the attitudes of their readers. However, studies exploring the consequences of user comments for attitudes and perceived public opinion focus almost exclusively on controversial issues. The current study investigates whether hate speech attacking social groups on the basis of characteristics such as religion or sexual orientation also affects the way people think about these groups and how they believe society perceives them. Moreover, we also investigated the effects of hate speech on prejudiced attitudes. To explore the hypotheses and research questions, we preregistered and conducted a 3 × 2 experimental study varying the amount of hate speech (none/few/many hateful comments) and the group that was attacked (Muslims/homosexuals). Results show no effects of the amount of hate speech on perceived public opinion for either group. However, if homosexuals are attacked, hate speech negatively affects perceived social cohesion. Moreover, for both groups, we find interaction effects between preexisting attitudes and hate speech for discriminatory demands. This indicates that hate speech can increase polarization in society.


Author(s): Lilit Bekaryan

Social networking websites have become platforms where users can not only share their photos, moments of happiness, success stories and best practices, but can also voice their criticism, discontent and negative emotions. It is interesting to follow how something that starts as a mere disagreement or a conflict over clashing interests or values can develop into a hateful exchange on Facebook that targets social media users based on their gender, religious belonging, ethnicity, sexual orientation, political convictions, etc. The present research explores how hateful posts and comments arise among Facebook users and examines the linguistic means employed in their design. The factual material was retrieved from more than ten open Facebook pages managed by popular Armenian figures, such as media experts, journalists, politicians and bloggers, in the period 2018–2020. The analysis of hate speech samples extracted from these sources shows that hate speech finds both explicit and implicit reflection in the online communication of Armenian Facebook users and can be characterised by contextual markers such as invisibility, incitement to violence, invectiveness and immediacy. The linguistic analysis of the posts and comments containing hate speech identifies features of hateful comments including an informal tone, use of the passive voice, abusive and derogatory words, rhetorical or indirectly phrased questions, generalisations and labelling.


Author(s): Vildan Mercan, Akhtar Jamil, Alaa Ali Hameed, Irfan Ahmed Magsi, Sibghatullah Bazai, ...

Author(s): Eleonora Esposito, Sole Alba Zollo

Abstract. On the occasion of the 2017 UK election campaign, Amnesty International conducted a large-scale, sentiment-based analysis of online hate speech against women MPs on Twitter (Dhrodia 2018), identifying the "Top 5" most attacked women MPs as Diane Abbott, Joanna Cherry, Emily Thornberry, Jess Phillips and Anna Soubry. Taking Amnesty International's results as a starting point, this paper investigates online misogyny against the "Top 5" women MPs, with a specific focus on the video-sharing platform YouTube, whose loosely censored cyberspace is known as a breeding ground for antagonism, impunity and disinhibition (Pihlaja 2014) and therefore merits investigation. By collecting and analysing a corpus of YouTube multimodal data, we explore, critique and contextualize online misogyny as a techno-social phenomenon, applying a Social Media Critical Discourse Studies (SM-CDS) approach (KhosraviNik and Esposito 2018). Mapping a vast array of discursive strategies, this study offers an in-depth analysis of how technology-facilitated gender-based violence contributes to discursively constructing the political arena as a fundamentally male-oriented space and reinforces stereotypical and sexist representations of women in politics and beyond.


2020, Vol. 9 (4), pp. 540–572
Author(s): Nadine Keller, Tina Askanius

An increasingly organized culture of hate is flourishing in today’s online spaces, posing a serious challenge for democratic societies. Our study seeks to unravel the workings of online hate on popular social media and assess the practices, potentialities, and limitations of organized counterspeech to stymie the spread of hate online. This article is based on a case study of an organized “troll army” of online hate speech in Germany, Reconquista Germanica, and the counterspeech initiative Reconquista Internet. Conducting a qualitative content analysis, we first unpack the strategies and stated intentions behind organized hate speech and counterspeech groups as articulated in their internal strategic documents. We then explore how and to what extent such strategies take shape in online media practices, focusing on the interplay between users spreading hate and users counterspeaking in the comment sections of German news articles on Facebook. The analysis draws on a multi-dimensional framework for studying social media engagement (Uldam & Kaun, 2019), with a focus on practices and discourses, and turns to Mouffe’s (2005) concepts of political antagonism and agonism to operationalize and deepen the discursive dimension. The study shows that the interactions between the two opposing camps are highly moralized, reflecting a post-political antagonistic battle between “good” and “evil” and showing limited signs of the potential of counterspeech to foster productive agonism. The empirical data indicate that despite the promising intentions of rule-guided counterspeech, the counter-efforts identified and scrutinized in this study predominantly fail to adhere to civic and moral standards and thus only spur on the destructive dynamics of digital hate culture.

