A Feature-Based Approach to Assess Hate Speech in User Comments

Author(s):  
Liane Reiners ◽  
Christian Schemer


Author(s):
Svenja Schäfer ◽  
Michael Sülflow ◽  
Liane Reiners

Abstract. Previous research indicates that user comments serve as exemplars and thus have an effect on perceived public opinion. Moreover, they also shape the attitudes of their readers. However, studies exploring the consequences of user comments for attitudes and perceived public opinion focus almost exclusively on controversial issues. The current study examines whether hate speech attacking social groups on the basis of characteristics such as religion or sexual orientation also affects how people think about these groups and how they believe society perceives them. Moreover, we investigated the effects of hate speech on prejudiced attitudes. To test the hypotheses and research questions, we preregistered and conducted a 3 × 2 experimental study varying the amount of hate speech (none/few/many hateful comments) and the group that was attacked (Muslims/homosexuals). Results show no effects of the amount of hate speech on perceived public opinion for either group. However, when homosexuals are attacked, hate speech negatively affects perceived social cohesion. Moreover, for both groups we find interaction effects between preexisting attitudes and hate speech on discriminatory demands. This indicates that hate speech can increase polarization in society.


2021 ◽  
pp. 289-300
Author(s):  
Petar Pusonja

The paper presents research findings on the behavior of users of the social network Facebook under the circumstances of a crisis situation and the declaration of a state of emergency. By combining media content analysis, a modified netnographic approach, and pseudo-survey techniques, the author seeks to determine the extent to which, and the manner in which, the declaration of the state of emergency in the Republic of Srpska has affected its citizens. The results show that the state of emergency led to a reduction in the number of events reported, created uniformity in media content, and increased the degree to which the media rely on official sources of information. The audience, on the other hand, shows saturation with such content, either ignoring it completely or expressing dissatisfaction with the overall situation, most often sarcastically. The analysis of user comments shows that, although value-neutral, content focused on government activities provoked mostly negative comments, including hate speech and explicit vulgarity as well as, to a lesser extent, ad hominem attacks.


2021 ◽  
Vol 9 (1) ◽  
pp. 171-180
Author(s):  
Sünje Paasch-Colberg ◽  
Christian Strippel ◽  
Joachim Trebbe ◽  
Martin Emmer

In recent debates on offensive language in participatory online spaces, the term ‘hate speech’ has become especially prominent. Originating from a legal context, the term usually refers to violent threats or expressions of prejudice against particular groups on the basis of race, religion, or sexual orientation. However, due to its explicit reference to the emotion of hate, it is also used more colloquially as a general label for any kind of negative expression. This ambiguity leads to misunderstandings in discussions about hate speech and challenges its identification. To meet this challenge, this article provides a modularized framework to differentiate various forms of hate speech and offensive language. On the basis of this framework, we present a text annotation study of 5,031 user comments on the topic of immigration and refuge posted in March 2019 on three German news sites, four Facebook pages, 13 YouTube channels, and one right-wing blog. An in-depth analysis of these comments identifies various types of hate speech and offensive language targeting immigrants and refugees. By exploring typical combinations of labeled attributes, we empirically map the variety of offensive language in the subject area, ranging from insults to calls for hate crimes, going beyond the common ‘hate/no-hate’ dichotomy found in similar studies. The results are discussed with a focus on the grey area between hate speech and offensive language.
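To make the idea of modularized labels more concrete, the following minimal Python sketch represents each annotated comment as a record of separate modules and counts typical label combinations. The field names and label values are hypothetical illustrations, not the authors' actual coding scheme.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical annotation record; the fields and label values below are
# illustrative only and do not reproduce the authors' coding scheme.
@dataclass(frozen=True)
class CommentAnnotation:
    target: str           # e.g. "immigrants", "journalists"
    form: str             # e.g. "insult", "negative stereotype", "call for violence"
    is_hate_speech: bool  # module separating hate speech from other offensive language

annotations = [
    CommentAnnotation("immigrants", "negative stereotype", True),
    CommentAnnotation("journalists", "insult", False),
    CommentAnnotation("immigrants", "call for violence", True),
]

# Mapping the variety of offensive language means counting combinations of
# labels rather than collapsing everything into a hate/no-hate dichotomy.
for (target, form), n in Counter((a.target, a.form) for a in annotations).most_common():
    print(f"{target} / {form}: {n}")
```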


2021 ◽  
Vol 11 (18) ◽  
pp. 8575
Author(s):  
Sudhir Kumar Mohapatra ◽  
Srinivas Prasad ◽  
Dwiti Krishna Bebarta ◽  
Tapan Kumar Das ◽  
Kathiravan Srinivasan ◽  
...  

Hate speech on social media may spread quickly among online users and may subsequently even escalate into local violence and heinous crimes. This paper proposes a hate speech detection model based on machine learning and text-mining feature extraction techniques. The authors collected English-Odia code-mixed hate speech data from a public Facebook page and manually organized the comments into three classes. To obtain both a ternary and a binary dataset, the three-class data were additionally collapsed into two classes. Hate speech was modeled by combining machine learning algorithms with feature extraction: support vector machine (SVM), naïve Bayes (NB) and random forest (RF) models were trained on each dataset using features based on word unigrams, bigrams, trigrams, combined n-grams, term frequency-inverse document frequency (TF-IDF), combined n-grams weighted by TF-IDF, and word2vec. For each feature set, two kinds of models were thus developed: binary models and ternary models. The SVM models with word2vec achieved better performance than the NB and RF models in both the binary and ternary categories, and the ternary models showed less confusion between hate and non-hate speech than the binary models.
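As a rough, non-authoritative illustration of one of the feature/classifier combinations described above (combined n-grams weighted by TF-IDF, fed to an SVM), the following scikit-learn sketch uses invented placeholder comments rather than the authors' English-Odia dataset.

```python
# A minimal sketch of one feature/classifier combination from the study:
# combined word uni-, bi-, and trigrams weighted by TF-IDF, fed to a linear SVM.
# The comments and labels are invented placeholders, not the English-Odia data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = ["example hateful comment", "another hateful comment",
            "example neutral comment", "another neutral comment"]
labels = ["hate", "hate", "non-hate", "non-hate"]  # binary setup; a ternary setup adds a third class

X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.5, random_state=42, stratify=labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-out half
```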


Author(s):  
Katharina Esau

The variable hate speech is an indicator used to describe communication that expresses and/or promotes hatred towards others (Erjavec & Kovačič, 2012; Rosenfeld, 2012; Ziegele, Koehler, & Weber, 2018). A second element is that hate speech is directed against others on the basis of their ethnic or national origin, religion, gender, disability, sexual orientation or political conviction (Erjavec & Kovačič, 2012; Rosenfeld, 2012; Waseem & Hovy, 2016) and typically uses terms to denigrate, degrade and threaten others (Döring & Mohseni, 2020; Gagliardone, Gal, Alves, & Martínez, 2015). Hate speech and incivility are often used synonymously, as hateful speech is considered part of incivility (Ziegele et al., 2018).

Field of application/theoretical foundation: Hate speech (see also incivility) has become an issue of growing concern both in public and academic discourses on user-generated online communication.

References/combination with other methods of data collection: Hate speech is examined through content analysis and can be combined with comparative or experimental designs (Muddiman, 2017; Oz, Zheng, & Chen, 2017; Rowe, 2015). In addition, content analyses can be accompanied by interviews or surveys, for example to validate the results of the content analysis (Erjavec & Kovačič, 2012).

Example studies:

Research question/research interest: Previous studies have been interested in the extent of hate speech in online communication (e.g., in one specific online discussion, in discussions on a specific topic, or in discussions on a specific platform or across different platforms in comparison) (Döring & Mohseni, 2020; Poole, Giraud, & Quincey, 2020; Waseem & Hovy, 2016).

Object of analysis: Previous studies have investigated hate speech in user comments, for example on news websites, social media platforms (e.g., Twitter) and social live streaming services (e.g., YouTube, YouNow).

Level of analysis: Most manual content analysis studies measure hate speech at the level of a message, for example at the level of user comments. At a higher level of analysis, the level of hate speech for a whole discussion thread or online platform could be measured or estimated. At a lower level of analysis, hate speech can be measured at the level of utterances, sentences or words, which are the preferred levels of analysis in automated content analyses.

Table 1. Previous manual and automated content analysis studies and measures of hate speech

| Example study (type of content analysis) | Construct | Dimensions/variables | Explanation/example | Reliability |
|---|---|---|---|---|
| Waseem & Hovy (2016) (automated content analysis) | hate speech | sexist or racial slur | - | - |
| | | attack of a minority | - | - |
| | | silencing of a minority | - | - |
| | | criticizing of a minority without argument or straw man argument | - | - |
| | | promotion of hate speech or violent crime | - | - |
| | | misrepresentation of truth or seeking to distort views on a minority | - | - |
| | | problematic hashtags, e.g. “#BanIslam”, “#whoriental”, “#whitegenocide” | - | - |
| | | negative stereotypes of a minority | - | - |
| | | defending xenophobia or sexism | - | - |
| | | user name that is offensive, as per the previous criteria | - | - |
| | | hate speech (overall) | - | κ = .84 |
| Döring & Mohseni (2020) (manual content analysis) | hate speech | explicitly or aggressively sexual hate | e.g. “are you single, and can I lick you?” | κ = .74; PA = .99 |
| | | racist or sexist hate | e.g. “this is why ignorant whores like you belong in the fucking kitchen”, “oh my god that accent sounds like crappy American” | κ = .66; PA = .99 |
| | | hate speech (overall) | - | κ = .70 |

Note: Previous studies used different inter-coder reliability statistics; κ
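As a worked illustration of the two reliability statistics reported in Table 1, the following minimal Python sketch computes Cohen’s kappa and percentage agreement for two coders; the coding decisions are invented for demonstration.

```python
# Illustrative computation of the reliability statistics reported in Table 1;
# the coder decisions below are invented for demonstration purposes.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 1 = comment coded as hate speech
coder_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]

kappa = cohen_kappa_score(coder_a, coder_b)  # agreement corrected for chance
pa = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)  # raw percentage agreement

print(f"kappa = {kappa:.2f}, PA = {pa:.2f}")
```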
= Cohen’s kappa; PA = percentage agreement.

More coded variables with definitions used in the study by Döring & Mohseni (2020) are available at: https://osf.io/da8tw/

References

Döring, N., & Mohseni, M. R. (2020). Gendered hate speech in YouTube and YouNow comments: Results of two content analyses. SCM Studies in Communication and Media, 9(1), 62–88. https://doi.org/10.5771/2192-4007-2020-1-62

Erjavec, K., & Kovačič, M. P. (2012). “You don’t understand, this is a new war!” Analysis of hate speech in news web sites’ comments. Mass Communication and Society, 15(6), 899–920. https://doi.org/10.1080/15205436.2011.619679

Gagliardone, I., Gal, D., Alves, T., & Martínez, G. (2015). Countering online hate speech. UNESCO Series on Internet Freedom. Retrieved from http://unesdoc.unesco.org/images/0023/002332/233231e.pdf

Muddiman, A. (2017). Personal and public levels of political incivility. International Journal of Communication, 11, 3182–3202.

Oz, M., Zheng, P., & Chen, G. M. (2017). Twitter versus Facebook: Comparing incivility, impoliteness, and deliberative attributes. New Media & Society, 20(9), 3400–3419. https://doi.org/10.1177/1461444817749516

Poole, E., Giraud, E. H., & Quincey, E. de (2020). Tactical interventions in online hate speech: The case of #stopIslam. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444820903319

Rosenfeld, M. (2012). Hate speech in constitutional jurisprudence. In M. Herz & P. Molnar (Eds.), The content and context of hate speech (pp. 242–289). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139042871.018

Rowe, I. (2015). Civility 2.0: A comparative analysis of incivility in online political discussion. Information, Communication & Society, 18(2), 121–138. https://doi.org/10.1080/1369118X.2014.940365

Waseem, Z., & Hovy, D. (2016). Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In J. Andreas, E. Choi, & A. Lazaridou (Chairs), Proceedings of the NAACL Student Research Workshop.

Ziegele, M., Koehler, C., & Weber, M. (2018). Socially destructive? Effects of negative and hateful user comments on readers’ donation behavior toward refugees and homeless persons. Journal of Broadcasting & Electronic Media, 62(4), 636–653. https://doi.org/10.1080/08838151.2018.1532430


2021 ◽  
Author(s):  
Sünje Paasch-Colberg ◽  
Joachim Trebbe ◽  
Christian Strippel ◽  
Martin Emmer

In the past decade, the public discourse on immigration in Germany has been strongly affected by right-wing populist, racist, and Islamophobic positions. This becomes evident especially in the comment sections of news websites and social media platforms, where user discussions often escalate and trigger hate comments against refugees and immigrants and also against journalists, politicians, and other groups. In view of the threatening consequences such sentiments can have for groups who are targeted by right-wing extremist violence, we take a closer look into such user discussions to gain detailed insights into the various forms of hate speech and offensive language against these groups. Using a modularized framework that goes beyond the common “hate/no-hate” dichotomy in the field, we conducted a structured text annotation of 5,031 user comments posted on German news websites and social media in March 2019. Most of the hate speech we found was directed against refugees and immigrants, while other groups were mostly exposed to various forms of offensive language. In comments containing hate speech, refugees and Muslims were frequently stereotyped as criminals, whereas extreme forms of hate speech, such as calls for violence, were rare in our data. These findings are discussed with a focus on their potential consequences for public discourse on immigration in Germany.


2019 ◽  
Vol 18 (4) ◽  
pp. 575-587 ◽  
Author(s):  
Tobias Eberwein

Purpose
The idea that user comments on journalistic articles would help to increase the quality of the media has long been greeted with enthusiasm. By now, however, these high hopes have mostly evaporated. Practical experience has shown that user participation does not automatically lead to better journalism but may also result in hate speech and systematic trolling, thus having a dysfunctional impact on journalistic actors. Although empirical journalism research has made it possible to describe various kinds of disruptive follow-up communication on journalistic platforms, it has not yet succeeded in explaining what exactly drives certain users to indulge in flaming and trolling. This paper intends to fill this gap.

Design/methodology/approach
It does so on the basis of problem-centered interviews with media users who regularly publish negative comments on news websites.

Findings
The evaluation allows for a nuanced view of current phenomena of dysfunctional follow-up communication on journalistic news sites. It shows that the typical “troll” does not exist. Instead, it seems more appropriate to differentiate disruptive commenters according to their varying backgrounds and motives. Quite often, the interviewed users display a distinct political (or other) devotion to a certain cause that rather makes them appear as “warriors of faith.” However, they are united in their dissatisfaction with the quality of the (mass) media, which they attack critically and often in a harsh tone.

Originality/value
The study reflects these differences by developing a typology of dysfunctional online commenters. By helping to understand their aims and intentions, it contributes to the development of sustainable strategies for stimulating constructive user participation in a post-truth age.


Author(s):  
Marlene Kunst

Abstract. Comments sections under news articles have become popular spaces for audience members to oppose the mainstream media’s perspective on political issues by expressing alternative views. This kind of challenge to mainstream discourses is a necessary element of proper deliberation. However, due to heuristic information processing and the public concern about disinformation online, readers of comments sections may be inherently skeptical about user comments that counter the views of mainstream media. Consequently, commenters with alternative views may participate in discussions from a position of disadvantage because their contributions are scrutinized particularly critically. Nevertheless, this effect has hitherto not been empirically established. To address this gap, a multifactorial, between-subjects experimental study (N = 166) was conducted that investigated how participants assess the credibility and argument quality of media-dissonant user comments relative to media-congruent user comments. The findings revealed that media-dissonant user comments are, indeed, disadvantaged in online discussions, as they are assessed as less credible and more poorly argued than media-congruent user comments. Moreover, the findings showed that the higher the participants’ level of media trust, the worse the assessment of media-dissonant user comments relative to media-congruent user comments. Normative implications and avenues for future research are discussed.

