Insults, criminalization, and calls for violence: Forms of hate speech and offensive language in German user comments on immigration

2021
Author(s):
Sünje Paasch-Colberg
Joachim Trebbe
Christian Strippel
Martin Emmer

In the past decade, the public discourse on immigration in Germany has been strongly affected by right-wing populist, racist, and Islamophobic positions. This becomes especially evident in the comment sections of news websites and social media platforms, where user discussions often escalate and trigger hate comments against refugees and immigrants, but also against journalists, politicians, and other groups. In view of the threatening consequences such sentiments can have for groups targeted by right-wing extremist violence, we take a closer look at such user discussions to gain detailed insights into the various forms of hate speech and offensive language directed against these groups. Using a modularized framework that goes beyond the common “hate/no-hate” dichotomy in the field, we conducted a structured text annotation of 5,031 user comments posted on German news websites and social media in March 2019. Most of the hate speech we found was directed against refugees and immigrants, while other groups were mostly exposed to various forms of offensive language. In comments containing hate speech, refugees and Muslims were frequently stereotyped as criminals, whereas extreme forms of hate speech, such as calls for violence, were rare in our data. These findings are discussed with a focus on their potential consequences for public discourse on immigration in Germany.
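The abstract describes a modularized annotation of user comments rather than a binary hate/no-hate coding, but publishes no code. As a minimal, hedged sketch of what such a modularized annotation record could look like, the snippet below defines one comment with separate target-group and offense-form labels; the label sets and field names are illustrative assumptions, not the authors' actual codebook.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative label sets, paraphrased from the abstract (assumptions, not the authors' categories).
TARGET_GROUPS = {"refugees_immigrants", "muslims", "journalists", "politicians", "other"}
OFFENSE_FORMS = {"insult", "criminalization_stereotype", "call_for_violence", "other_offensive"}

@dataclass
class CommentAnnotation:
    """One user comment with modularized labels instead of a single hate/no-hate flag."""
    comment_id: str
    platform: str                       # e.g. "news_site", "facebook", "youtube"
    target_group: Optional[str] = None  # None if no group is attacked
    offense_form: Optional[str] = None  # None if the comment is not offensive

    def __post_init__(self):
        # Basic validation so only known labels enter the dataset.
        if self.target_group is not None and self.target_group not in TARGET_GROUPS:
            raise ValueError(f"unknown target group: {self.target_group}")
        if self.offense_form is not None and self.offense_form not in OFFENSE_FORMS:
            raise ValueError(f"unknown offense form: {self.offense_form}")
```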

2021
Vol 31 (2)
pp. 269-276
Author(s):  
Prashanth Bhat

The widespread dissemination of hate speech on corporate social media platforms such as Twitter, Facebook, and YouTube has forced technology companies to moderate content on their platforms. At the receiving end of these content moderation efforts are supporters of right-wing populist parties, who have gained notoriety for harassing journalists, spreading disinformation, and vilifying liberal activists. In recent months, several prominent right-wing figures across the world were removed from social media for violating platform policies, a phenomenon known as ‘deplatforming.’ Prominent among such right-wing groups are online supporters of the Hindu nationalist Bharatiya Janata Party (BJP) in India, who have begun accusing corporate social media of pursuing a ‘liberal agenda’ and ‘curtailing free speech.’ In response to deplatforming, the BJP-led Government of India has aggressively promoted and embraced Koo, an indigenously developed social media platform. This commentary examines the implications of this alternative platform for the online communicative environment of the Indian public sphere.


Gender Issues
2020
Author(s):
Zaida Orth
Michelle Andipatin
Brian van Wyk

Sexual assault on campuses has been identified as a pervasive public health problem. In April 2016, students across South African universities launched the #Endrapeculture campaign to express their frustration with university policies that served to perpetuate a rape culture. The use of hashtag activism during the protest sparked online public debates and mobilized support for the protests. This article describes the public reactions to the South African #Endrapeculture protests on the Facebook social media platform. Data were collected through natural observations of comment threads on news articles and public posts about the student protests, and subjected to content analysis. The findings suggest that the #nakedprotest was successful in initiating public conversations about rape culture. However, reactions to the #nakedprotest were divided: some comments reinforced a mainstream public discourse that perpetuates rape culture, while others (re)presented a counter-public that challenged dominant views about rape culture. Two related main themes emerged: victim-blaming and trivialising rape culture. Victim-blaming narratives suggested that the protesters were increasing their chances of being sexually assaulted by marching topless; this discourse perpetuates the notion of aggressive male sexual desire and places the onus on women to protect themselves. Other commenters criticised the #nakedprotest method through demeaning comments that served to derail the conversation and trivialise the message behind the protest. The public reaction to the #nakedprotest demonstrated that rape culture is pervasive in society and continues to be (re)produced through discourse on social media platforms. However, social media also offers individuals the opportunity to draw from and participate in multiple counter-publics that challenge these mainstream rape culture discourses.


Author(s):  
Isa Inuwa-Dutse

Conventional preventive measures during pandemics include social distancing and lockdowns. In the age of social media, such measures bring a new set of challenges, chief among them a heightened vulnerability to the toxic impact of online misinformation. COVID-19 is a case in point: as the virus propagates, so do the associated misinformation and fake news, leading to an infodemic. Since the outbreak, there has been a surge of studies investigating various aspects of the pandemic. Of interest to this chapter are studies centering on datasets from online social media platforms, where the bulk of the public discourse happens. The main goal is to support the fight against the infodemic by (1) contributing a diverse set of curated, relevant datasets; (2) proposing relevant areas of study based on these datasets; and (3) demonstrating how relevant datasets, strategies, and state-of-the-art IT tools can be leveraged in managing the pandemic.
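The chapter is about curated datasets rather than a specific codebase. As a hedged sketch under the assumption that such a curated COVID-19 social media dataset is available as a CSV with tweet_id, created_at, text, and lang columns, the snippet below shows one simple way to filter it for misinformation-related content and track daily volume; the file name, column names, and keyword list are illustrative assumptions.

```python
import pandas as pd

# Hypothetical curated dataset (assumed columns: tweet_id, created_at, text, lang).
df = pd.read_csv("covid19_tweets_curated.csv", parse_dates=["created_at"])

# Simple keyword filter as a starting point for infodemic-related analysis;
# real studies would combine this with fact-checked claim lists or classifiers.
misinfo_keywords = ["5g", "bioweapon", "plandemic", "miracle cure"]
pattern = "|".join(misinfo_keywords)
candidates = df[df["lang"].eq("en") & df["text"].str.contains(pattern, case=False, na=False)]

# Daily volume of keyword-matched posts, a rough proxy for infodemic intensity.
daily_volume = candidates.set_index("created_at").resample("D").size()
print(daily_volume.tail())
```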


Author(s):  
Lisa-Maria N. Neudert

As concerns over misinformation, political bots, and the impact of social media on public discourse manifest in Germany, this chapter explores the role of computational propaganda in and around German politics. The research sheds light on how algorithms, automation, and big data are leveraged to manipulate the German public, presenting real-time social media data and rich evidence from interviews with a wide range of German Internet experts: bot developers, policymakers, cyberwarfare specialists, victims of automated attacks, and social media moderators. In addition, the chapter examines how the ongoing public debate surrounding the threat of right-wing political currents and foreign interference in the Federal Election 2017 has created sentiments of concern and fear. Imposed regulation, multi-stakeholder actionism, and sustained media attention remain unsubstantiated by empirical findings on computational propaganda. The chapter provides an in-depth analysis of social media discourse during the German parliamentary election 2016. Pioneering the methodological assessment of the magnitude of automation and junk news, the author finds limited evidence of computational propaganda in Germany. The author concludes that the impact of computational propaganda in Germany is nonetheless substantial, promoting a dispersed civic debate, political vigilance, and restrictive countermeasures that leave a deep imprint on the freedom and openness of public discourse.
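The chapter's assessment of the magnitude of automation is not reproduced here; a common heuristic in this line of research flags accounts that post at very high frequency as potentially automated. The sketch below illustrates that kind of rule on a hypothetical tweet table; the 50-posts-per-day threshold, file name, and column names are assumptions for illustration, not the author's exact criterion.

```python
import pandas as pd

# Hypothetical tweet table (assumed columns: user_id, created_at, text).
tweets = pd.read_csv("german_election_tweets.csv", parse_dates=["created_at"])

# Posts per account per day.
tweets["day"] = tweets["created_at"].dt.date
daily_counts = (tweets.groupby(["user_id", "day"]).size()
                      .rename("posts_per_day").reset_index())

# Heuristic: accounts exceeding a high daily posting threshold are treated as
# potentially automated. The threshold of 50 is an assumption for illustration.
THRESHOLD = 50
flagged = daily_counts.loc[daily_counts["posts_per_day"] > THRESHOLD, "user_id"].unique()

share_automated = tweets["user_id"].isin(flagged).mean()
print(f"Share of traffic from high-frequency accounts: {share_automated:.1%}")
```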


2018
Author(s):  
Carsten Schwemmer

This paper investigates how right-wing movements strategically use social media to communicate with supporters. I argue that movements seek to maximize user activity on social media platforms in order to increase on-site mobilization. To examine which factors affect social media activity and how right-wing movements strategically adjust their content, I analyze the German right-wing movement Pegida, which uses Facebook to spread its anti-Islam agenda and promote events on the internet. Data from Pegida’s Facebook page are combined with news reports over a period of 18 months to measure activity on Facebook and in the public sphere simultaneously. Results of quantitative text and time series analyses show that it is not the quantity of Pegida’s posts that increases user activity, but their content. Moreover, the findings highlight a strong connection between Facebook activity and the public sphere: in times of decreasing public attention, the movement changes its social media strategy in response to exogenous shocks and increasingly resorts to radical mobilization methods.
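The paper combines Facebook data with news reports for quantitative text and time series analysis, but its model is not reproduced in the abstract. As a minimal sketch, not the author's actual specification, one could relate daily user activity to post volume, a content indicator, and lagged news coverage as below; the file name and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily panel: user comments on the page, number of posts,
# a content indicator (share of mobilization posts), and daily news coverage.
daily = pd.read_csv("pegida_daily.csv", parse_dates=["date"]).set_index("date")

# Simple OLS with lagged news coverage; the analysis in the paper is more
# elaborate (quantitative text analysis combined with time series models).
daily["news_lag1"] = daily["news_articles"].shift(1)
model = smf.ols("user_comments ~ n_posts + share_mobilization_posts + news_lag1",
                data=daily.dropna()).fit()
print(model.summary())
```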


2021
Vol 9 (1)
pp. 171-180
Author(s):
Sünje Paasch-Colberg
Christian Strippel
Joachim Trebbe
Martin Emmer

In recent debates on offensive language in participatory online spaces, the term ‘hate speech’ has become especially prominent. Originating from a legal context, the term usually refers to violent threats or expressions of prejudice against particular groups on the basis of race, religion, or sexual orientation. However, due to its explicit reference to the emotion of hate, it is also used more colloquially as a general label for any kind of negative expression. This ambiguity leads to misunderstandings in discussions about hate speech and challenges its identification. To meet this challenge, this article provides a modularized framework to differentiate various forms of hate speech and offensive language. On the basis of this framework, we present a text annotation study of 5,031 user comments on the topic of immigration and refuge posted in March 2019 on three German news sites, four Facebook pages, 13 YouTube channels, and one right-wing blog. An in-depth analysis of these comments identifies various types of hate speech and offensive language targeting immigrants and refugees. By exploring typical combinations of labeled attributes, we empirically map the variety of offensive language in the subject area ranging from insults to calls for hate crimes, going beyond the common ‘hate/no-hate’ dichotomy found in similar studies. The results are discussed with a focus on the grey area between hate speech and offensive language.
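The article explores typical combinations of labeled attributes; a straightforward way to do this on an exported annotation table is a cross-tabulation of target group against form of offensive language. The sketch below is a hedged illustration of that step, assuming a CSV with target_group and offense_form columns; it is not the authors' analysis code.

```python
import pandas as pd

# Hypothetical export of the annotations: one row per comment,
# with the labels from the modularized framework (assumed column names).
annotations = pd.read_csv("comment_annotations.csv")

# Cross-tabulate target group against the form of offensive language to see
# which combinations (e.g. refugees x criminalization) dominate the data.
combo_table = pd.crosstab(annotations["target_group"],
                          annotations["offense_form"],
                          normalize="index")  # row shares within each target group
print(combo_table.round(2))
```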


Yazykoznaniye
2021
pp. 92-118
Author(s):
Liliya Komalova

The paper provides an overview of foreign studies on the use of hate speech in the public discourse of institutional and non-institutional media and social media, based on material from the Indo-European, Finno-Ugric, and Sino-Tibetan language families. The concept of ‘hate speech’ is analyzed in its broad sense. The findings reveal thematic, discursive, and cognitive features of how hate speech is realized, the behavioral characteristics of haters, and the groups of people at whom hate speech is most often targeted.


2021
Vol 118 (50)
pp. e2116310118
Author(s):
Dominik Hangartner
Gloria Gennaro
Sary Alasiri
Nicholas Bahrich
Alexandra Bornhoft
...  

Despite heightened awareness of the detrimental impact of hate speech on social media platforms on affected communities and public discourse, there is little consensus on approaches to mitigate it. While content moderation—either by governments or social media companies—can curb online hostility, such policies may suppress valuable as well as illicit speech and might disperse rather than reduce hate speech. As an alternative strategy, an increasing number of international and nongovernmental organizations (I/NGOs) are employing counterspeech to confront and reduce online hate speech. Despite their growing popularity, there is scant experimental evidence on the effectiveness and design of counterspeech strategies (in the public domain). Modeling our interventions on current I/NGO practice, we randomly assign English-speaking Twitter users who have sent messages containing xenophobic (or racist) hate speech to one of three counterspeech strategies—empathy, warning of consequences, and humor—or a control group. Our intention-to-treat analysis of 1,350 Twitter users shows that empathy-based counterspeech messages can increase the retrospective deletion of xenophobic hate speech by 0.2 SD and reduce the prospective creation of xenophobic hate speech over a 4-wk follow-up period by 0.1 SD. We find, however, no consistent effects for strategies using humor or warning of consequences. Together, these results advance our understanding of the central role of empathy in reducing exclusionary behavior and inform the design of future counterspeech interventions.
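As a hedged illustration of the reported intention-to-treat estimates (not the authors' replication code), the standardized outcome can be regressed on indicators for the randomly assigned counterspeech arms, as sketched below. The file name, variable names, and the choice of deletion counts as the outcome are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical experiment data: one row per Twitter user, with the randomly
# assigned arm ("control", "empathy", "warning", "humor") and an outcome
# such as the number of deleted xenophobic tweets (assumed columns).
df = pd.read_csv("counterspeech_experiment.csv")

# Standardize the outcome so coefficients are in SD units, as in the abstract.
df["outcome_z"] = ((df["deleted_hate_tweets"] - df["deleted_hate_tweets"].mean())
                   / df["deleted_hate_tweets"].std())

# Intention-to-treat: compare each counterspeech arm with the control group,
# regardless of whether users actually engaged with the reply.
itt = smf.ols("outcome_z ~ C(arm, Treatment(reference='control'))",
              data=df).fit(cov_type="HC1")
print(itt.summary())
```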


AI & Society
2021
Author(s):  
Yishu Mao ◽  
Kristin Shi-Kupfer

The societal and ethical implications of artificial intelligence (AI) have sparked discussions among academics, policymakers, and the public around the world. What has gone largely unnoticed so far are the likewise vibrant discussions in China. We analyzed a large sample of discussions about AI ethics on two Chinese social media platforms. The findings suggest that participants were diverse and included scholars, IT industry actors, journalists, and members of the general public. They addressed a broad range of concerns associated with the application of AI in various fields; some even gave recommendations on how to tackle these issues. We argue that these discussions are a valuable source for understanding the future trajectory of AI development in China as well as its implications for the global dialogue on AI governance.


2019
Vol 2 (1)
pp. 17-38
Author(s):  
Martin Oliver

This paper explores the relationship between social media and political rhetoric. Social media platforms are frequently discussed in relation to ‘post-truth’ politics, but it is less clear exactly what their role is in these developments. Specifically, this paper focuses on Twitter as a case, exploring the kinds of rhetoric encouraged or discouraged on this platform. To do this, I draw on work from infrastructure studies, an area of Science and Technology Studies, and in particular on Ford and Wajcman’s analysis of the relationships between infrastructure, knowledge claims, and politics on Wikipedia. This theoretical analysis is supplemented with evidence from previous studies and from the public domain to illustrate the points made. The analysis echoes wider doubts about the credibility of technologically deterministic accounts of technology’s relationship with society, but suggests that while Twitter may not be the cause of shifts in public discourse, it is implicated in them, in that it both creates new norms for discourse and enables new forms of power and inequality to operate.

