Empathy-based counterspeech can reduce racist hate speech in a social media field experiment

2021 ◽  
Vol 118 (50) ◽  
pp. e2116310118
Author(s):  
Dominik Hangartner ◽  
Gloria Gennaro ◽  
Sary Alasiri ◽  
Nicholas Bahrich ◽  
Alexandra Bornhoft ◽  
...  

Despite heightened awareness of the detrimental impact of hate speech on social media platforms on affected communities and public discourse, there is little consensus on approaches to mitigate it. While content moderation—either by governments or social media companies—can curb online hostility, such policies may suppress valuable as well as illicit speech and might disperse rather than reduce hate speech. As an alternative strategy, an increasing number of international and nongovernmental organizations (I/NGOs) are employing counterspeech to confront and reduce online hate speech. Despite their growing popularity, there is scant experimental evidence on the effectiveness and design of counterspeech strategies (in the public domain). Modeling our interventions on current I/NGO practice, we randomly assign English-speaking Twitter users who have sent messages containing xenophobic (or racist) hate speech to one of three counterspeech strategies—empathy, warning of consequences, and humor—or a control group. Our intention-to-treat analysis of 1,350 Twitter users shows that empathy-based counterspeech messages can increase the retrospective deletion of xenophobic hate speech by 0.2 SD and reduce the prospective creation of xenophobic hate speech over a 4-wk follow-up period by 0.1 SD. We find, however, no consistent effects for strategies using humor or warning of consequences. Together, these results advance our understanding of the central role of empathy in reducing exclusionary behavior and inform the design of future counterspeech interventions.

2020 ◽  
Author(s):  
Aleksandra Urman ◽  
Stefania Ionescu ◽  
David Garcia ◽  
Anikó Hannák

BACKGROUND Since the beginning of the COVID-19 pandemic, scientists have been willing to share their results quickly to speed up the development of potential treatments and/or a vaccine. At the same time, traditional peer-review-based publication systems are not always able to process new research promptly. This has contributed to a surge in the number of medical preprints published since January 2020. In the absence of a vaccine, preventative measures such as social distancing are most helpful in slowing the spread of COVID-19. Their effectiveness can be undermined if the public does not comply with them. Hence, public discourse can have a direct effect on the progression of the pandemic. Research shows that social media discussions on COVID-19 are driven mainly by the findings from preprints, not peer-reviewed papers, highlighting the need to examine the ways medical preprints are shared and discussed online. OBJECTIVE We examine the patterns of medRxiv preprint sharing on Twitter to establish (1) whether the number of tweets linking to medRxiv increased with the advent of the COVID-19 pandemic; (2) which medical preprints were mentioned on Twitter most often; (3) whether medRxiv sharing patterns on Twitter exhibit political partisanship; (4) whether the discourse surrounding medical preprints among Twitter users has changed throughout the pandemic. METHODS The analysis is based on tweets (n=557,405) containing links to the medRxiv preprint repository that were posted between the creation of the repository in June 2019 and June 2020. The study relies on a combination of statistical techniques and text analysis methods. RESULTS Since January 2020, the number of tweets linking to medRxiv has increased drastically, peaking in April 2020 with a subsequent cool-down. Before the pandemic, preprints were shared predominantly by users we identify as medical professionals and scientists.
After January 2020, other users, including politically engaged ones, increasingly began tweeting about medRxiv. Our findings indicate a political divide in the sharing patterns of the 10 most-tweeted preprints: all of them were shared more frequently by users who describe themselves as Republicans than by users who describe themselves as Democrats. Finally, we observe a change in the discourse around medRxiv preprints. Pre-pandemic tweets linking to them predominantly used the word “preprint”; in February 2020, it was overtaken by the word “study”. Our analysis suggests this change is at least partially driven by politically engaged users. CONCLUSIONS Widely shared medical preprints can have a direct effect on the public discourse around COVID-19, which in turn can affect society’s willingness to comply with preventative measures. This calls for increased responsibility in dealing with medical preprints from all parties involved: scientists, preprint repositories, media, politicians, and social media companies.


Author(s):  
Michael Bossetta

State-sponsored “bad actors” increasingly weaponize social media platforms to launch cyberattacks and disinformation campaigns during elections. Social media companies, due to their rapid growth and scale, struggle to prevent the weaponization of their platforms. This study conducts an automated spear phishing and disinformation campaign on Twitter ahead of the 2018 United States midterm elections. A fake news bot account — the @DCNewsReport — was created and programmed to automatically send customized tweets with a “breaking news” link to 138 Twitter users, before being restricted by Twitter. Overall, one in five users clicked the link, which could have potentially led to the downloading of ransomware or the theft of private information. However, the link in this experiment was non-malicious and redirected users to a Google Forms survey. In predicting users’ likelihood to click the link on Twitter, no statistically significant differences were observed between right-wing and left-wing partisans, or between Web users and mobile users. The findings signal that politically expressive Americans on Twitter, regardless of their party preferences or the devices they use to access the platform, are at risk of being spear phished on social media.


Author(s):  
Soraya Chemaly

The toxicity of online interactions presents unprecedented challenges to traditional free speech norms. The scope and amplification properties of the internet give new dimension and power to hate speech, rape and death threats, and denigrating and reputation-destroying commentary. Social media companies and internet platforms, all of which regulate speech through moderation processes every day, walk the fine line between censorship and free speech with every decision they make, and they make millions a day. This chapter will explore how a lack of diversity in the tech industry affects the design and regulation of products and, in so doing, disproportionately negatively affects the free speech of traditionally marginalized people. During the past year there has been an explosion of research about, and public interest in, the tech industry’s persistent diversity problems. At the same time, the pervasiveness of online hate, harassment, and abuse has become evident. These problems come together on social media platforms that have institutionalized and automated the perspectives of privileged male experiences of speech and violence. The tech sector’s male dominance and the sex segregation and hierarchies of its workforce result in serious and harmful effects globally on women’s safety and free expression.


Subject: Advertising on social media. Significance: There is growing alignment between regulatory pressure on social media companies to suppress fake accounts and the firms' commercial interest in attracting advertisers. Advertisers, who provide the bulk of social media platforms’ revenue, are beginning to question whether they are getting value for money when their advertising budget is spent on fake clicks. Impacts: Action against fake activity on social media will cause a short-term dip in the firms’ share price. Demand will rise for 'influencers' who can show their following consists of genuine users. Some advertisers will distance themselves from social media due to the latter’s failures in tackling hate speech and polarisation.


2021 ◽  
Author(s):  
Sünje Paasch-Colberg ◽  
Joachim Trebbe ◽  
Christian Strippel ◽  
Martin Emmer

In the past decade, the public discourse on immigration in Germany has been strongly affected by right-wing populist, racist, and Islamophobic positions. This becomes evident especially in the comment sections of news websites and social media platforms, where user discussions often escalate and trigger hate comments against refugees and immigrants and also against journalists, politicians, and other groups. In view of the threatening consequences such sentiments can have for groups who are targeted by right-wing extremist violence, we take a closer look into such user discussions to gain detailed insights into the various forms of hate speech and offensive language against these groups. Using a modularized framework that goes beyond the common “hate/no-hate” dichotomy in the field, we conducted a structured text annotation of 5,031 user comments posted on German news websites and social media in March 2019. Most of the hate speech we found was directed against refugees and immigrants, while other groups were mostly exposed to various forms of offensive language. In comments containing hate speech, refugees and Muslims were frequently stereotyped as criminals, whereas extreme forms of hate speech, such as calls for violence, were rare in our data. These findings are discussed with a focus on their potential consequences for public discourse on immigration in Germany.


2020 ◽  
Author(s):  
Ethan Kaji ◽  
Maggie Bushman

BACKGROUND Adolescents with depression often turn to social media to express their feelings, to seek support, and for educational purposes. Little is known about how Reddit, a forum-based platform, compares to Twitter, a newsfeed platform, when it comes to content surrounding depression. OBJECTIVE The purpose of this study is to identify differences between Reddit and Twitter concerning how depression is discussed and represented online. METHODS A content analysis of Reddit posts and Twitter posts, using r/depression and #depression, identified signs of depression using the DSM-IV criteria. Other youth-related topics, including School, Family, and Social Activity, and the presence of medical or promotional content were also coded for. The relative frequency of each code was then compared between platforms, as was the average DSM-IV score for each platform. RESULTS A total of 102 posts were included in this study, with 53 Reddit posts and 49 Twitter posts. Findings suggest that Reddit posts show signs of depression more often (92%) than Twitter posts (24%). Medical content appeared in 28.3% of Reddit posts, compared to 18.4% of Twitter posts. Promotional content appeared in 53.1% of Twitter posts but in none of the Reddit posts. CONCLUSIONS Users with depression seem more willing to discuss their mental health on the subreddit r/depression than on Twitter. Twitter users also use #depression with a wider variety of topics, not all of which actually involve a case of depression.


2021 ◽  
pp. 016344372110158
Author(s):  
Opeyemi Akanbi

Moving beyond the current focus on the individual as the unit of analysis in the privacy paradox, this article examines the misalignment between privacy attitudes and online behaviors at the level of society as a collective. I draw on Facebook’s market performance to show how despite concerns about privacy, market structures drive user, advertiser and investor behaviors to continue to reward corporate owners of social media platforms. In this market-oriented analysis, I introduce the metaphor of elasticity to capture the responsiveness of demand for social media to the data (price) charged by social media companies. Overall, this article positions social media as inelastic, relative to privacy costs; highlights the role of the social collective in the privacy crises; and ultimately underscores the need for structural interventions in addressing privacy risks.
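The elasticity metaphor above borrows the standard elasticity-of-demand formula from economics, with the data users surrender playing the role of the price. As a point of reference (the formula below is standard economics, not taken from the article itself):

```latex
% Price elasticity of demand: percentage change in quantity demanded
% per percentage change in price. Demand is inelastic when |E| < 1.
E = \frac{\%\,\Delta Q}{\%\,\Delta P} = \frac{\mathrm{d}Q/Q}{\mathrm{d}P/P}
```

In the article's framing, inelasticity (|E| < 1) means that even substantial increases in the privacy "price" charged by platforms produce only small drops in demand for social media.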


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1332
Author(s):  
Hong Fan ◽  
Wu Du ◽  
Abdelghani Dahou ◽  
Ahmed A. Ewees ◽  
Dalia Yousri ◽  
...  

Social media has become an essential facet of modern society, wherein people share their opinions on a wide variety of topics. Social media is quickly becoming indispensable for a majority of people, and many cases of social media addiction have been documented. Social media platforms such as Twitter have demonstrated over the years the value they provide, such as connecting people from all over the world with different backgrounds. However, they have also shown harmful side effects that can have serious consequences. One such harmful side effect of social media is the immense toxicity that can be found in various discussions. The word toxic has become synonymous with online hate speech, internet trolling, and sometimes outrage culture. In this study, we build an efficient model to detect and classify toxicity in social media from user-generated content using Bidirectional Encoder Representations from Transformers (BERT). The pre-trained BERT model and three of its variants have been fine-tuned on a well-known labeled toxic comment dataset, the Kaggle public dataset from the Toxic Comment Classification Challenge. Moreover, we test the proposed models on two datasets collected from Twitter in two different periods, using hashtags related to the UK Brexit debate, to detect toxicity in user-generated content (tweets). The results showed that the proposed model can efficiently classify and analyze toxic tweets.
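The supervised setup the abstract describes — learning a toxic/non-toxic label from comment text — can be illustrated with a much simpler baseline. The sketch below uses TF-IDF features with logistic regression instead of the fine-tuned BERT model from the study, and the tiny inline dataset is purely hypothetical; it only shows the shape of the task, not the authors' method.

```python
# Illustrative toxicity classifier: TF-IDF bag-of-words + logistic regression.
# A simple baseline standing in for the fine-tuned BERT model described in the
# abstract; the labeled comments below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled comments: 1 = toxic, 0 = non-toxic.
comments = [
    "you are an idiot and everyone hates you",
    "go away, nobody wants your kind here",
    "thanks for sharing, this was really helpful",
    "interesting point, I had not considered that",
    "shut up, you worthless troll",
    "great discussion, I learned a lot today",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF turns each comment into a sparse word-weight vector;
# logistic regression then learns a linear decision boundary over it.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

predictions = model.predict([
    "you are a worthless idiot",
    "thanks, really interesting discussion",
])
print(list(predictions))
```

A BERT-based version would replace the TF-IDF vectorizer with a pre-trained transformer encoder and fine-tune it end to end on the Kaggle Toxic Comment data, but the input/output contract — raw text in, toxicity label out — is the same.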


Significance: The new rules follow a stand-off between Twitter and the central government last month over some posts and accounts. The government has used this stand-off as an opportunity not only to tighten rules governing social media, including Twitter, WhatsApp, Facebook and LinkedIn, but also those for other digital service providers, including news publishers and entertainment streaming companies. Impacts: Government moves against dominant social media platforms will boost the appeal of smaller platforms with light or no content moderation. Hate speech and harmful disinformation are especially hard to control and curb on smaller platforms. The new rules will have a chilling effect on online public discourse, increasing self-censorship (at the very least). Government action against online news media would undercut fundamental democratic freedoms and the right to dissent. Since US-based companies dominate key segments of the Indian digital market, India’s restrictive rules could mar India-US ties.


2019 ◽  
pp. 203
Author(s):  
Kent Roach

It is argued that neither the approach taken to terrorist speech in Bill C-51 nor Bill C-59 is satisfactory. A case study of the Othman Hamdan case, including his calls on the Internet for “lone wolves” “swiftly to activate,” is featured, along with the use of immigration law after his acquittal for counselling murder and other crimes. Hamdan’s acquittal suggests that the new Bill C-59 terrorist speech offence and take-down powers based on counselling terrorism offences without specifying a particular terrorism offence may not reach Hamdan’s Internet postings. One coherent response would be to repeal terrorist speech offences while making greater use of court-ordered take-downs of speech on the Internet and programs to counter violent extremism. Another coherent response would be to criminalize the promotion and advocacy of terrorist activities (as opposed to terrorist offences in general in Bill C-51 or terrorism offences without identifying a specific terrorist offence in Bill C-59) and provide for defences designed to protect fundamental freedoms such as those under section 319(3) of the Criminal Code that apply to hate speech. Unfortunately, neither Bill C-51 nor Bill C-59 pursues either of these options. The result is that speech such as Hamdan’s will continue to be subject to the vagaries of take-downs by social media companies and immigration law.

