Policing harmful content on social media platforms

2021 ◽  
Vol 69 (Special Issue 6) ◽
pp. 26-38
Author(s):  
Boglárka Meggyesfalvi

Social media content moderation is an important area to explore, as the number of users and the amount of content increase rapidly every year. As an effect of the COVID-19 pandemic, people of all ages around the world spend proportionately more time online. While the internet undeniably brings many benefits, the need for effective online policing is greater than ever, as the risk of exposure to harmful content grows. This paper aims to understand how harmful content, such as posts containing child sexual abuse material, terrorist propaganda or explicit violence, is policed on social media platforms, and how that policing could be improved. The assessment outlines the difficulties in defining and regulating the growing amount of harmful content online, including a look at relevant legal frameworks currently in development. It is noted that, by the very nature of the subject, subjectivity and complexity in moderating online content will remain. The question of who should be responsible for managing toxic online content is discussed and critically analysed. It is argued that, to effectively ensure online safety, an environment should be created in which all stakeholders (including supranational organisations, states, law enforcement agencies, companies and users) maximise their participation and cooperation. Acknowledging the critical role human content moderators play in keeping social media platforms safe online spaces, considerations about their working conditions are raised. Moderators are essential stakeholders in policing harmful content, both legal and illegal; they therefore have to be treated better, for both humane and practical reasons. Recommendations are outlined, such as preventing harmful content from entering social media platforms in the first place, giving moderators better access to mental health support, and making greater use of available technological tools.

2020 ◽  
Author(s):  
Piper Vornholt ◽  
Munmun De Choudhury

BACKGROUND: Mental illness is a growing concern on many college campuses. Limited access to therapy resources, along with the fear of stigma, often prevents students from seeking help. Introducing supportive interventions, coping strategies, and mitigation programs might decrease the negative effects of mental illness among college students.

OBJECTIVE: Many college students find social support for a variety of needs through social media platforms. Given the pervasive adoption of social media sites in college populations, in this study we examine whether and how these platforms may help meet college students' mental health needs.

METHODS: We first conducted a survey among 101 students at a large public university in the southeast region of the United States, followed by semistructured interviews (n=11), to understand whether, to what extent, and how students appropriate social media platforms to suit their struggle with mental health concerns. The interviews were intended to provide comprehensive information on students' attitudes and their perceived benefits and limitations of social media as platforms for mental health support.

RESULTS: Our survey revealed that a large number of participating students (71/101, 70.3%) had recently experienced some form of stress, anxiety, or other mental health challenges related to college life. Half of them (52/101, 51.5%) also reported having appropriated some social media platforms for self-disclosure or help, indicating the pervasiveness of this practice. Through our interviews, we obtained deeper insights into these initial observations. We identified specific academic, personal, and social life stressors; motivations behind social media use for mental health needs; and specific platform affordances that helped or hindered this use.

CONCLUSIONS: Students recognized the benefits of social media in helping them connect with peers on campus and promoting informal and candid disclosures. However, they argued against complete anonymity in platforms for mental health help and advocated the need for privacy and boundary regulation mechanisms in social media platforms supporting this use. Our findings bear implications for informing campus counseling efforts and for designing social media-based mental health support tools for college students.


AJIL Unbound ◽  
2018 ◽  
Vol 112 ◽  
pp. 329-333
Author(s):  
Molly K. Land

Using the example of harmful speech online, this essay argues that duties to others—a core component of our humanness—require us to consider the impact our speech has on those who hear it. The widening availability of tools for sharing information and the rise of social media have opened up new avenues for individuals to communicate without the need for journalistic intermediaries. While this presents considerable opportunities for expression, it also means that there are fewer filters in place to manage the harmful effects of speech. Moreover, the structure of online spaces and the uneven legal frameworks that regulate them have exacerbated the effects of harmful speech, allowing mob behavior, harassment, and virtual violence, particularly against minority populations and other vulnerable groups.


2021 ◽  
Vol 3 ◽  
Author(s):  
Claudette Pretorius ◽  
David Coyle

Young adulthood represents a sensitive period for young people's mental health. The lockdown restrictions associated with the COVID-19 pandemic have reduced young people's access to traditional sources of mental health support. This exploratory study aimed to investigate the online resources young people were using to support their mental health during the first lockdown period in Ireland. It made use of an anonymous online survey targeted at young people aged 18–25. Participants were recruited using ads on social media including Facebook, Twitter, Instagram, and Snapchat. A total of 393 respondents completed the survey. Many of the respondents indicated that they were using social media (51.4%, 202/393) and mental health apps (32.6%, 128/393) as sources of support. Fewer were making use of formal online resources such as charities (26%, 102/393) or professional counseling services (13.2%, 52/393). Different social media platforms were used for different purposes; Facebook was used for support groups whilst Instagram was used to engage with influencers who focused on mental health issues. Google search, recommendations from peers and prior knowledge of services played a role in how resources were located. Findings from this survey indicate that digital technologies and online resources have an important role to play in supporting young people's mental health. The COVID-19 pandemic has highlighted these digital tools' potential as well as how they can be improved to better meet young people's needs.


2019 ◽  
Vol 46 (2_suppl) ◽  
pp. 124S-128S ◽  
Author(s):  
Robert S. Gold ◽  
M. Elaine Auld ◽  
Lorien C. Abroms ◽  
Joseph Smyser ◽  
Elad Yom-Tov ◽  
...  

Despite widespread use of the Internet and social media platforms by the public, there has been little organized exchange of information among the academic, government, and technology sectors about how digital communication technologies can be maximized to improve public health. The second Digital Health Promotion Executive Leadership Summit convened some of the world’s leading thinkers from across these sectors to revisit how communication technology and the evolving social media platforms can be utilized to improve both individual and population health. The Summit focused on digital intelligence, the spread of misinformation, online patient communities, censorship in social media, and emerging global legal frameworks. In addition, Summit participants had an opportunity to review the original “Common Agenda” that emerged and was published after the inaugural Summit and recommend updates regarding the uses of digital technology for advancing the goals of public health. This article reports the outcomes of the Summit discussions and presents the updates that were recommended by Summit participants as the Digital Health Communication Common Agenda 2.0. Several of the assertions underlying the original Common Agenda have been modified, and several new assertions have been added to reflect the recommendations. In addition, a corresponding set of principles and related actions—including a recommendation that an interagency panel of the U.S. Department of Health and Human Services be established to focus on digital health communication, with particular attention to social media—have been modified or supplemented.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Dino A. Villegas ◽  
Alejandra Marin Marin

Purpose: This paper aims to explore different strategies used by brands to target the Hispanic market via social media, through the lens of the Spanish language in a multicultural country like the USA.

Design/methodology/approach: This study uses a netnographic approach, drawing information from a study of the Facebook pages of 11 brands belonging to different industries.

Findings: Companies engage in four levels of cultural identity adaptation using different strategies based on ethnicity: language adaptation, identity elements, identity matching and Latino persona. The study also shows that merely translating Facebook pages does not generate high levels of communitarian interaction.

Practical implications: This study examines different strategies used by brands in the USA to target the Hispanic audience on social media, providing insights for brand managers seeking to develop online engagement.

Originality/value: With the increase in cultural diversity in different countries and the rise of social media platforms, brand researchers need to better understand how cultural identity permeates marketing strategies in online spaces. Social media platforms such as Facebook offer flexible environments where strategies beyond product- and brand-related aspects can be used. This study extends the literature by showing the heterogeneity of cultural identity-based strategies used by companies to ensure customer engagement and brand loyalty, and the impact of such strategies on users.


Author(s):  
Isha Y. Agarwal ◽  
Dipti P. Rana ◽  
Devanshi Bhatia ◽  
Jay Rathod ◽  
Kaneesha J. Gandhi ◽  
...  

Social media has completely transformed the way people communicate. However, every revolution brings with it some negative impacts. Owing to their popularity among an enormous number of global users, these platforms hold a huge volume of data. The ease of access, with minimal verification of new users, has led to the creation of bot accounts that are used to collect private data and spread false and harmful content, and that pose many security threats. Many concerns have been raised about the growing number of bot accounts on different social media platforms. There is also a high imbalance between bot and non-bot accounts, an imbalance that results from the 'normal behavior' of bot users. This research aims to identify artificial bot accounts on Twitter using various machine learning algorithms and content-based classification, applied to profile features provided by the platform and to users' recent tweets, respectively.
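
Since the abstract describes feature-based classification of bot accounts under a heavy class imbalance, a minimal sketch of that setup follows. The profile features (follower count, friend count, tweet rate, account age) and the synthetic labels are hypothetical stand-ins, not the authors' dataset, and class_weight='balanced' is shown as one common way to handle the imbalance, not necessarily the authors' method.

```python
# Sketch: feature-based bot classification with an imbalanced label set.
# All features and labels are synthetic placeholders for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Hypothetical profile features: followers, friends, tweets/day, account age.
X = np.column_stack([
    rng.lognormal(5.0, 2.0, n),    # follower_count
    rng.lognormal(5.0, 1.5, n),    # friend_count
    rng.gamma(2.0, 3.0, n),        # tweets_per_day
    rng.uniform(1.0, 4000.0, n),   # account_age_days
])
# Imbalanced labels: roughly 5% bots, mirroring the skew described above.
y = (rng.random(n) < 0.05).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# class_weight="balanced" reweights the minority (bot) class during training.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["human", "bot"]))
```

In practice the two signal sources the abstract names would be combined: profile features feed a model like the one above, while recent tweets feed a text classifier whose output can be added as a further feature.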


Author(s):  
Scott Burnett ◽  
Fotini P. Moura Trancoso

Social media platforms are under increasing pressure to counter racist and other extremist discourses online. The perceived "independence" of platforms such as YouTube has attracted AltRight "micro-celebrities" (Lewis, 2020) who build alternative networks of influence. This paper examines how the discourse of one online AltRight "manfluencer" responds to tightening controls over allowable speech. We present an analysis of the YouTube channel of the Swedish far right bodybuilder and motivational speaker Marcus Follin, or "The Golden One". His specific approach to politics combines fitspiration, motivational speaking, and other neoliberal technologies of the self that in his ideology come together as a call to defend white motherlands and to join hands between European nations in a fight against globalism and multiculturalism. Through a post-foundational discourse analysis of a corpus of 40 videos, we identify three prominent strategies that he uses to respond to increased control of online spaces. The first is to increase cultural encryption, constructing social media as territories in a "metapolitical" war that will be won culturally. The second is partial articulation, whereby he stays focused on positive messages and explains his ideology as being about love, not hate. The third is migration, diversification, and new platform-specific foci, through which he finds new and 'independent' online spaces and builds new audiences. We conclude that more nuanced understandings are needed of how far right ideologies might thrive and build resilience in response to pressure on their speech.


First Monday ◽  
2018 ◽  
Author(s):  
Sarah T. Roberts

The late 2016 case of the Facebook content moderation controversy over the infamous Vietnam-era photo, "The Terror of War," is examined in this paper both for its specifics and as a mechanism to engage in a larger discussion of the politics and economics of the content moderation of user-generated content. In the context of mainstream commercial social media platforms, obfuscation and secrecy work together to form an operating logic of opacity, a term and concept introduced in this paper. The lack of clarity around platform policies, procedures and the values that inform them leads users to wildly different interpretations of the user experience on the same site, resulting in confusion caused in no small part by the platforms' own design. Platforms operationalize their content moderation practices under a complex web of nebulous rules and procedural opacity, while governments and other actors clamor for tighter controls on some material, and other members of civil society demand greater freedoms for online expression. Few parties acknowledge the fact that mainstream social media platforms are already highly regulated, albeit rarely in a way that is satisfactory to all. The final turn in the paper connects the functions of the commercial content moderation process on social media platforms like Facebook to their output, being either the content that appears on a site or the content that is rescinded: digital detritus. While the meaning and intent of user-generated content may often be imagined to be the most important factors by which content is evaluated for a site, this paper argues that its value to the platform as a potentially revenue-generating commodity is actually the key criterion, and the one to which all moderation decisions are ultimately reduced. The result is commercialized online spaces that have far less to offer in terms of political and democratic challenge to the status quo and which, in fact, may serve to reify and consolidate power rather than confront it.


Author(s):  
Caitlin Cosper

Interactions on social media platforms are becoming increasingly relevant from an identity construction perspective. Conflict speech, in particular, is a form of interaction that is especially common in online spaces and that constructs identity through polarization, strengthening the in-group while deemphasizing the out-group. The young adult feminist identity has established a strong presence in online spaces, specifically on the microblogging platform Tumblr. This study seeks to analyze the role of conflict speech in young adult feminist identity construction by focusing on the recontextualization of comments and on name-calling strategies. Within this analysis, it is possible to determine the importance of conflict speech as it strengthens the collective feminist identity while allowing those in the in-group to exclude and dismiss conflictual comments stemming from those in the out-group.


2019 ◽  
Vol 2019 ◽  
Author(s):  
Luke Stark ◽  
Jesse Hoey

Computational analyses of data pertaining to human emotional expression have a surprisingly long history and an increasingly critical role in social machine learning (ML) and artificial intelligence (AI) applications. Contemporary, quotidian, narrow AI/ML technologies are most frequently used by social media platforms for modeling and predicting human emotional expression as signals of interpersonal interaction and personal preference. Yet while the ethical and social impacts of ML/AI systems have of late become major topics of both public discussion and academic debate, the ethical dimensions of AI/ML analytics for emotional expression have been under-theorized in these conversations. In this paper, we connect contemporary technical methods for analyzing emotional expression via AI/ML with extant problems in the ethics of AI discourse, in doing so highlighting tensions within that broader discourse and implications for the application of emotion analysis in practice.
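
To make concrete the kind of emotion-expression analytics the abstract refers to, here is a deliberately minimal, hypothetical sketch of lexicon-based valence scoring, the simplest ancestor of these methods. The word list and weights are invented for illustration; the platform-scale systems the paper discusses rely on trained ML models rather than hand-built lexicons.

```python
# Hypothetical lexicon-based valence scorer; words and weights are invented.
from collections import Counter

VALENCE = {"love": 0.9, "happy": 0.8, "great": 0.7,
           "sad": -0.6, "awful": -0.8, "hate": -0.9}

def emotion_score(text: str) -> float:
    """Average valence of known words in `text`; 0.0 if none appear."""
    words = Counter(w.strip(".,!?").lower() for w in text.split())
    known = {w: c for w, c in words.items() if w in VALENCE}
    total = sum(known.values())
    return sum(VALENCE[w] * c for w, c in known.items()) / max(1, total)

print(emotion_score("I love this, it is great!"))    # ~0.8 (positive)
print(emotion_score("This is awful and I hate it"))  # ~-0.85 (negative)
```

Even this toy example surfaces one of the ethical tensions the paper raises: the mapping from words to emotion scores encodes its designers' assumptions about what expressions mean.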

