A Framework of Severity for Harmful Content Online

2021 ◽  
Vol 5 (CSCW2) ◽  
pp. 1-33
Author(s):  
Morgan Klaus Scheuerman ◽  
Jialun Aaron Jiang ◽  
Casey Fiesler ◽  
Jed R. Brubaker
Author(s):  
Yana Zemlyanskaya ◽  
Martina Valente ◽  
Elena V. Syurina

Abstract This mixed-methods study explored the conversation around orthorexia nervosa (ON) on Instagram from a Russian-speaking perspective. Two quantitative data sources were used: a comparative content analysis of posts tagged with #орторексия (n = 234) and #orthorexia (n = 243), and an online questionnaire completed by Russian speakers (n = 96) sharing ON-related content on Instagram. Additionally, five questionnaire participants were interviewed, four of whom identified as having (had) ON. Russian speakers who share ON-related content on Instagram are primarily female, around their late twenties, and prefer Instagram over other platforms. They describe people with ON as obsessed with correct eating, rather than healthy or clean eating. Instagram appears to have a dual effect: it has the potential both to trigger the onset of ON and to encourage recovery. Positive content encourages a healthy relationship with food, promotes intuitive eating, and spreads recovery advice. Harmful content, in turn, emphasizes specific diet and beauty ideals. Russian-speaking users mainly post pictures of food, accompanied by largely informative text that explains what ON is and what recovery may look like. Their reasons for posting ON-related content are to share personal experiences, support others in recovery, and raise awareness of ON. The two main target audiences were people unaware of ON and people seeking recovery support. The relationship between ON and social media is not strictly limited to the Global North, so it may be valuable to further investigate non-English-speaking populations currently underrepresented in ON research. Level of evidence: Level V, descriptive study.


Author(s):  
Gretel Liz De la Peña Sarracén ◽  
Paolo Rosso

Abstract The proliferation of harmful content on social media affects a large part of the user community, and several approaches have therefore emerged to control this phenomenon automatically. However, this remains quite a challenging task. In this paper, we explore offensive language as a particular case of harmful content and focus our study on the analysis of keywords in available datasets of offensive tweets. We aim to identify relevant words in those datasets and to analyze how they can affect model learning. For keyword extraction, we propose an unsupervised hybrid approach which combines the multi-head self-attention of BERT with reasoning on a word graph. The attention mechanism captures relationships among words in context while a language model is learned. These relationships are then used to generate a graph, from which we identify the most relevant words using eigenvector centrality. Experiments were performed by means of two mechanisms. On the one hand, we used an information retrieval system to evaluate the impact of the keywords in recovering offensive tweets from a dataset. On the other hand, we evaluated a keyword-based model for offensive language detection. The results highlight some points to consider when training models with the available datasets.
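The attention-to-graph idea can be made concrete with a short sketch. The snippet below (Python, using the `transformers` and `networkx` packages) averages BERT's attention weights across layers and heads, builds a word graph whose edges are weighted by those attention strengths, and ranks words by eigenvector centrality. The averaging scheme and token filtering are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal sketch of attention-based keyword extraction, assuming the
# HuggingFace `transformers` and `networkx` packages are installed.
import networkx as nx
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

def keyword_scores(text: str) -> dict:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Average attention over all layers and heads -> (seq_len, seq_len).
    att = torch.stack(outputs.attentions).mean(dim=(0, 1, 2))
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # Word graph whose edge weights are summed attention strengths;
    # special tokens and subword pieces are filtered out for simplicity.
    g = nx.Graph()
    for i, ti in enumerate(tokens):
        for j, tj in enumerate(tokens):
            if i < j and ti.isalpha() and tj.isalpha():
                g.add_edge(ti, tj, weight=float(att[i, j] + att[j, i]))
    # Eigenvector centrality ranks the most relevant words in the graph.
    return nx.eigenvector_centrality(g, weight="weight", max_iter=1000)

scores = keyword_scores("you are a disgusting idiot and everyone hates you")
print(sorted(scores.items(), key=lambda kv: -kv[1])[:5])
```

On a real dataset one would aggregate these per-tweet scores over the whole corpus before selecting keywords.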


Author(s):  
Hanae Kobayashi ◽  
Masashi Kadoguchi ◽  
Shota Hayashi ◽  
Akira Otsuka ◽  
Masaki Hashimoto

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Surender Verma ◽  
Akash Yadav

Abstract Background: Recognition of population control as an essential step for global health has prompted wide research in the area of male contraception. Although a great number of synthetic contraceptives are available on the market, they have plenty of adverse effects. Various potential strategies for male contraception, comprising hormonal, chemical, and immunological interventions, have been investigated over a long period of time, and these methods showed good antifertility results with low failure rates relative to condoms.
Main text: This review is based on the concept of herbal contraceptives, which are an effective method for controlling fertility in animals and humans. It highlights medicinal plants and plant extracts reported to possess significant antifertility action in males. The review considers plants used traditionally for their spermicidal and antispermatogenic activities and for disrupting the hormones essential to fertility, as well as plants with reported animal studies and, in some cases, human studies of antifertility effects, along with their doses, chemical constituents, and the mechanisms of action underlying those effects. The review also explains the phases of sperm formation, hormone production, and the mechanism of male contraceptives.
Conclusion: The current review may be quite useful in generating monographs on these plants and recommendations on their use. Many of the plant species listed here appear promising as effective alternative oral fertility-regulating agents in males. Significant research into the chemical and biological properties of such less-explored plants is therefore still needed to determine their contraceptive efficacy and to define their possible toxic effects, so that these ingredients can be used with confidence to regulate male fertility. New inventions in this field should concentrate on modern, more potent drugs with less harmful content that are self-administrable, inexpensive, and entirely reversible.


2021 ◽  
Vol 11 (24) ◽  
pp. 11684
Author(s):  
Mona Khalifa A. Aljero ◽  
Nazife Dimililer

Detecting harmful content or hate speech on social media is a significant challenge due to the high throughput and large volume of content produced on these platforms. Identifying hate speech in a timely manner is crucial to preventing its dissemination. We propose a novel stacked ensemble approach for detecting hate speech in English tweets. The proposed architecture employs an ensemble of three base classifiers, namely a support vector machine (SVM), logistic regression (LR), and an XGBoost classifier (XGB), trained using word2vec and universal encoding features. The meta classifier, LR, combines the outputs of the three base classifiers with the features employed by the base classifiers to produce the final output. We show that the proposed architecture improves on the performance of the widely used single classifiers as well as on standard stacking and a majority-voting classifier ensemble. We also present results for various combinations of machine learning classifiers as base classifiers. The experimental results showed improved performance on all four datasets compared with standard stacking, the base classifiers, and majority voting. Furthermore, on three of these datasets, the proposed architecture outperformed all state-of-the-art systems.
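For readers who want to experiment with this kind of architecture, the sketch below builds a stacked ensemble in scikit-learn with an SVM, logistic regression, and XGBoost as base classifiers and logistic regression as the meta classifier; `passthrough=True` feeds the meta classifier the original features alongside the base-classifier outputs, as in the architecture described above. The TF-IDF features and toy tweets are stand-ins for the paper's word2vec and universal encoding features, so treat this as illustrative rather than a reproduction.

```python
# A minimal stacked-ensemble sketch, assuming scikit-learn and xgboost.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Toy data; a real experiment would use the labelled hate-speech datasets.
tweets = ["I hate you and your kind", "go back where you came from",
          "you people are vermin", "what a lovely day",
          "great match last night", "see you at lunch"]
labels = [1, 1, 1, 0, 0, 0]

base = [
    ("svm", SVC()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("xgb", XGBClassifier()),
]
# passthrough=True gives the meta classifier both the base-classifier
# predictions and the original feature vectors.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000),
                           passthrough=True, cv=3)
model = make_pipeline(TfidfVectorizer(), stack)
model.fit(tweets, labels)
print(model.predict(["hope you all have a lovely day"]))
```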


In this chapter, the authors present an application for Android smartphones that automatically detects possibly harmful content in input text. The application was developed to test in practice the performance of the cyberbullying detection methods described in previous chapters. Its final goal is to help mitigate the problem of cyberbullying by quickly detecting possibly harmful content in a user's entry and warning the user of its potential negative influence. The test application uses one of two methods for detecting harmful messages: a method inspired by a brute-force search algorithm applied to language modelling, and a method which uses seed words from three categories to calculate the semantic orientation score SO-PMI-IR and then maximizes the relevance of those categories to determine the harmfulness of a message (both methods were described in previous chapters). First tests showed that both methods work properly in the Android environment.
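As a rough illustration of the second method, the sketch below computes an SO-PMI-style relevance score for each seed category from co-occurrence counts in a local corpus and labels a message by the category with maximum relevance. The classic SO-PMI-IR formulation uses search-engine hit counts, and the chapter's exact seed words and scoring are not reproduced here; the seed lists, smoothing constant, and corpus-based counting below are all assumptions made for brevity.

```python
# A minimal, corpus-based sketch of category relevance via an SO-PMI-style
# ratio; hypothetical seed words, not the chapter's actual lexicon.
import math

SEEDS = {
    "abusive": ["idiot", "stupid", "ugly"],
    "violent": ["kill", "die", "hurt"],
    "obscene": ["filth", "trash", "scum"],
}

def category_scores(message: str, corpus: list) -> dict:
    """Smoothed log-ratio of co-occurrence between message words and seeds."""
    msg = set(message.lower().split())
    counts = {cat: 0.01 for cat in SEEDS}   # sentences with seed + message word
    totals = {cat: 0.01 for cat in SEEDS}   # sentences with any seed word
    for sentence in corpus:
        tokens = set(sentence.lower().split())
        for cat, seeds in SEEDS.items():
            if any(w in tokens for w in seeds):
                totals[cat] += 1
                if msg & tokens:
                    counts[cat] += 1
    return {cat: math.log2(counts[cat] / totals[cat]) for cat in SEEDS}

def harmfulness(message: str, corpus: list) -> str:
    """Maximize category relevance to label the message, as described above."""
    scores = category_scores(message, corpus)
    return max(scores, key=scores.get)
```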


2019 ◽  
Vol 22 (1) ◽  
pp. 69-80 ◽  
Author(s):  
Stefanie Ullmann ◽  
Marcus Tomalin

Abstract In this paper, we explore quarantining as a more ethical method for delimiting the spread of Hate Speech via online social media platforms. Currently, companies like Facebook, Twitter, and Google generally respond reactively to such material: offensive messages that have already been posted are reviewed by human moderators if complaints from users are received, and the offensive posts are removed only if the complaints are upheld; by then, they have already caused the recipients psychological harm. In addition, this approach has frequently been criticised for delimiting freedom of expression, since it requires the service providers to elaborate and implement censorship regimes. In the last few years, an emerging generation of automatic Hate Speech detection systems has started to offer new strategies for dealing with this particular kind of offensive online material. Anticipating the future efficacy of such systems, the present article advocates an approach to online Hate Speech detection that is analogous to the quarantining of malicious computer software. If a given post is automatically and reliably classified as harmful, it can be temporarily quarantined, and the direct recipients can receive an alert, which protects them from the harmful content in the first instance. The quarantining framework is an example of more ethical online safety technology that can be extended to the handling of Hate Speech. Crucially, it provides flexible options for obtaining a more justifiable balance between freedom of expression and appropriate censorship.
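As a sketch of the workflow being advocated, the snippet below quarantines any incoming post whose classifier confidence exceeds a threshold and alerts the recipient instead of displaying it. The `classify` placeholder, the threshold value, and the inbox structure are all assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the quarantining workflow, with placeholder components.
from dataclasses import dataclass, field

THRESHOLD = 0.9  # assumed confidence above which a post is quarantined

@dataclass
class Inbox:
    visible: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)

def classify(text: str) -> float:
    """Placeholder Hate Speech classifier returning P(harmful)."""
    return 0.95 if "hate" in text.lower() else 0.05

def deliver(post: str, inbox: Inbox) -> None:
    score = classify(post)
    if score >= THRESHOLD:
        # Hold the post and alert the recipient instead of showing it,
        # protecting them from the harmful content in the first instance.
        inbox.quarantined.append(post)
        print(f"Alert: a message was quarantined (confidence {score:.2f}). "
              "You may view or discard it.")
    else:
        inbox.visible.append(post)

inbox = Inbox()
deliver("I hate you", inbox)
deliver("See you tomorrow!", inbox)
print(inbox.visible, inbox.quarantined)
```

The key design point is that the recipient, not the platform alone, decides whether a quarantined post is ever seen, which is what gives the framework its flexibility between free expression and censorship.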


Author(s):  
Robert Gorwa

Online intermediaries have always been regulated, locked in heated battles around intermediary liability for copyright or privacy reasons (Tusikov, 2016; Gorwa, 2019). But a notable trend is the rapidly growing use of policy to try to govern user-generated content associated with a host of other perceived social or individual harms, such as disinformation, hate speech, and terrorist propaganda (Kaye, 2019; York, 2019; Suzor, 2019). Even as increasing academic and policy attention is paid to the global 'techlash', and leading voices outline the various ways in which online expression is currently under threat, our understanding of the overall policy landscape remains ad hoc and incomplete. The goal of this paper is thus to present some initial observations on the state of harmful content regulation around the world, drawing upon a new original dataset that seeks to capture the global universe of regulatory initiatives aimed at harmful user-generated content online. The first part of the paper presents descriptive results, showing the evolution of (and notable increase in) policy development over the past two decades. The second half of the paper provides insight into which specific issue areas have attracted the most formal and informal regulatory arrangements, and assesses the scope (what kinds of actors are seen as 'platforms', and how that is defined), key policy mechanisms (takedown regimes, transparency rules, technical standards), and sanctioning procedures (fines, criminal liability) enacted in these regulations.

