harmful content
Recently Published Documents

TOTAL DOCUMENTS: 74 (FIVE YEARS: 45)
H-INDEX: 5 (FIVE YEARS: 1)

Suicidologi ◽ 2021 ◽ Vol 26 (2)
Author(s): Ruth Benson, Niall McTernan, Fenella Ryan, Ella Arensman

Internationally, there are indications of an increasing trend in suicide contagion and clustering, which has been associated with contemporary communication technology and continuous communication across jurisdictions. Research has indicated varying effects of different types of media and media content on suicidal behaviour. A comprehensive literature search was conducted into research addressing different types of media and media content and their impact on suicide contagion and clustering, covering January 2003 to February 2021. Across the 41 selected studies, both an increased quantity of media reports and the portrayal of specific details of suicide cases, including celebrity and fictional cases, were consistently and significantly associated with suicide contagion and with increased suicide rates or mass clusters, with the risk of contagion elevated from the first days up to the first three months following the media coverage. The impact of potentially harmful content and the portrayal of suicide and self-harm via internet sites and social media on suicide contagion and clustering was largely consistent with research into impacts involving traditional media. The findings underline the need to prioritise implementation of, and adherence to, media guidelines for reporting suicide among media professionals, online outlets and social media outlets.


2021 ◽ Vol 11 (24) ◽ pp. 11684
Author(s): Mona Khalifa A. Aljero, Nazife Dimililer

Detecting harmful content or hate speech on social media is a significant challenge due to the high throughput and large volume of content produced on these platforms. Identifying hate speech in a timely manner is crucial to preventing its dissemination. We propose a novel stacked ensemble approach for detecting hate speech in English tweets. The proposed architecture employs an ensemble of three base classifiers, namely a support vector machine (SVM), logistic regression (LR), and an XGBoost classifier (XGB), trained using word2vec and universal encoding features. The meta classifier, LR, combines the outputs of the three base classifiers with the features used by those base classifiers to produce the final output. The proposed architecture is shown to improve on widely used single classifiers as well as on standard stacking and on a classifier ensemble using majority voting. We also present results for various combinations of machine learning classifiers as base classifiers. The experimental results showed improved performance on all four datasets compared with standard stacking, the base classifiers, and majority voting. Furthermore, on three of these datasets, the proposed architecture outperformed all state-of-the-art systems.
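A minimal sketch of the stacking idea described in this abstract, assuming scikit-learn and xgboost are available; it is not the authors' implementation. The random feature matrix merely stands in for the word2vec and universal-encoding features computed from tweets, and the passthrough option mirrors the meta classifier receiving both the base-classifier outputs and the original features.

# Hedged sketch of a stacked ensemble: SVM, LR, and XGBoost base classifiers,
# with a logistic-regression meta classifier that sees both the base outputs
# and the original features (passthrough=True).
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Placeholder data: in practice X would hold per-tweet embedding vectors and
# y the hate / non-hate labels from one of the benchmark datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))
y = rng.integers(0, 2, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

base_learners = [
    ("svm", SVC(kernel="linear")),
    ("lr", LogisticRegression(max_iter=1000)),
    ("xgb", XGBClassifier(eval_metric="logloss")),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    passthrough=True,  # meta classifier also receives the original features
    cv=5,
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))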


2021 ◽ Vol 69 (6. ksz.) ◽ pp. 26-38
Author(s): Boglárka Meggyesfalvi

Social media content moderation is an important area to explore, as the number of users and the amount of content are increasing rapidly every year. As an effect of the COVID-19 pandemic, people of all ages around the world spend proportionately more time online. While the internet undeniably brings many benefits, the need for effective online policing is now even greater, as the risk of exposure to harmful content grows. The aim of this paper is to understand how harmful content - such as posts containing child sexual abuse material, terrorist propaganda or explicit violence - is policed on social media platforms, and how this could be improved. The assessment outlines the difficulties in defining and regulating the growing amount of harmful content online, including a look at relevant current legal frameworks in development. It is noted that, by the very nature of the subject, subjectivity and complexity in moderating online content will remain. Whose responsibility the management of toxic online content should be is discussed and critically analysed. It is argued that, to effectively ensure online safety, an environment should be created in which all stakeholders (including supranational organisations, states, law enforcement agencies, companies and users) maximise their participation and cooperation. Acknowledging the critical role human content moderators play in keeping social media platforms safe online spaces, considerations about their working conditions are raised. Moderators are essential stakeholders in policing harmful content (both legal and illegal); therefore, they have to be treated better for humanistic as well as practical reasons. Recommendations are outlined, such as preventing harmful content from entering social media platforms in the first place, providing moderators with better access to mental health support, and making greater use of available technological tools.


2021 ◽ Vol 10 (2) ◽ pp. 241-258
Author(s): Maryam Abu-Sharida

Harmful content is going viral on most social media platforms, with negative effects on both adults and children, especially given the increased use of social media during the Covid-19 pandemic. Therefore, harmful posts on social media should be regulated. Despite recent legislative efforts, societies still suffer from the influence of these posts. We observe that people who share harmful posts often hide behind the First Amendment right to freedom of expression under the American Constitution. This paper focuses on suggesting possible regulations to strike down harmful social media content regardless of the platform on which it is posted, in order to safeguard society from its negative effects. In addition, it highlights the attempts by Qatar's government to regulate social media crimes and aims to assess whether these efforts are enough. It also takes a general look at the situation in the United States and how it is dealing with this issue.


2021 ◽ pp. 54-60
Author(s): Alla Kerimovna Polyanina

This article analyzes the positions of state regulatory bodies on the distribution of harmful information and on measures aimed at minimizing the harm that information causes to children's health and development. The arguments of law enforcement officers and the courts contained in the texts of court rulings are examined as an implementation of formal social control over the distribution of information and as an expression of a formal position with regard to social conflict. Attention is given to the positions of state authorities on the effects of risk, their understanding of the social danger of these effects, and governmental decisions on the essence of the social conflict. The groups of harmful content and the dynamics of their identification are determined. The author reveals and classifies the motives of the distributors of harmful information and the motives of information consumers. The actual audience consuming harmful information is observed to significantly exceed the audience designated by the distributor (i.e. the addressee). Conclusions are drawn on the position of the actors of formal social control in relation to risks and their identification, mitigation, prevention and forecasting, and on the validity of the arguments of law enforcement and the courts. Failure to establish the responsibility of the actor is one of the key difficulties. The author outlines prospects for improving the mechanism for ensuring the information security of children and underlines the need to revise the principles and approaches to the interpretation of harm.


2021 ◽ Vol 1 (1)
Author(s): Olivia Borge, Victoria Cosgrove, Elena Cryst, Shelby Grossman, Shelby Perkins, ...

The suicide contagion effect posits that exposure to suicide-related content increases the likelihood of an individual engaging in suicidal behavior. Internet suicide-related queries correlate with suicide prevalence. However, suicide-related searches also lead people to access help resources. This article systematically evaluates the results returned for both general suicide terms and terms related to specific suicide means across three popular search engines (Google, Bing, and DuckDuckGo) in both English and Spanish. We find that Bing and DuckDuckGo surface harmful content more often than Google. We assess whether search engines show suicide prevention hotline information, and find that 53% of English queries have this information, compared to 13% of Spanish queries. Looking across platforms, 55% of Google queries include hotline information, compared to 35% for Bing and 10% for DuckDuckGo. Queries for specific suicide means are 20% more likely to surface harmful results on Bing and DuckDuckGo than general suicide term queries, with no difference on Google.
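As a rough illustration of how such per-engine and per-language rates could be tabulated from annotated search results, the following sketch computes the share of queries whose results page shows hotline information; the annotation schema, column names and rows are hypothetical and are not the authors' dataset or code.

# Illustrative tabulation of hotline-information rates by engine and language.
import pandas as pd

results = pd.DataFrame(
    [
        {"engine": "Google", "language": "en", "query": "q1", "shows_hotline": True},
        {"engine": "Google", "language": "es", "query": "q1", "shows_hotline": False},
        {"engine": "Bing", "language": "en", "query": "q1", "shows_hotline": True},
        {"engine": "DuckDuckGo", "language": "en", "query": "q1", "shows_hotline": False},
        # ... one row per annotated (engine, language, query) combination
    ]
)

# Percentage of queries with hotline information, per engine and per language.
print(results.groupby("engine")["shows_hotline"].mean().mul(100).round(1))
print(results.groupby("language")["shows_hotline"].mean().mul(100).round(1))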


2021 ◽ Vol 1 (1)
Author(s): Hany Farid

It is said that what happens on the internet stays on the internet, forever. In some cases this may be considered a feature. Reports of human rights violations and corporate corruption, for example, should remain part of the public record. In other cases, however, digital immortality may be considered less desirable. Most would agree that terror-related content, child sexual abuse material, non-consensual intimate imagery, and dangerous disinformation, to name a few, should not be so easily found online. Neither human moderation nor artificial intelligence is currently able to contend with the spread of harmful content. Perceptual hashing has emerged as a powerful technology to limit the redistribution of multimedia content (including audio, images, and video). We review how this technology works, its advantages and disadvantages, and how it has been deployed on small- to large-scale platforms.
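To make the idea concrete, here is a minimal sketch of one simple member of the perceptual-hashing family (a "difference hash" built with Pillow). Deployed systems such as PhotoDNA or PDQ use more robust transforms and large-scale matching infrastructure, so this is only an illustration of the principle, not the technology the article reviews.

# Minimal difference hash: downscale and grayscale the image, then encode
# whether each pixel is brighter than its right-hand neighbour. Visually
# similar images yield hashes with a small Hamming distance.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical usage: compare an upload against hashes of known harmful images
# and flag near-duplicates for review (flag_for_review is a placeholder).
# known_hashes = {dhash("known_harmful.png")}
# if any(hamming(dhash("upload.png"), h) <= 10 for h in known_hashes):
#     flag_for_review()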


2021 ◽ Vol 5 (CSCW2) ◽ pp. 1-33
Author(s): Morgan Klaus Scheuerman, Jialun Aaron Jiang, Casey Fiesler, Jed R. Brubaker

Significance: The problem of misinformation, polarisation and harmful content on social media has in recent years exposed the ineffectiveness of self-regulation by platform operators. Yet remedies are difficult to implement. One proposal is to require platforms to submit their algorithms (the ones used to promote and filter content) to independent review and audit.
Impacts: Further civil society reports on the role of social media algorithms in promoting disinformation will strengthen calls for audits. Independent audits will fail to tackle harmful content on end-to-end encrypted platforms, boosting calls to end encryption. Mainstream social media platforms face no legal obligation to stop monetising disinformation.

