A Review on the Detection of Offensive Content in Social Media Platforms

2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Solomon Akinboro ◽  
Oluwadamilola Adebusoye ◽  
Akintoye Onamade

Offensive content refers to messages that are socially unacceptable, including vulgar or derogatory messages. As the use of social media grows worldwide, platform administrators face the challenge of curbing offensive content to ensure clean, non-abusive conversations on the platforms they provide. This work organizes and describes recent techniques for the automated detection of offensive language in social media content, providing a structured overview of previous approaches, including the algorithms, methods, and main features used. Studies were selected from peer-reviewed articles on Google Scholar. Search terms included: profane words, natural language processing, multilingual context, hybrid methods for detecting profane words, and deep learning approaches for detecting profane words. Exclusions were applied according to defined criteria. The initial search returned 203 studies, of which only 40 met the inclusion criteria: 6 addressed natural language processing, 6 deep learning approaches, 5 analysed hybrid approaches, 13 covered multi-level or multilingual classification, and 10 reported other related methods. The limitations of previous efforts to detect offensive content are highlighted to aid future research in this area. Keywords— algorithm, offensive content, profane words, social media, texts
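The simplest baseline among the families of approaches this review surveys is lexicon matching with obfuscation normalization. The sketch below illustrates the idea only; the word list and character-substitution map are illustrative assumptions, not taken from any surveyed study, and real systems use far larger curated lexicons or learned models.

```python
import re

# Illustrative (hypothetical) profanity lexicon and leetspeak map;
# surveyed systems use much larger curated lists or trained classifiers.
PROFANE = {"idiot", "stupid", "trash"}
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

def normalize(token: str) -> str:
    """Lowercase, undo common character substitutions, collapse long repeats."""
    token = token.lower().translate(LEET)
    return re.sub(r"(.)\1{2,}", r"\1", token)  # "stuuupid" -> "stupid"

def is_offensive(text: str) -> bool:
    """Flag a message if any normalized token appears in the lexicon."""
    tokens = re.findall(r"[\w@$]+", text)
    return any(normalize(t) in PROFANE for t in tokens)
```

Normalization is what separates this from naive keyword search: it catches simple evasions such as "5tup1d" or "stuuupid" that a literal string match would miss.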

2017 ◽  
Vol 24 (4) ◽  
pp. 813-821 ◽  
Author(s):  
Anne Cocos ◽  
Alexander G Fiks ◽  
Aaron J Masino

Abstract Objective Social media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media. Materials and Methods We developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training. Results Our best-performing RNN model used pretrained word embeddings created from a large, non–domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision. Discussion Our model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models. Conclusions ADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets.
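The paper reports an approximate-match F-measure, i.e. span matching relaxed to overlap rather than exact boundaries. A minimal sketch of one common overlap-based scoring scheme is below; the `(start, end)` token-span representation and the specific overlap criterion are assumptions, not necessarily the exact protocol used in the study.

```python
def spans_overlap(a, b):
    """True if two (start, end) token spans share at least one token."""
    return a[0] < b[1] and b[0] < a[1]

def approximate_match_f1(gold, pred):
    """Approximate-match P/R/F1: a predicted span counts as a hit if it
    overlaps any gold ADR span, and a gold span is covered if any
    prediction overlaps it (a common relaxation of exact matching)."""
    tp_pred = sum(any(spans_overlap(p, g) for g in gold) for p in pred)
    tp_gold = sum(any(spans_overlap(g, p) for p in pred) for g in gold)
    precision = tp_pred / len(pred) if pred else 0.0
    recall = tp_gold / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Relaxed matching credits a model that tags "stomach pain" when the gold annotation is "severe stomach pain", which is why approximate-match scores run higher than exact-match ones.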


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Arlene Casey ◽  
Emma Davidson ◽  
Michael Poon ◽  
Hang Dong ◽  
Daniel Duma ◽  
...  

Abstract Background Natural language processing (NLP) has a significant role in advancing healthcare and has been found to be key in extracting structured information from radiology reports. Understanding recent developments in NLP application to radiology is of significance but recent reviews on this are limited. This study systematically assesses and quantifies recent literature in NLP applied to radiology reports. Methods We conduct an automated literature search yielding 4836 results using automated filtering, metadata enriching steps and citation search combined with manual review. Our analysis is based on 21 variables including radiology characteristics, NLP methodology, performance, study, and clinical application characteristics. Results We present a comprehensive analysis of the 164 publications retrieved, with publications in 2019 almost triple those in 2015. Each publication is categorised into one of 6 clinical application categories. Deep learning use increases over the period, but conventional machine learning approaches are still prevalent. Deep learning remains challenged when data is scarce, and there is little evidence of adoption into clinical practice. Despite 17% of studies reporting F1 scores greater than 0.85, it is hard to evaluate these approaches comparatively given that most of them use different datasets. Only 14 studies made their data available and 15 their code, with just 10 externally validating their results. Conclusions Automated understanding of the clinical narratives in radiology reports has the potential to enhance the healthcare process, and we show that research in this field continues to grow. Reproducibility and explainability of models are important if the domain is to move applications into clinical use. More could be done to share code, enabling validation of methods on different institutional data, and to reduce heterogeneity in the reporting of study properties, allowing inter-study comparisons. Our results have significance for researchers in the field, providing a systematic synthesis of existing work to build on, identifying gaps and opportunities for collaboration, and helping avoid duplication.
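The review's pipeline includes automated filtering and deduplication of thousands of search results before manual review. A minimal sketch of that kind of step is below; the record schema (`doi`, `title`, `year` keys) and the fallback to normalized titles are assumptions for illustration, not the review's actual implementation.

```python
def dedupe_and_filter(records, year_from=2015, year_to=2019):
    """Drop duplicate records (keyed by DOI, falling back to a normalized
    title when the DOI is missing) and keep only records published inside
    the review window."""
    seen, kept = set(), []
    for rec in records:
        key = rec.get("doi") or rec["title"].lower().strip()
        if key in seen:
            continue  # duplicate hit from a second database or citation search
        seen.add(key)
        if year_from <= rec["year"] <= year_to:
            kept.append(rec)
    return kept
```

Keying on DOI first avoids false merges between distinct papers with similar titles, while the title fallback still catches duplicates from sources that omit DOIs.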


2021 ◽  
Author(s):  
Benjamin Joseph Ricard ◽  
Saeed Hassanpour

BACKGROUND Many social media studies have explored the ability of thematic structures, such as hashtags and subreddits, to identify information related to a wide variety of mental health disorders. However, studies and models trained on specific themed communities are often difficult to apply to different social media platforms and related outcomes. A deep learning framework using thematic structures from Reddit and Twitter can have distinct advantages for studying alcohol abuse, particularly among the youth, in the United States. OBJECTIVE This study proposes a new deep learning pipeline that uses thematic structures to identify alcohol-related content across different platforms. We applied our method on Twitter to determine the association between the prevalence of alcohol-related tweets and alcohol-related outcomes reported by the National Institute on Alcohol Abuse and Alcoholism (NIAAA), the Centers for Disease Control Behavioral Risk Factor Surveillance System (CDC BRFSS), County Health Rankings, and the North American Industry Classification System (NAICS). METHODS A Bidirectional Encoder Representations from Transformers (BERT) neural network learned to classify 1,302,524 Reddit posts as originating from either alcohol-related or control subreddits. The trained model identified 24 alcohol-related hashtags from an unlabeled dataset of 843,769 random tweets. Querying these alcohol-related hashtags identified 25,558,846 alcohol-related tweets, including 790,544 location-specific (geotagged) tweets. We calculated the correlation of the prevalence of alcohol-related tweets with alcohol-related outcomes, controlling for confounding effects of age, sex, income, education, and self-reported race, as recorded by the 2013-2018 American Community Survey (ACS). RESULTS The novel natural language processing pipeline, developed using Reddit alcohol-related subreddits, identifies highly specific alcohol-related Twitter hashtags. The prevalence of the identified hashtags contains interpretable information about alcohol consumption at both coarse (e.g., U.S. state) and fine-grained (e.g., MMSA, county) geographical designations. CONCLUSIONS This approach can expand research and interventions on alcohol abuse and other behavioral health outcomes.
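The study correlates tweet prevalence with alcohol-related outcomes while controlling for demographic confounders. One standard way to do this is a partial correlation: regress both variables on the covariates and correlate the residuals. The sketch below is a minimal NumPy version of that idea, not the study's actual statistical code.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation between x and y after regressing both on the
    covariates (plus an intercept) - a simple way to 'control for'
    confounders such as age, sex, income, or education."""
    X = np.column_stack([np.ones(len(x)), covariates])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```

If a confounder drives both tweet prevalence and the outcome, the raw correlation is inflated; the residual correlation reflects only the association left after the covariates' linear effects are removed.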


Author(s):  
Suvigya Jain

Abstract: The stock market has always been one of the most active fields of research; many companies and organizations have focused on finding better ways to predict market trends. The stock market is an instrument for measuring the performance of a company, and many have tried to develop methods that reduce risk for investors. Since modern computing has made the implementation of concepts like deep learning and natural language processing practical, there has been a revolution in forecasting market trends. The democratization of company-related knowledge through the internet has also given stakeholders a means to learn about the assets they invest in via news media and social media, while stock trading has become easier through apps such as Robinhood. Nearly every company today has some kind of social media presence or is regularly covered by the news media. This presence can contribute to a company's growth by creating positive sentiment, or cause losses by creating negative sentiment around public events. Our goal in this paper is to study the influence of news media and social media on market trends using sentiment analysis. Keywords: Deep Learning, Natural Language Processing, Stock Market, Sentiment Analysis
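At its simplest, sentiment analysis of market news scores each headline against a lexicon and aggregates the scores per trading day. The sketch below illustrates that baseline; the tiny word lists are purely illustrative assumptions (the paper itself applies deep learning, and practical finance work uses learned models or large curated lexicons).

```python
# Tiny illustrative sentiment lexicon; real systems use trained models
# or far larger finance-specific lexicons.
POSITIVE = {"surge", "beat", "growth", "record", "upgrade"}
NEGATIVE = {"fall", "miss", "lawsuit", "recall", "downgrade"}

def headline_score(headline: str) -> int:
    """+1 per positive word, -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_sentiment(headlines) -> float:
    """Average headline score for one trading day; the sign gives the
    aggregate tone that a trend model could take as an input feature."""
    if not headlines:
        return 0.0
    return sum(headline_score(h) for h in headlines) / len(headlines)
```

A trend study would then pair each day's aggregate score with the next day's price movement and test for predictive association.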


The spread of fake news on online social media is a major public nuisance, and there is no state-of-the-art tool that detects whether a news item is fake or genuine in an automated manner. Hence, this paper analyses online social media and news feeds for the detection of fake news. The work proposes a solution using natural language processing and deep learning techniques for detecting fake news in online social media.
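A common starting point for text classification tasks like this is a bag-of-words Naive Bayes baseline, against which deep models such as the one this paper proposes are typically compared. The sketch below is that classical baseline (not the paper's method); the toy training texts in the usage example are invented for illustration.

```python
import math
from collections import Counter

class NaiveBayes:
    """Multinomial Naive Bayes over bag-of-words features with Laplace
    smoothing - a classical text-classification baseline."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.doc_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        scores = {}
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.doc_counts[c] / sum(self.doc_counts.values()))
            for w in text.lower().split():
                score += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            scores[c] = score
        return max(scores, key=scores.get)
```

Deep models improve on this mainly by capturing word order and context, which a bag-of-words representation discards entirely.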

