An Empirical Evaluation of Arabic-Specific Embeddings for Sentiment Analysis

Author(s):  
Amira Barhoumi
Nathalie Camelin
Chafik Aloulou
Yannick Estève
Lamia Hadrich Belguith
2017, Vol 52 (3), pp. 2081-2097
Author(s):  
Carlos Gómez-Rodríguez
Iago Alonso-Alonso
David Vilares

Author(s):  
Hala Mulki
Hatem Haddad
Mourad Gridach
Ismail Babaoğlu

Social media reflects public attitudes towards specific events. Events are often related to persons, locations, or organizations, the so-called Named Entities (NEs); NEs can therefore be regarded as sentiment-bearing components. In this paper, we go beyond NE recognition to exploit sentiment-annotated NEs in Arabic sentiment analysis. To this end, we develop an algorithm that detects the sentiment of each NE from the majority of attitudes expressed towards it. This allows NEs to be tagged with appropriate sentiment labels and thus included in a sentiment analysis framework comprising two models: supervised and lexicon-based. Both models were applied to datasets of multi-dialectal content. The results reveal that NEs have no considerable impact on the supervised model, while employing NEs in the lexicon-based model improved classification performance and outperformed most of the baseline systems.
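The majority-vote idea in this abstract can be sketched in a few lines; the function name and the (entity, sentiment) input format below are illustrative assumptions, not the paper's published algorithm:

```python
from collections import Counter

def ne_sentiment(mentions):
    """Tag each named entity with the majority sentiment of the
    posts mentioning it (hypothetical helper sketching the
    majority-of-attitudes rule described in the abstract).

    mentions: iterable of (entity, sentiment) pairs, where
    sentiment is e.g. "pos", "neg", or "neu".
    Returns: dict mapping entity -> majority sentiment tag.
    """
    votes = {}
    for entity, sentiment in mentions:
        votes.setdefault(entity, Counter())[sentiment] += 1
    # most_common(1) yields the sentiment with the highest count
    return {e: c.most_common(1)[0][0] for e, c in votes.items()}

tags = ne_sentiment([
    ("Cairo", "pos"), ("Cairo", "pos"), ("Cairo", "neg"),
    ("ACME", "neg"),
])
print(tags)  # {'Cairo': 'pos', 'ACME': 'neg'}
```

Once tagged this way, an NE can be treated like any other sentiment-bearing lexicon entry in the lexicon-based model.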


Author(s):  
Ali Bou Nassif
Abdollah Masoud Darya
Ashraf Elnagar

This work presents a detailed performance comparison of deep learning models (convolutional neural networks, long short-term memory networks, gated recurrent units, and their hybrids) and a selection of shallow learning classifiers for sentiment analysis of Arabic reviews. The comparison also includes state-of-the-art models such as the Transformer architecture and the AraBERT pre-trained model. The datasets used in this study are multi-dialect Arabic hotel and book review datasets, which are among the largest publicly available datasets for Arabic reviews. Results show deep learning outperforming shallow learning on both binary and multi-label classification, in contrast with similar work reported in the literature. This discrepancy stems from dataset size, which we found to be proportional to the performance of deep learning models. The performance of deep and shallow learning techniques was analyzed in terms of accuracy and F1 score. The best-performing shallow learning technique was Random Forest, followed by Decision Tree and AdaBoost. The deep learning models performed similarly with a default embedding layer, while the Transformer model performed best when augmented with AraBERT.
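The accuracy and F1 metrics used in the comparison above follow the standard definitions; a minimal self-contained sketch (the label vectors here are illustrative, not drawn from the paper's datasets):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["pos", "neg", "pos", "neu", "neg"]
y_pred = ["pos", "neg", "neg", "neu", "pos"]
print(accuracy(y_true, y_pred))  # 0.6
```

Macro averaging treats all sentiment classes equally, which matters when class frequencies in the review datasets are imbalanced.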


2019, Vol 22 (4), pp. 741-752
Author(s):
Ajeet Ram Pathak
Manjusha Pandey
Siddharth Rautaray
