Adapting deep learning for sentiment classification of code-switched informal short text

Author(s): Muhammad Haroon Shakeel, Asim Karim
2021, pp. 211-218

Author(s): Sarah Anis, Sally Saad, Mostafa Aref
2020, Vol. 1684, pp. 012047

Author(s): Zhichao Zhu, Zui Zhu, Wenjun Zhu
2017, Vol. 23(4), pp. 268-273

Author(s): Sunjae Kwon, Juae Kim, Sangwoo Kang, Jungyun Seo
Symmetry, 2019, Vol. 12(1), pp. 8

Author(s): Jing Chen, Jun Feng, Xia Sun, Yang Liu

Sentiment classification of forum posts in massive open online courses (MOOCs) is essential for educators to make timely interventions and for instructors to improve learning performance; a lack of monitoring of learners' sentiments can lead to high course dropout rates. Recently, deep learning has emerged as an outstanding machine learning technique for sentiment classification, as it extracts complex features automatically with rich representational capability. However, deep neural networks rely on large amounts of labeled data for supervised training, and constructing large-scale labeled training datasets for sentiment classification is laborious and time consuming. To address this problem, this paper proposes a co-training, semi-supervised deep learning model for sentiment classification that leverages limited labeled data and massive unlabeled data simultaneously to achieve performance comparable to methods trained on massive labeled data. To satisfy co-training's requirement of two views, we encode texts into vectors independently from a word-embedding view and a character-based embedding view, capturing words' external and internal information, respectively. To improve classification performance with limited data, we propose a double-check sample selection strategy that selects high-confidence samples to augment the training set iteratively. In addition, we propose a mixed loss function that considers both the labeled data, weighted asymmetrically, and the unlabeled data. Our proposed method achieved an 89.73% average accuracy and a 93.55% average F1-score, about 2.77% and 3.2% higher than the baseline methods, respectively. Experimental results demonstrate the effectiveness of the proposed model trained on limited labeled data, which performs much better than those trained on massive labeled data.
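The double-check selection step lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering of the idea, not the paper's implementation: it substitutes logistic-regression classifiers for the paper's deep networks, and the function name, confidence threshold, and round count are assumptions. A sample from the unlabeled pool is pseudo-labeled only when both view classifiers agree on the label and both exceed the confidence threshold.

```python
# Minimal sketch of double-check co-training sample selection,
# assuming two precomputed feature views (word- and character-level).
# All names, thresholds, and the choice of LogisticRegression are
# illustrative assumptions, not the paper's actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cotrain_double_check(Xw_l, Xc_l, y_l, Xw_u, Xc_u,
                         conf_threshold=0.9, rounds=5):
    """Iteratively augment the labeled set with double-checked pseudo-labels.

    Xw_*/Xc_* are numpy arrays holding word-view and character-view
    features for the labeled (_l) and unlabeled (_u) pools.
    """
    for _ in range(rounds):
        # Train one classifier per view on the current labeled set.
        clf_w = LogisticRegression(max_iter=1000).fit(Xw_l, y_l)
        clf_c = LogisticRegression(max_iter=1000).fit(Xc_l, y_l)
        if len(Xw_u) == 0:
            break
        pw = clf_w.predict_proba(Xw_u)  # word-view class probabilities
        pc = clf_c.predict_proba(Xc_u)  # char-view class probabilities
        yw, yc = pw.argmax(1), pc.argmax(1)
        # The "double check": both views must agree AND both be confident.
        agree = yw == yc
        confident = (pw.max(1) >= conf_threshold) & (pc.max(1) >= conf_threshold)
        pick = agree & confident
        if not pick.any():
            break
        # Move the selected samples into the labeled set with pseudo-labels.
        Xw_l = np.vstack([Xw_l, Xw_u[pick]])
        Xc_l = np.vstack([Xc_l, Xc_u[pick]])
        y_l = np.concatenate([y_l, yw[pick]])
        keep = ~pick
        Xw_u, Xc_u = Xw_u[keep], Xc_u[keep]
    return clf_w, clf_c

# Tiny synthetic demo (random data, purely to exercise the loop).
rng = np.random.default_rng(0)
Xw_l, Xc_l = rng.normal(size=(20, 5)), rng.normal(size=(20, 8))
y_l = rng.integers(0, 2, size=20)
Xw_u, Xc_u = rng.normal(size=(100, 5)), rng.normal(size=(100, 8))
clf_w, clf_c = cotrain_double_check(Xw_l, Xc_l, y_l, Xw_u, Xc_u)
```

Requiring agreement between two independent views is what distinguishes this scheme from single-model self-training: a sample that one view confidently mislabels is usually filtered out by the other, which keeps pseudo-label noise in the augmented training set low.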

