Neural knowledge transfer for low-resource sentiment analysis: cross-domain, cross-task & cross-lingual

Author(s):  
Zheng Li
2021 ◽  
Vol 11 (3) ◽  
pp. 29-45


Author(s):  
Kwun-Ping Lai ◽  
Jackie Chun-Sing Ho ◽  
Wai Lam

The authors investigate the task of multi-source cross-domain sentiment classification under the constraint of little labeled data. They propose a novel model capable of capturing sentiment terms with both strong and weak polarity from various source domains, which are useful for knowledge transfer to an unlabeled target domain. They further propose a two-step training strategy with different granularities that helps the model identify sentiment terms with different degrees of sentiment polarity. Specifically, the coarse-grained training step captures strong sentiment terms from the whole review, while the fine-grained training step focuses on latent fine-grained sentence sentiment, which is helpful under the constraint of little labeled data. Experiments on a real-world product review dataset show that the proposed model performs well even under the little-labeled-data constraint.
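A minimal sketch of how such a two-step strategy could be wired up, assuming a PyTorch-style classifier with one data loader of whole labeled reviews and one of individual sentences (all names here are illustrative assumptions, not the authors' implementation):

```python
# Sketch of coarse-then-fine training; `model`, `review_loader`, and
# `sentence_loader` are hypothetical stand-ins, not the authors' code.
import torch
import torch.nn as nn

def two_step_train(model, review_loader, sentence_loader, epochs=3, lr=2e-5):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    # Step 1 (coarse-grained): whole-review labels push the model to pick up
    # strong sentiment terms that dominate a review's polarity.
    for _ in range(epochs):
        for reviews, labels in review_loader:
            optimizer.zero_grad()
            criterion(model(reviews), labels).backward()
            optimizer.step()

    # Step 2 (fine-grained): sentence-level inputs (here assumed to inherit
    # their review's label) surface weaker, latent sentiment signals.
    for _ in range(epochs):
        for sentences, labels in sentence_loader:
            optimizer.zero_grad()
            criterion(model(sentences), labels).backward()
            optimizer.step()
    return model
```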


2015 ◽  
Author(s):  
Qiang Chen ◽  
Wenjie Li ◽  
Yu Lei ◽  
Xule Liu ◽  
Yanxiang He


Author(s):  
Preeti Arora ◽  
Deepali Virmani ◽  
P.S. Kulkarni

Sentiment analysis is a pre-eminent technology for extracting relevant information from a data domain. In this paper, a cross-domain sentiment classification approach, Cross_BOMEST, is proposed. The approach extracts positive (+ve) words using the existing BOMEST technique; with the help of MS Word Introp, Cross_BOMEST determines +ve words and replaces all of their synonyms to escalate the polarity, then blends two different domains and detects all the self-sufficient words. The proposed algorithm is executed on Amazon datasets, where two domains are used for training to analyze the sentiment of reviews from the remaining domain. The approach yields promising results in cross-domain analysis, achieving an accuracy of 92%. Cross_BOMEST improves the precision and recall of BOMEST by 16% and 7%, respectively.
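Since BOMEST and MS Word Introp are not publicly available tools, the sketch below approximates the synonym-replacement step with WordNet: every synonym of a known positive word is mapped back to that word, escalating the text's measurable polarity. The word lists and mapping logic are assumptions for illustration only.

```python
# Hypothetical approximation of Cross_BOMEST's synonym replacement, using
# WordNet in place of MS Word Introp (an assumption, not the paper's tooling).
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def escalate_polarity(tokens, positive_words):
    """Replace any synonym of a known +ve word with the word itself."""
    synonym_to_pos = {}
    for word in positive_words:
        for synset in wordnet.synsets(word):
            for lemma in synset.lemma_names():
                synonym_to_pos[lemma.lower()] = word
    return [synonym_to_pos.get(tok.lower(), tok) for tok in tokens]

# "outstanding" appears as a WordNet lemma of a "great" synset, so it is
# normalized to "great": ['the', 'camera', 'is', 'great']
print(escalate_polarity(["the", "camera", "is", "outstanding"], ["great"]))
```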


2012 ◽  
Vol 9 (3) ◽  
pp. 1231-1247 ◽  
Author(s):  
Mihaela Colhon

In this paper we present a method for constructing an English-Romanian treebank, together with the obtained evaluation results. The treebank is built upon a parallel English-Romanian corpus that is word-aligned and annotated at the morphological and syntactic levels. The syntactic trees of the Romanian texts are generated from the syntactic phrases of the parallel English texts, which are obtained automatically through syntactic parsing. The method reuses and adjusts existing tools and algorithms for cross-lingual transfer of syntactic constituents and for syntactic tree alignment.
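As an illustration of the cross-lingual transfer step, the sketch below projects an English constituent span onto the Romanian side through word-alignment links; the data structures are simplified assumptions rather than the paper's actual tools and algorithms.

```python
# Simplified constituent projection via word alignment (illustrative only).

def project_constituent(en_span, alignment):
    """en_span: (start, end) English token span, end exclusive.
       alignment: set of (en_idx, ro_idx) word-alignment links.
       Returns the smallest Romanian span covering all aligned tokens,
       or None if no token in the span is aligned."""
    ro_indices = [ro for en, ro in alignment if en_span[0] <= en < en_span[1]]
    if not ro_indices:
        return None
    return (min(ro_indices), max(ro_indices) + 1)

# English "the red car" (tokens 0-2) vs. Romanian "mașina roșie" (tokens 1-2);
# "the" is unaligned since Romanian suffixes the definite article.
links = {(1, 2), (2, 1)}                   # red -> roșie, car -> mașina
print(project_constituent((0, 3), links))  # -> (1, 3)
```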


Author(s):  
Qiang Chen ◽  
Wenjie Li ◽  
Yu Lei ◽  
Xule Liu ◽  
Chuwei Luo ◽  
...  


Author(s):  
Shu Jiang ◽  
Zuchao Li ◽  
Hai Zhao ◽  
Bao-Liang Lu ◽  
Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test datasets and has made remarkable progress. However, innumerable real-world scenarios are not covered by these datasets; that is, they are out-of-domain. As a result, parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performing in-domain parsers to a new domain, cross-domain transfer learning methods are essential. This paper examines two scenarios for cross-domain transfer learning: semi-supervised and unsupervised. Specifically, we adopt the pre-trained language model BERT, trained on the source-domain (in-domain) data at the subword level, and introduce self-training methods derived from tri-training for the two scenarios. Evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task indicate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
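A hedged sketch of the self-training recipe described above, reduced to a single teacher for brevity (tri-training proper maintains three parsers that label data for one another); the Parser interface and its methods are hypothetical:

```python
# Hypothetical single-teacher self-training loop for cross-domain parsing;
# `parser.fit` and `parser.parse_with_score` are assumed interfaces.

def self_train(parser, source_data, target_sentences, rounds=3, threshold=0.9):
    parser.fit(source_data)  # supervised training on the in-domain treebank
    for _ in range(rounds):
        pseudo = []
        for sent in target_sentences:
            tree, confidence = parser.parse_with_score(sent)
            if confidence >= threshold:   # keep only confident parses
                pseudo.append((sent, tree))
        # retrain on gold source trees plus pseudo-labeled target trees
        parser.fit(source_data + pseudo)
    return parser
```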

