Domain Specific Learning for Sentiment Classification and Activity Recognition

IEEE Access ◽ 2018 ◽ Vol 6 ◽ pp. 53611-53619
Author(s): Hong-Bo Wang ◽ Yanze Xue ◽ Xiaoxiao Zhen ◽ Xuyan Tu
2018 ◽ Vol 6 ◽ pp. 269-285
Author(s): Andrius Mudinas ◽ Dell Zhang ◽ Mark Levene

There is often the need to perform sentiment classification in a particular domain where no labeled document is available. Although we could make use of a general-purpose off-the-shelf sentiment classifier or a pre-built one for a different domain, the effectiveness would be inferior. In this paper, we explore the possibility of building domain-specific sentiment classifiers with unlabeled documents only. Our investigation indicates that in the word embeddings learned from the unlabeled corpus of a given domain, the distributed word representations (vectors) for opposite sentiments form distinct clusters, though those clusters are not transferable across domains. Exploiting such a clustering structure, we are able to utilize machine learning algorithms to induce a quality domain-specific sentiment lexicon from just a few typical sentiment words (“seeds”). An important finding is that simple linear model based supervised learning algorithms (such as linear SVM) can actually work better than more sophisticated semi-supervised/transductive learning algorithms which represent the state-of-the-art technique for sentiment lexicon induction. The induced lexicon could be applied directly in a lexicon-based method for sentiment classification, but a higher performance could be achieved through a two-phase bootstrapping method which uses the induced lexicon to assign positive/negative sentiment scores to unlabeled documents first, and then uses those documents found to have clear sentiment signals as pseudo-labeled examples to train a document sentiment classifier via supervised learning algorithms (such as LSTM). On several benchmark datasets for document sentiment classification, our end-to-end pipelined approach which is overall unsupervised (except for a tiny set of seed words) outperforms existing unsupervised approaches and achieves an accuracy comparable to that of fully supervised approaches.
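The lexicon-induction step described in this abstract can be sketched in a few lines: train a simple linear classifier on the embedding vectors of a handful of seed words, then label every other in-domain word with it. The toy 2-D "embeddings" and seed words below are illustrative assumptions standing in for vectors actually learned from an unlabeled domain corpus.

```python
# Minimal sketch of seed-based sentiment lexicon induction over word embeddings.
# Toy 2-D vectors are placeholders for real in-domain embeddings.
import numpy as np
from sklearn.svm import LinearSVC

# In a real setting these come from word2vec/GloVe trained on the domain corpus;
# the abstract's observation is that opposite sentiments form distinct clusters.
embeddings = {
    "good":      np.array([0.9, 0.8]),
    "excellent": np.array([1.0, 0.7]),
    "bad":       np.array([-0.8, -0.9]),
    "terrible":  np.array([-1.0, -0.7]),
    "reliable":  np.array([0.8, 0.9]),   # unlabeled candidate word
    "faulty":    np.array([-0.9, -0.8]), # unlabeled candidate word
}

# A few typical sentiment words act as seeds (+1 positive, -1 negative).
seeds = {"good": 1, "excellent": 1, "bad": -1, "terrible": -1}

X = np.stack([embeddings[w] for w in seeds])
y = np.array(list(seeds.values()))

# A plain linear SVM, which the paper finds competitive with more
# sophisticated semi-supervised lexicon-induction methods.
clf = LinearSVC().fit(X, y)

# Score every non-seed word to induce the domain-specific lexicon.
lexicon = {w: int(clf.predict(v.reshape(1, -1))[0])
           for w, v in embeddings.items() if w not in seeds}
print(lexicon)  # e.g. {'reliable': 1, 'faulty': -1}
```

In the paper's full pipeline, documents scored confidently by such a lexicon would then serve as pseudo-labeled training data for a supervised document classifier such as an LSTM.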


2018 ◽ Vol 160 ◽ pp. 1-15
Author(s): Yijing Li ◽ Haixiang Guo ◽ Qingpeng Zhang ◽ Mingyun Gu ◽ Jianying Yang

2009 ◽ Vol 37 (1) ◽ pp. 10-20
Author(s): Benjamin Martin Bly ◽ Ricardo E. Carrión ◽ Björn Rasch

Electronics ◽ 2021 ◽ Vol 10 (3) ◽ pp. 270
Author(s): Hanqian Wu ◽ Zhike Wang ◽ Feng Qing ◽ Shoushan Li

Though great progress has been made in the Aspect-Based Sentiment Analysis (ABSA) task through research, most of the previous work focuses on English-based ABSA problems, and there are few efforts on other languages, mainly due to the lack of training data. In this paper, we propose an approach for performing a Cross-Lingual Aspect Sentiment Classification (CLASC) task which leverages the rich resources in one language (source language) for aspect sentiment classification in an under-resourced language (target language). Specifically, we first build a bilingual lexicon for domain-specific training data to translate the aspect category annotated in the source-language corpus, and then translate sentences from the source language to the target language via Machine Translation (MT) tools. However, since most MT systems are general-purpose, they unavoidably introduce translation ambiguities which degrade the performance of CLASC. In this context, we propose a novel approach called Reinforced Transformer with Cross-Lingual Distillation (RTCLD), combined with target-sensitive adversarial learning, to minimize the undesirable effects of translation ambiguities in sentence translation. We conduct experiments on different language combinations, treating English as the source language and Chinese, Russian, and Spanish as target languages. The experimental results show that our proposed approach outperforms the state-of-the-art methods on different target languages.
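The data-preparation step this abstract describes (aspect categories mapped through a bilingual lexicon, sentences sent through an MT tool) can be sketched as below. The lexicon entries are illustrative assumptions, and `translate_sentence` is a stub standing in for a real MT system such as the general-purpose tools the paper mentions.

```python
# Minimal sketch of translating an annotated source-language example into the
# target language: lexicon lookup for the aspect category, MT for the sentence.

# Hypothetical domain-specific bilingual lexicon (English -> Chinese).
aspect_lexicon = {
    "FOOD#QUALITY": "食物#质量",
    "SERVICE#GENERAL": "服务#总体",
}

def translate_sentence(sentence: str) -> str:
    """Stand-in for an external MT tool; a real system would call one here."""
    return sentence

def translate_example(example: dict) -> dict:
    """Translate one annotated source-language example into the target language."""
    return {
        "sentence": translate_sentence(example["sentence"]),
        "aspect": aspect_lexicon[example["aspect"]],  # lexicon-based mapping
        "polarity": example["polarity"],              # labels carry over as-is
    }

src = {"sentence": "The food was great.",
       "aspect": "FOOD#QUALITY",
       "polarity": "positive"}
tgt = translate_example(src)
print(tgt["aspect"])  # 食物#质量
```

The translation ambiguities the paper targets arise inside the MT step stubbed out here; its RTCLD model with target-sensitive adversarial learning is trained on top of such translated data to reduce their effect.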


2008 ◽ Vol 31 (5) ◽ pp. 532-533
Author(s): Teresa Satterfield

Christiansen & Chater (C&C) focus solely on general-purpose cognitive processes in their elegant conceptualization of language evolution. However, numerous developmental facts attested in L1 acquisition confound C&C's subsequent claim that the logical problem of language acquisition now plausibly recapitulates that of language evolution. I argue that language acquisition should be viewed instead as a multi-layered construction involving the interplay of general and domain-specific learning mechanisms.

