Incorporating Word Significance into Aspect-Level Sentiment Analysis

2019 · Vol 9 (17) · pp. 3522
Author(s): Refuoe Mokhosi, ZhiGuang Qin, Qiao Liu, Casper Shikali

Aspect-level sentiment analysis has drawn growing attention in recent years, with higher performance achieved through the attention mechanism. Despite this, previous research does not consider some human psychological evidence relating to language interpretation. This results in attention being paid to less significant words, especially when the aspect word is far from the relevant context word or when an important context word appears at the end of a long sentence. We design a novel model that uses word significance to direct attention towards the most significant words, with novelty decay and incremental interpretation factors working together as an alternative to position-based models. The interpretation factor represents the maximization of the degree to which each newly encountered word contributes to the sentiment polarity, while a counterbalancing stretched exponential novelty decay factor models the decaying human reaction as a sentence gets longer. Our findings support the hypothesis that the attention mechanism should be applied to the most significant words for sentiment interpretation and that novelty decay is applicable in aspect-level sentiment analysis with a decay factor β = 0.7.
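
The abstract reports the decay exponent (β = 0.7) but not the full functional form, so the sketch below assumes the common stretched exponential parameterization exp(-(t/τ)^β) over token positions; the characteristic length τ, the multiplicative combination with attention scores, and the renormalization step are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def novelty_decay(positions, tau=5.0, beta=0.7):
    """Stretched exponential decay exp(-(t/tau)**beta) over positions.

    beta = 0.7 matches the value reported in the abstract; tau (a
    characteristic sentence length) is an assumed parameter.
    """
    return np.exp(-(positions / tau) ** beta)

def significance_weights(attention_scores, tau=5.0, beta=0.7):
    """Damp raw attention scores by novelty decay, then renormalize."""
    scores = np.asarray(attention_scores, dtype=float)
    positions = np.arange(len(scores), dtype=float)
    weighted = scores * novelty_decay(positions, tau, beta)
    return weighted / weighted.sum()

# Later positions receive progressively less weight:
print(significance_weights([0.2, 0.3, 0.1, 0.4]))
```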

Information · 2020 · Vol 11 (5) · pp. 280
Author(s): Shaoxiu Wang, Yonghua Zhu, Wenjing Gao, Meng Cao, Mengyao Li

The sentiment analysis of microblog text has always been a challenging research field due to the limited and complex contextual information. However, most existing sentiment analysis methods for microblogs focus on classifying the polarity of emotional keywords while ignoring the transitional or progressive impact that words at different positions in the Chinese syntactic structure have on global sentiment, as well as the utilization of emojis. To this end, we propose the emotion-semantic-enhanced bidirectional long short-term memory (BiLSTM) network with a multi-head attention mechanism (EBILSTM-MH) for sentiment analysis. This model uses BiLSTM to learn feature representations of the input texts, given the word embeddings. Subsequently, the attention mechanism assigns an attentive weight to each word based on the impact of emojis. The attentive weights are combined with the output of the hidden layer to obtain the feature representation of a post. Finally, the sentiment polarity of the microblog post is obtained through a dense connection layer. The experimental results show the feasibility of our proposed model for microblog sentiment analysis when compared with other baseline models.
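
As a rough PyTorch illustration of the pipeline the abstract describes (BiLSTM encoding, multi-head attention, dense classification), the sketch below is a minimal reconstruction, not the authors' code. How emojis enter the model is not specified, so aligning an emoji embedding with each token is an assumption, as are all dimensions and the mean pooling.

```python
import torch
import torch.nn as nn

class EBiLSTMMH(nn.Module):
    """Sketch of a BiLSTM + multi-head attention sentiment classifier."""

    def __init__(self, vocab_size, emoji_size, emb_dim=128, hidden=128,
                 heads=4, num_classes=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.emo_emb = nn.Embedding(emoji_size, emb_dim)
        self.bilstm = nn.LSTM(2 * emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads,
                                          batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, tokens, emojis):
        # tokens, emojis: (batch, seq) id tensors; emoji ids are assumed
        # to be aligned with tokens (a padding id where none occurs).
        x = torch.cat([self.tok_emb(tokens), self.emo_emb(emojis)], dim=-1)
        h, _ = self.bilstm(x)          # (batch, seq, 2 * hidden)
        a, _ = self.attn(h, h, h)      # multi-head self-attention
        pooled = a.mean(dim=1)         # average the attended states
        return self.fc(pooled)         # logits over sentiment classes
```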


2010 · Vol 30 (6) · pp. 722-731
Author(s): Aanand D. Naik, Hardeep Singh

Background. Processes of communication that guide decision making among clinicians collaboratively caring for complex patients are poorly understood and vary based on local contexts. In this paper, the authors characterize these processes and propose a wiki-style communication model to improve coordination of decision making among clinicians using an integrated electronic health record (EHR). Methods. A narrative review of current patterns of communication among clinicians sharing medical decisions, focusing on the emerging and potential roles of EHRs in enhancing communication among clinicians caring for complex patients. Results. The authors present a taxonomy of decision making and communication among clinicians caring for complex patients. They then adapt wiki-style communication to propose a novel model of communication among clinicians for decision making within multidisciplinary disease management programs. Future innovations using wiki-style communication among clinicians are also described and placed in the context of medical decisions made by clinicians working together in disease management programs. Conclusions. EHR-based wiki-style applications may have the potential to improve communication and care coordination among clinicians caring for complex patients. This could lead to improved quality and safety within multidisciplinary disease management programs.


Author(s): Bowen Xing, Lejian Liao, Dandan Song, Jingang Wang, Fuzheng Zhang, ...

Aspect-based sentiment analysis (ABSA) aims to predict fine-grained sentiments of comments with respect to given aspect terms or categories. In previous ABSA methods, the importance of the aspect has been recognized and verified. Most existing LSTM-based models take the aspect into account via the attention mechanism, where attention weights are calculated after the context is modeled in the form of contextual vectors. However, during context modeling, classic LSTM cells may already discard aspect-related information and retain aspect-irrelevant information, which leaves room for more effective context representations. This paper proposes a novel variant of LSTM, termed aspect-aware LSTM (AA-LSTM), which incorporates aspect information into the LSTM cells in the context modeling stage, before the attention mechanism. Therefore, our AA-LSTM can dynamically produce aspect-aware contextual representations. We experiment with several representative LSTM-based models by replacing the classic LSTM cells with AA-LSTM cells. Experimental results on the SemEval-2014 datasets demonstrate the effectiveness of AA-LSTM.
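
The abstract states that aspect information enters the LSTM cells themselves but not the exact gating equations, so the cell below is one plausible reading: the aspect vector is concatenated with the input and previous hidden state inside every gate. The wiring is an assumption, not the paper's published formulation.

```python
import torch
import torch.nn as nn

class AspectAwareLSTMCell(nn.Module):
    """Sketch of an LSTM cell whose gates also see an aspect vector."""

    def __init__(self, input_dim, aspect_dim, hidden_dim):
        super().__init__()
        cat = input_dim + hidden_dim + aspect_dim
        self.i = nn.Linear(cat, hidden_dim)  # input gate
        self.f = nn.Linear(cat, hidden_dim)  # forget gate
        self.o = nn.Linear(cat, hidden_dim)  # output gate
        self.g = nn.Linear(cat, hidden_dim)  # candidate cell state

    def forward(self, x, aspect, state):
        h, c = state
        z = torch.cat([x, h, aspect], dim=-1)
        i = torch.sigmoid(self.i(z))
        f = torch.sigmoid(self.f(z))
        o = torch.sigmoid(self.o(z))
        c = f * c + i * torch.tanh(self.g(z))  # aspect-aware cell update
        h = o * torch.tanh(c)
        return h, c
```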


Author(s): Yan Zhou, Longtao Huang, Tao Guo, Jizhong Han, Songlin Hu

Target-based sentiment analysis aims at extracting opinion targets and classifying the sentiment polarity expressed on each target. Recently, token-based sequence tagging methods, which predict a tag for each token, have been successfully applied to solve the two tasks jointly. Since they do not treat a target containing several words as a whole, it can be difficult for them to exploit global information when identifying an opinion target, leading to incorrect extractions. Independently predicting the sentiment for each token may also lead to sentiment inconsistency across the words of an opinion target. In this paper, inspired by span-based methods in NLP, we propose a simple and effective joint model that conducts extraction and classification at the span level rather than the token level. Our model first enumerates spans of one or more tokens and learns their representations based on the tokens inside. Then, a span-aware attention mechanism is designed to compute the sentiment information for each span. Extensive experiments on three benchmark datasets show that our model consistently outperforms state-of-the-art methods.
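
A minimal sketch of the span-level idea, under stated assumptions: spans are enumerated up to a fixed width, represented by mean pooling their token states, and a span-aware attention gathers sentiment evidence from the whole sentence. The pooling, scoring heads, and width limit are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn

def enumerate_spans(seq_len, max_width=4):
    """All (start, end) index pairs up to max_width tokens, end inclusive."""
    return [(i, j) for i in range(seq_len)
            for j in range(i, min(i + max_width, seq_len))]

class SpanSentiment(nn.Module):
    """Sketch of joint span-level target extraction and classification."""

    def __init__(self, hidden, num_polarities=3):
        super().__init__()
        self.target_score = nn.Linear(hidden, 1)   # is this span a target?
        self.attn = nn.Linear(hidden, hidden)      # span-aware attention
        self.polarity = nn.Linear(hidden, num_polarities)

    def forward(self, token_states, spans):
        # token_states: (seq, hidden) encoder output for one sentence.
        outputs = []
        for start, end in spans:
            span_repr = token_states[start:end + 1].mean(dim=0)
            # Score every token in the sentence against this span,
            # then pool a span-specific sentiment context vector.
            scores = token_states @ self.attn(span_repr)
            weights = torch.softmax(scores, dim=0).unsqueeze(-1)
            ctx = (weights * token_states).sum(dim=0)
            outputs.append((self.target_score(span_repr),
                            self.polarity(ctx)))
        return outputs  # one (target logit, polarity logits) pair per span
```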


2020 · Vol 34 (05) · pp. 9065-9072
Author(s): Luu Anh Tuan, Darsh Shah, Regina Barzilay

Automatic question generation can benefit many applications, ranging from dialogue systems to reading comprehension. While questions are often asked with respect to long documents, modeling such long documents poses many challenges. Many existing techniques generate questions by effectively looking at one sentence at a time, leading to questions that are easy and not reflective of the human process of question generation. Our goal is to incorporate interactions across multiple sentences to generate realistic questions for long documents. In order to link a broad document context to the target answer, we represent the relevant context via a multi-stage attention mechanism, which forms the foundation of a sequence-to-sequence model. We outperform state-of-the-art methods on question generation on three question-answering datasets: SQuAD, MS MARCO, and NewsQA.
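
The abstract does not define the stages of the attention mechanism, so the sketch below guesses at a two-stage scheme: a coarse pass scores whole sentences against an answer representation, and a fine pass attends over tokens reweighted by their sentence's coarse score. The bilinear scorers and the log-score gating are assumptions, not the authors' formulation.

```python
import torch
import torch.nn as nn

class MultiStageAttention(nn.Module):
    """Sketch: sentence-level scores gate token-level attention."""

    def __init__(self, hidden):
        super().__init__()
        self.coarse = nn.Bilinear(hidden, hidden, 1)  # sentence vs. answer
        self.fine = nn.Bilinear(hidden, hidden, 1)    # token vs. answer

    def forward(self, sent_reprs, token_states, sent_ids, answer):
        # sent_reprs: (n_sents, h); token_states: (n_tokens, h)
        # sent_ids: (n_tokens,) long tensor mapping tokens to sentences
        # answer: (h,) pooled representation of the target answer
        ans_s = answer.unsqueeze(0).expand(sent_reprs.size(0), -1)
        sent_w = torch.softmax(self.coarse(sent_reprs, ans_s).squeeze(-1), 0)
        ans_t = answer.unsqueeze(0).expand(token_states.size(0), -1)
        tok_scores = self.fine(token_states, ans_t).squeeze(-1)
        # Adding log sentence weights multiplies the two distributions.
        tok_w = torch.softmax(tok_scores + sent_w[sent_ids].log(), 0)
        # Context vector fed to the sequence-to-sequence decoder.
        return (tok_w.unsqueeze(-1) * token_states).sum(dim=0)
```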


2020 · Vol 2020 · pp. 1-13
Author(s): Xiaodi Wang, Xiaoliang Chen, Mingwei Tang, Tian Yang, Zhen Wang

The aim of aspect-level sentiment analysis is to identify the sentiment polarity of a given target term in a sentence. Existing neural network models provide a useful account of how to judge polarity. However, the relative position information between context words and target terms is often ignored owing to the limitations of training datasets. Incorporating position features between words into these models can improve the accuracy of sentiment classification. Hence, this study proposes an improved classification model that combines a multilevel interactive bidirectional Gated Recurrent Unit (GRU), attention mechanisms, and position features (MI-biGRU). Firstly, the position features of the words in a sentence are initialized to enrich the word embeddings. Secondly, the approach extracts the features of target terms and context using a well-constructed multilevel interactive bidirectional neural network. Thirdly, an attention mechanism is introduced so that the model pays greater attention to words that are important for sentiment analysis. Finally, four classic sentiment classification datasets are used for the aspect-level tasks. Experimental results indicate a correlation between the multilevel interactive attention network and the position features, and that MI-biGRU markedly improves classification performance.
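
A compressed sketch of the MI-biGRU ingredients named above: position features appended to word embeddings, a bidirectional GRU encoder, and attention pooling before classification. The linear distance-decay feature and single-level encoder here are simplifications (the paper's network is multilevel and interactive), so treat all choices as assumptions.

```python
import torch
import torch.nn as nn

def position_features(seq_len, target_start, target_end):
    """Relative-distance weight of each token to the target term.

    One common formulation (assumed here): weight 1 at the target,
    decaying linearly with token distance.
    """
    dist = torch.tensor(
        [0 if target_start <= i <= target_end
         else min(abs(i - target_start), abs(i - target_end))
         for i in range(seq_len)], dtype=torch.float)
    return 1.0 - dist / seq_len              # shape (seq_len,)

class MIBiGRU(nn.Module):
    """Sketch of a position-enriched BiGRU with attention pooling."""

    def __init__(self, vocab, emb_dim=128, hidden=64, num_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)
        self.gru = nn.GRU(emb_dim + 1, hidden, batch_first=True,
                          bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, tokens, pos_feat):
        # tokens: (batch, seq) ids; pos_feat: (batch, seq) position weights
        x = torch.cat([self.emb(tokens), pos_feat.unsqueeze(-1)], dim=-1)
        h, _ = self.gru(x)                       # (batch, seq, 2 * hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)
        ctx = (w.unsqueeze(-1) * h).sum(dim=1)   # attention-pooled context
        return self.fc(ctx)                      # sentiment logits
```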

