Learning sentiment-inherent word embedding for word-level and sentence-level sentiment analysis

Author(s):  
Zhihua Zhang ◽  
Man Lan

Over the past decade, research on sentiment analysis and opinion mining has evolved steadily and broadened in both perspective and objective. Sentiment analysis is an important task for gaining insight into the huge volumes of opinions generated on a daily basis. This analysis relies on opinions expressed by individuals: textual statements that may be positive or negative, or phrases that lend significance to their context. Such opinions not only convey the context but also attract the attention of new readers. Opinions can be analysed at the document level, the sentence level, the phrase level, the word level, and the level of special symbols; all of these tasks fall under the common label of sentiment analysis. Sentiment analysis in health care remains a narrow but growing research strand. This paper focuses on identifying sentiments in health care data. These sentiments can be medical test values, which may be numeric or nominal, and sometimes free text. Such sentiments are identified through pre-fragmentation of the data set and the Pointwise Mutual Information (PMI) measure. To demonstrate the approach, data on hypertensive pregnant women are considered.
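The abstract names Pointwise Mutual Information as the association measure but does not describe the feature set or the fragmentation step. Below is a minimal, hypothetical Python sketch of PMI scoring between pre-fragmented feature values and sentiment labels; the function name, the toy records, and the label set are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
from math import log2

def pmi_scores(records, sentiment_labels):
    """PMI between each feature value and a sentiment label.

    `records` is a list of (feature_value, label) pairs.
    PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ).
    """
    n = len(records)
    feat_counts = Counter(f for f, _ in records)
    label_counts = Counter(l for _, l in records)
    joint_counts = Counter(records)

    scores = {}
    for (f, l), c in joint_counts.items():
        if l in sentiment_labels:
            p_joint = c / n
            p_feat = feat_counts[f] / n
            p_label = label_counts[l] / n
            scores[(f, l)] = log2(p_joint / (p_feat * p_label))
    return scores

# Hypothetical usage on fragmented medical-test records:
records = [("bp_high", "negative"), ("bp_high", "negative"),
           ("bp_normal", "positive"), ("bp_normal", "positive"),
           ("bp_high", "positive")]
print(pmi_scores(records, {"positive", "negative"}))
```

A strongly positive PMI score indicates that a feature value co-occurs with a sentiment label more often than chance, which is the basis for treating such values as sentiment indicators.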


Author(s):  
Jingjing Wang ◽  
Jie Li ◽  
Shoushan Li ◽  
Yangyang Kang ◽  
Min Zhang ◽  
...  

Aspect sentiment classification, a challenging task in sentiment analysis, has attracted increasing attention in recent years. In this paper, we highlight the need to incorporate the importance degrees of both words and clauses inside a sentence and propose a hierarchical network with both word-level and clause-level attention for aspect sentiment classification. Specifically, we first adopt sentence-level discourse segmentation to segment a sentence into several clauses. Then, we leverage multiple Bi-directional LSTM layers to encode all clauses and propose a word-level attention layer to capture the importance degrees of words in each clause. Finally, we leverage another Bi-directional LSTM layer to encode the outputs of the former layers and propose a clause-level attention layer to capture the importance degrees of all the clauses inside a sentence. Experimental results on the laptop and restaurant datasets from SemEval-2015 demonstrate the effectiveness of our proposed approach to aspect sentiment classification.
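A minimal PyTorch sketch of this kind of hierarchy is given below: a word-level Bi-LSTM with attention produces one vector per clause, and a clause-level Bi-LSTM with attention combines the clause vectors into a sentence representation. It omits the paper's discourse segmentation and aspect conditioning, and the layer sizes, class names, and attention form are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    """Additive attention: score each time step, return the weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                        # h: (batch, steps, dim)
        weights = F.softmax(self.score(torch.tanh(self.proj(h))), dim=1)
        return (weights * h).sum(dim=1)          # (batch, dim)

class HierarchicalAspectClassifier(nn.Module):
    """Word-level Bi-LSTM + attention per clause, then clause-level Bi-LSTM + attention."""
    def __init__(self, vocab_size, emb_dim=100, hidden=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.word_attn = Attention(2 * hidden)
        self.clause_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.clause_attn = Attention(2 * hidden)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, clauses):                  # clauses: (batch, n_clauses, n_words) word ids
        b, n_clauses, n_words = clauses.shape
        words = self.embed(clauses.reshape(b * n_clauses, n_words))
        word_h, _ = self.word_lstm(words)
        clause_vecs = self.word_attn(word_h).reshape(b, n_clauses, -1)
        clause_h, _ = self.clause_lstm(clause_vecs)
        sent_vec = self.clause_attn(clause_h)
        return self.out(sent_vec)                # sentiment logits
```

The two attention layers correspond to the word-level and clause-level importance degrees described in the abstract: attention weights at the lower level select salient words within a clause, while weights at the upper level select salient clauses within the sentence.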


2019 ◽  
Vol 55 (2) ◽  
pp. 445-468 ◽  
Author(s):  
Aleksander Wawer

Abstract This article is a comprehensive review of freely available tools and software for sentiment analysis of texts written in Polish. It covers solutions that deal with all levels of linguistic analysis, from word-level through phrase-level up to sentence-level sentiment analysis. Technically, the tools include dictionaries, rule-based systems, and deep neural networks. The text also describes a solution for finding opinion targets. The article additionally contains remarks comparing the landscape of available tools for Polish with that for English. It is useful from the standpoint of multiple disciplines: not only information technology and computer science, but also applied linguistics and the social sciences.


Author(s):  
Dang Van Thin ◽  
Ngan Luu-Thuy Nguyen ◽  
Tri Minh Truong ◽  
Lac Si Le ◽  
Duy Tin Vo

Aspect-based sentiment analysis has been studied in both research and industrial communities over recent years. For low-resource languages, standard benchmark corpora play an important role in the development of methods. In this article, we introduce two benchmark corpora, the largest at the sentence level, for two tasks in Vietnamese: Aspect Category Detection and Aspect Polarity Classification. Our corpora are annotated with high inter-annotator agreement for the restaurant and hotel domains. Their release should push forward the low-resource language processing community. In addition, we deploy and compare the effectiveness of supervised learning methods with single-task and multi-task approaches based on deep learning architectures. Experimental results on our corpora show that the multi-task approach based on the BERT architecture outperforms the neural network architectures and the single-task approach. Our corpora and source code are published at the footnoted site.
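The abstract reports that a multi-task BERT model outperforms single-task and non-BERT baselines but does not detail the architecture. A minimal sketch of a shared-encoder, two-head multi-task setup (PyTorch with Hugging Face transformers) is shown below; the checkpoint name, head sizes, and loss combination are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel  # assumes Hugging Face transformers is installed

class MultiTaskAbsaModel(nn.Module):
    """Shared BERT encoder with two heads: aspect category detection (multi-label)
    and aspect polarity classification (multi-class). A sketch, not the authors' model."""
    def __init__(self, pretrained="bert-base-multilingual-cased",
                 num_categories=12, num_polarities=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained)
        hidden = self.encoder.config.hidden_size
        self.category_head = nn.Linear(hidden, num_categories)   # sigmoid per category
        self.polarity_head = nn.Linear(hidden, num_polarities)   # softmax over polarities

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.category_head(cls), self.polarity_head(cls)

# Joint training objective (one possible choice): binary cross-entropy over the
# category logits plus cross-entropy over the polarity logits, summed per batch.
```

Sharing the encoder lets the category-detection and polarity-classification signals regularize each other, which is one common explanation for why such multi-task setups outperform single-task training on small corpora.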


2021 ◽  
Vol 14 (4) ◽  
pp. 1-24
Author(s):  
Sushant Kafle ◽  
Becca Dingman ◽  
Matt Huenerfauth

There are style guidelines for authors who highlight important words in static text, e.g., bolded words in student textbooks, yet little research has investigated highlighting in dynamic texts, e.g., captions during educational videos for Deaf or Hard of Hearing (DHH) users. In our experimental study, DHH participants subjectively compared design parameters for caption highlighting, including: decoration (underlining vs. italicizing vs. boldfacing), granularity (sentence level vs. word level), and whether to highlight only the first occurrence of a repeating keyword. In partial contrast to recommendations in prior research, which had not been based on experimental studies with DHH users, we found that DHH participants preferred boldface, word-level highlighting in captions. Our empirical results provide guidance for the design of keyword highlighting during captioned videos for DHH users, especially in educational video genres.

