Emotion Label Enhancement via Emotion Wheel and Lexicon

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xueqiang Zeng ◽  
Qifan Chen ◽  
Sufen Chen ◽  
Jiali Zuo

Emotion Distribution Learning (EDL) is a recently proposed multi-emotion analysis paradigm that identifies basic emotions expressed to different degrees in a sentence. Unlike traditional methods, EDL quantitatively models the expression degree of each emotion on a given instance as an emotion distribution. However, the emotion labels in most existing emotion datasets are crisp logical labels. To make such datasets usable for EDL, label enhancement converts logical emotion labels into emotion distributions. This paper proposes a novel label enhancement method, Emotion Wheel and Lexicon-based emotion distribution Label Enhancement (EWLLE), which exploits the linguistic emotional information of affective words together with the psychological knowledge encoded in Plutchik's emotion wheel. EWLLE generates separate discrete Gaussian distributions for the sentence-level emotion label and for the emotion labels of sentiment words, based on psychological emotion distance, and then combines the two types of information into a unified emotion distribution by superposition of the distributions. Extensive experiments on four commonly used text emotion datasets show that EWLLE has a distinct advantage over existing EDL label enhancement methods on the emotion classification task.
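The superposition step described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the wheel ordering, the mixing weight `alpha`, and the Gaussian width `sigma` are assumed values chosen for demonstration.

```python
import math

# Plutchik's eight basic emotions, in wheel order (adjacent = psychologically close).
WHEEL = ["joy", "trust", "fear", "surprise", "sadness", "disgust", "anger", "anticipation"]

def wheel_distance(a, b):
    """Angular distance between two emotions on the wheel (0..4 steps)."""
    i, j = WHEEL.index(a), WHEEL.index(b)
    d = abs(i - j)
    return min(d, len(WHEEL) - d)

def discrete_gaussian(center, sigma=1.0):
    """Discrete Gaussian over the wheel, centered on one emotion label."""
    w = [math.exp(-wheel_distance(center, e) ** 2 / (2 * sigma ** 2)) for e in WHEEL]
    s = sum(w)
    return [x / s for x in w]

def ewlle(sentence_label, word_labels, alpha=0.6, sigma=1.0):
    """Superpose the sentence-label distribution (weight alpha) with the
    averaged sentiment-word distributions (weight 1 - alpha), then renormalize."""
    dist = [alpha * p for p in discrete_gaussian(sentence_label, sigma)]
    for wl in word_labels:
        g = discrete_gaussian(wl, sigma)
        for k in range(len(WHEEL)):
            dist[k] += (1 - alpha) * g[k] / len(word_labels)
    s = sum(dist)
    return [x / s for x in dist]
```

Because the Gaussians are centered by wheel distance, emotions adjacent to the labels receive nonzero probability mass, which is the "soft" label the EDL classifier is trained on.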

Author(s):  
Mei Li ◽  
Jiajun Zhang ◽  
Xiang Lu ◽  
Chengqing Zong

Emotional dialogue generation aims to generate appropriate responses that are content-relevant to the query and emotion-consistent with the given emotion tag. Previous work mainly focuses on incorporating emotion information into sequence-to-sequence or conditional variational auto-encoder (CVAE) models, usually utilizing the given emotion tag as a conditional feature to influence the response generation process. However, the emotion tag as a feature cannot guarantee emotion consistency between the response and the given tag. In this article, we propose a novel Dual-View CVAE model that explicitly models content relevance and emotion consistency jointly. The two views gather emotional information and content-relevant information, respectively, from the latent distribution of responses. We model the dual views jointly via VAE to obtain richer and complementary information. Extensive experiments on both English and Chinese emotion dialogue datasets demonstrate the effectiveness of the proposed Dual-View CVAE model, which significantly outperforms strong baseline models in both content relevance and emotion consistency.


Author(s):  
John Abbott ◽  
Anna Maria Bigatti ◽  
Lorenzo Robbiano

The main focus of this paper is the problem of relating an ideal I in the polynomial ring Q[x_1, …, x_n] to a corresponding ideal in F_p[x_1, …, x_n], where p is a prime number; in other words, the reduction modulo p of I. We first define a new notion of σ-good prime for I, which depends on the term ordering σ but not on the given generators of I. We relate our notion of σ-good primes to some other similar notions already in the literature. Then we introduce and describe a new invariant, called the universal denominator, which frees our definition of reduction modulo p from the term ordering, thus letting us show that all but finitely many primes are good for I. One characteristic of our approach is that it enables us to easily detect some bad primes, a distinct advantage when using modular methods.
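A toy example (not taken from the paper) of why denominators make certain primes bad for reduction:

```latex
% Toy illustration: reduction modulo p of a principal ideal.
% Clearing denominators in the monic generator gives integer coprime coefficients.
\[
  I \;=\; \Bigl\langle\, x - \tfrac{1}{2} \,\Bigr\rangle
    \;=\; \langle\, 2x - 1 \,\rangle \;\subseteq\; \mathbb{Q}[x].
\]
% For p = 3 the reduction is \langle 2x - 1 \rangle = \langle x - 2 \rangle
% in \mathbb{F}_3[x], so 3 behaves like a good prime.
% For p = 2 the leading coefficient of 2x - 1 vanishes and the image
% degenerates to \langle 1 \rangle: equivalently, the denominator 2 of the
% monic generator is not invertible modulo 2, so p = 2 is a bad prime for I.
```

Detecting that the denominator of a generator is divisible by p is exactly the kind of cheap bad-prime test that matters in modular computations.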


2017 ◽  
Vol 28 (4) ◽  
pp. 494-503 ◽  
Author(s):  
Daniel H. Lee ◽  
Adam K. Anderson

Human eyes convey a remarkable variety of complex social and emotional information. However, it is unknown which physical eye features convey mental states and how that came about. In the current experiments, we tested the hypothesis that the receiver’s perception of mental states is grounded in expressive eye appearance that serves an optical function for the sender. Specifically, opposing features of eye widening versus eye narrowing that regulate sensitivity versus discrimination not only conveyed their associated basic emotions (e.g., fear vs. disgust, respectively) but also conveyed opposing clusters of complex mental states that communicate sensitivity versus discrimination (e.g., awe vs. suspicion). This sensitivity-discrimination dimension accounted for the majority of variance in perceived mental states (61.7%). Further, these eye features remained diagnostic of these complex mental states even in the context of competing information from the lower face. These results demonstrate that how humans read complex mental states may be derived from a basic optical principle of how people see.


2014 ◽  
Vol 1077 ◽  
pp. 246-251
Author(s):  
Bin Yuan ◽  
Tao Jiang ◽  
Hong Zhi Yu

Micro-blog media are growing fast, and short micro-blog texts have become a new type of information carrier. The sentiment orientation and emotion that users express about a topic or event across large numbers of micro-blog posts can provide a basis for business decision-making as well as support for government public-opinion monitoring. In micro-blog emotion classification, the feature information extracted directly influences classification performance. This paper uses emotional sentences, emotion symbols, emotion-word polarity, and other emotional information as classification features, and adopts the NLP&CC Chinese micro-blog sentiment analysis evaluation standard to segment emotions by polarity. On this basis, it proposes a Chinese micro-blog sentiment classification method based on these emotional features. Comparative tests suggest that the method achieves better classification results, and that the stronger the emotional content of the micro-blog text, the better the method performs.
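A minimal sketch of the kind of emotion features described above. The tiny lexicons and the feature set here are invented for demonstration; they stand in for the NLP&CC resources, which the abstract does not enumerate.

```python
# Illustrative emotion-feature extractor for micro-blog text.
# Lexicons below are toy examples, not the paper's actual resources.
POSITIVE_WORDS = {"开心", "喜欢", "赞"}      # "happy", "like", "praise"
NEGATIVE_WORDS = {"难过", "讨厌", "愤怒"}    # "sad", "hate", "angry"
POSITIVE_EMOTICONS = {":)", "[哈哈]"}
NEGATIVE_EMOTICONS = {":(", "[泪]"}

def emotion_features(text):
    """Return a simple feature dict: counts of polarity words and emoticons,
    plus an overall polarity score (positive minus negative counts)."""
    pos_w = sum(text.count(w) for w in POSITIVE_WORDS)
    neg_w = sum(text.count(w) for w in NEGATIVE_WORDS)
    pos_e = sum(text.count(e) for e in POSITIVE_EMOTICONS)
    neg_e = sum(text.count(e) for e in NEGATIVE_EMOTICONS)
    return {"pos_words": pos_w, "neg_words": neg_w,
            "pos_emoticons": pos_e, "neg_emoticons": neg_e,
            "polarity": (pos_w + pos_e) - (neg_w + neg_e)}
```

Feature dicts like this would then be vectorized and fed to any standard classifier.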


2019 ◽  
Author(s):  
Jennifer Sorinas ◽  
Juan C. Fernandez-Troyano ◽  
Mikel Val-Calvo ◽  
Jose Manuel Ferrández ◽  
Eduardo Fernandez

Abstract: The wide range of potential applications of affective BCI (aBCI), not only for patients but also for healthy people, underscores the need for a commonly accepted protocol for real-time EEG-based emotion recognition. Using wavelet packets for spectral feature extraction, chosen to suit the nature of the EEG signal, we have specified some of the main parameters needed to implement robust classification of positive and negative emotion. A sliding window of 12 seconds proved the most appropriate size; from that, a set of 20 target frequency-location variables was proposed as the most relevant features carrying the emotional information. Lastly, QDA and KNN classifiers, with a population rating criterion for stimulus labeling, were suggested as the most suitable approaches for EEG-based emotion recognition. The proposed model reached a mean accuracy of 98% (s.d. 1.4) and 98.96% (s.d. 1.28) in a subject-dependent approach for the QDA and KNN classifiers, respectively. This new model represents a step toward real-time classification. Although results were not conclusive, new insights regarding the subject-independent approximation are discussed.
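The 12-second sliding-window segmentation can be sketched as follows; the hop size `step_s` is an assumption for real-time operation, since the abstract only fixes the window length.

```python
def sliding_windows(samples, fs, win_s=12.0, step_s=1.0):
    """Split a 1-D EEG signal (a list of samples at fs Hz) into overlapping
    fixed-length windows. win_s = 12 s follows the abstract; step_s is an
    assumed hop size. Each window would then go to spectral feature extraction."""
    win = int(win_s * fs)
    step = int(step_s * fs)
    return [samples[i:i + win] for i in range(0, len(samples) - win + 1, step)]
```

Each returned window has exactly `win_s * fs` samples, so downstream feature vectors have a fixed dimensionality.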


2019 ◽  
Author(s):  
Eva Krumhuber ◽  
Dennis Küster ◽  
Shushi Namba ◽  
Datin Shah ◽  
Manuel Calvo

The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Herein, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between humans and machine in a cross-corpora investigation. For this, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of different databases, and then presented to human observers and a machine classifier. Recognition performance by the machine was found to be superior for posed expressions containing prototypical facial patterns, and comparable to humans when classifying emotions from spontaneous displays. In both humans and machine, accuracy rates were generally higher for posed compared to spontaneous stimuli. The findings suggest that automated systems rely on expression prototypicality for emotion classification, and may perform just as well as humans when tested in a cross-corpora context.


Synthese ◽  
2021 ◽  
Author(s):  
Alexandru Baltag ◽  
Soroush Rafiee Rad ◽  
Sonja Smets

Abstract: We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds (or more generally, truth in all the worlds that are plausible enough). We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: (1) learning observable evidence obtained by repeated sampling from the unknown distribution; and (2) learning higher-order information about the distribution. The first changes only the plausibility map (via a ‘plausibilistic’ version of Bayes’ Rule), but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities, without changing their plausibility ordering. We look at stability of beliefs under either of these types of learning, defining two related notions (safe belief and statistical knowledge), as well as a measure of the verisimilitude of a given plausibility model. We prove a number of convergence results, showing how our agent’s beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, (statistical) knowledge of that true probability. Finally, we sketch the contours of a dynamic doxastic logic for statistical learning.
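The two update types can be illustrated on a finite set of candidate coin biases. The numeric plausibility encoding and the update rules below are deliberate simplifications assumed for illustration; they are not the paper's formal definitions.

```python
# Toy sketch: beliefs about an unknown coin bias via a plausibility map.
# Plausibility is encoded as a nonnegative weight (higher = more plausible);
# belief = the set of maximally plausible candidate distributions.
candidates = {0.1: 1.0, 0.5: 1.0, 0.9: 1.0}  # bias -> initial plausibility

def sample_update(plaus, heads, tails):
    """Update type (1): repeated sampling re-ranks candidates by likelihood
    (a 'plausibilistic' Bayes rule) without removing any candidate."""
    return {p: w * (p ** heads) * ((1 - p) ** tails) for p, w in plaus.items()}

def higher_order_update(plaus, admissible):
    """Update type (2): higher-order information rules out candidates,
    shrinking the set while leaving the survivors' ordering untouched."""
    return {p: w for p, w in plaus.items() if admissible(p)}

def belief(plaus):
    """The most plausible candidate biases (truth in the most plausible worlds)."""
    top = max(plaus.values())
    return {p for p, w in plaus.items() if w == top}
```

With enough samples from the true coin, `belief` concentrates on the true bias, mirroring the convergence results sketched in the abstract.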


2015 ◽  
Vol 18 ◽  
Author(s):  
María Verónica Romero-Ferreiro ◽  
Luis Aguado ◽  
Javier Rodriguez-Torresano ◽  
Tomás Palomo ◽  
Roberto Rodriguez-Jimenez

Abstract: Deficits in facial affect recognition have been repeatedly reported in schizophrenia patients. The hypothesis that this deficit is caused by poorly differentiated cognitive representation of facial expressions was tested in this study. To this end, performance of patients with schizophrenia and controls was compared in a new emotion-rating task. This novel approach allowed the participants to rate each facial expression at different times in terms of different emotion labels. Results revealed that patients tended to give higher ratings to emotion labels that did not correspond to the portrayed emotion, especially in the case of negative facial expressions (p < .001, η2 = .131). Although patients and controls gave similar ratings when the emotion label matched the facial expression, patients gave higher ratings on trials with "incorrect" emotion labels (ps < .05). Comparison of patients and controls in a summary index of expressive ambiguity showed that patients perceived angry, fearful and happy faces as more emotionally ambiguous than did the controls (p < .001, η2 = .135). These results are consistent with the idea that the cognitive representation of emotional expressions in schizophrenia is characterized by less clear boundaries and a less close correspondence between facial configurations and emotional states.
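One plausible way to compute a summary ambiguity index from such ratings is sketched below. The formula (mean off-target rating relative to the target rating) is an assumption for illustration; the paper's exact index is not given in the abstract.

```python
def ambiguity_index(ratings, target):
    """Illustrative expressive-ambiguity index: the mean rating given to
    non-target emotion labels, divided by the rating of the target label.
    Higher values indicate less clear category boundaries.
    'ratings' maps emotion label -> mean rating for one facial expression."""
    off_target = [r for label, r in ratings.items() if label != target]
    return (sum(off_target) / len(off_target)) / ratings[target]
```

A fully unambiguous expression (zero off-target ratings) scores 0; an expression rated equally on every label scores 1.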


2016 ◽  
Vol 23 (2) ◽  
pp. 343-358 ◽  
Author(s):  
Chermen Gogichev

The article examines idioms as a means of categorization. Based on a linguistic analysis of the semantic organization of idioms, two patterns of idiomatic categorization are argued for: general categorization and relevant-property-based categorization. The cognitive functions of idioms differ with regard to their role as categorization means; idioms can serve different categorization purposes according to two general cognitive processes, static and dynamic: including an object in a category, or treating the given qualities as grounds for categorization. Moreover, the purpose of categorization was investigated by defining the specificity of the phenomenon and its types. The categorization purpose was conceived as different types of information, e.g., behavioral expectations or models of interaction with the object. A cause-effect relationship between the category and the categorization purpose was claimed.


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Guihua Wen ◽  
Huihui Li ◽  
Jubing Huang ◽  
Danyang Li ◽  
Eryang Xun

Human emotions can now be recognized from speech signals using machine learning methods; however, these methods suffer from lower recognition accuracy in real applications because they lack sufficiently rich representations. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To take full advantage of this, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts low-level features from the input speech signal and uses them to construct many random subspaces. Each random subspace is then fed to a DBN to yield higher-level features, which a classifier maps to an emotion label. All output emotion labels are fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.
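The random-subspace construction and majority-voting fusion can be sketched as below. A nearest-centroid learner stands in for the DBN-plus-classifier pipeline, and all names and parameters are illustrative assumptions, not the paper's configuration.

```python
import random

def train_subspace_ensemble(X, y, n_models=15, subspace_size=3, seed=0):
    """Train one simple model per random feature subspace.
    X: list of feature vectors; y: list of emotion labels."""
    rng = random.Random(seed)
    n_features = len(X[0])
    models = []
    for _ in range(n_models):
        feats = rng.sample(range(n_features), subspace_size)  # random subspace
        centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            centroids[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        models.append((feats, centroids))
    return models

def predict(models, x):
    """Each subspace model votes for a label; majority vote decides."""
    votes = {}
    for feats, centroids in models:
        proj = [x[f] for f in feats]
        label = min(centroids,
                    key=lambda lab: sum((a - b) ** 2
                                        for a, b in zip(proj, centroids[lab])))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

The design point the abstract relies on is that each model sees a different feature subset, so the fused vote is more robust than any single view.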

