emotion labels
Recently Published Documents


TOTAL DOCUMENTS: 56 (FIVE YEARS: 34)

H-INDEX: 11 (FIVE YEARS: 4)

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Sabyasachi Kamila ◽  
Mohammad Hasanuzzaman ◽  
Asif Ekbal ◽  
Pushpak Bhattacharyya

Temporal orientation is an important aspect of human cognition that reflects how an individual emphasizes the past, present, and future. Theoretical research in psychology shows that one’s emotional state can influence one’s temporal orientation. We hypothesize that measuring human temporal orientation can benefit from concurrent learning of emotion. To test this hypothesis, we propose a deep learning-based multi-task framework that concurrently learns a unified model for temporal orientation (the primary task) and emotion analysis (the secondary task) from tweets. The framework takes users’ tweets as input and produces three temporal orientation labels (past, present, or future) and four emotion labels (joy, sadness, anger, or fear), along with intensity values, as outputs. The classified tweets are then grouped for each user to obtain user-level temporal orientation and emotion. Finally, we investigate the associations between users’ temporal orientation and their emotional state. Our analysis reveals that joy and anger are correlated with future orientation, while sadness and fear are correlated with past orientation.
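As a rough illustration of the shared-encoder, two-head setup such a framework implies, the sketch below defines a small multi-task model in PyTorch. The BiLSTM encoder, layer sizes, and all names are illustrative assumptions, not the authors’ published architecture.

```python
# A minimal sketch of a shared-encoder multi-task model: one encoder,
# one head per task. Sizes and the tokenizer are assumptions.
import torch
import torch.nn as nn

class MultiTaskTweetModel(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared encoder, trained jointly by both tasks.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Primary task: past / present / future.
        self.temporal_head = nn.Linear(2 * hidden_dim, 3)
        # Secondary task: joy / sadness / anger / fear, plus intensity.
        self.emotion_head = nn.Linear(2 * hidden_dim, 4)
        self.intensity_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)
        _, (h, _) = self.encoder(embedded)
        # Concatenate the final hidden states of both directions.
        pooled = torch.cat([h[-2], h[-1]], dim=-1)
        return (self.temporal_head(pooled),
                self.emotion_head(pooled),
                torch.sigmoid(self.intensity_head(pooled)))
```

In a multi-task setup like this, the two classification losses (and an intensity regression loss) would be summed so that gradients from the secondary task shape the shared encoder.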


2021 ◽  
Vol 12 ◽  
Author(s):  
Anthony Stahelski ◽  
Amber Anderson ◽  
Nicholas Browitt ◽  
Mary Radeke

Facial inferencing research began with an inadvertent confound. The initial work by Paul Ekman and Wallace Friesen identified the six now-classic facial expressions by the emotion labels chosen by most participants: anger, disgust, fear, happiness, sadness, and surprise. These labels have been used in most of the published facial inference research over the last 50 years. However, not all participants in these studies labeled the expressions with the same emotions. For example, very early research by Silvan Tomkins and Robert McCarter found that some participants labeled scowling faces as disgusted rather than angry. Given that the same facial expression can be paired with different emotions, our research focused on the following questions: Do participants make different personality, temperament, and social trait inferences when assigning different emotion labels to the same facial expression? And which is the stronger cause of trait inferences, the facial expressions themselves or the emotion labels given to the expressions? Using an online survey format, participants were presented with older and younger female and male smiling or scowling faces selected from a validated facial database. While viewing each face, participants responded to questions regarding the social traits of attractiveness, facial maturity, honesty, and threat potential; the temperament traits of positiveness, dominance, and excitability; and the Saucier Mini-Markers Big Five personality trait adjective scale. Participants made positive inferences about smiling faces and negative inferences about scowling faces on all dependent variables. Data from participants who labeled the scowling faces as angry were compared to data from those who labeled the same faces as disgusted. Results indicate that those labeling the scowling faces as angry perceived them significantly more negatively on 11 of the 12 dependent variables than those who labeled the same faces as disgusted. The inferences made by the “disgust” labelers were not positive, just less negative. The results indicate that different emotion labels applied to scowling faces can either intensify or reduce the negativity of inferences, but the facial expressions themselves determine whether inferences are negative or positive.


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2847
Author(s):  
Dorota Kamińska ◽  
Kadir Aktas ◽  
Davit Rizhinashvili ◽  
Danila Kuklyanov ◽  
Abdallah Hussein Sham ◽  
...  

Facial emotion recognition is an inherently complex problem due to individual diversity in facial features and racial and cultural differences. Moreover, facial expressions typically reflect a mixture of emotional states, which can be expressed as compound emotions. Compound facial emotion recognition makes the problem even more difficult because the discrimination between dominant and complementary emotions is usually weak. To address compound emotion recognition, we have created a database of 31,250 facial images, showing different emotions, from 115 subjects with an almost uniform gender distribution. In addition, we organized a competition based on the proposed dataset, held at the FG 2020 workshop. This paper analyzes the winner’s approach: a two-stage recognition method (first stage, coarse recognition; second stage, fine recognition) that enhances the classification of symmetrical emotion labels.
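The coarse-to-fine idea is easy to see in miniature. The sketch below shows only the control flow of a two-stage predictor; the coarse groups, compound labels, and scores are invented placeholders, not the competition winner’s actual classifiers.

```python
# A schematic two-stage (coarse, then fine) compound-emotion predictor.
# All label sets and scores here are hypothetical.
import numpy as np

COARSE = ["happy", "sad", "angry"]
FINE = {
    "happy": ["happily_surprised", "happily_disgusted"],
    "sad":   ["sadly_fearful", "sadly_angry"],
    "angry": ["angrily_surprised", "angrily_disgusted"],
}

def two_stage_predict(coarse_scores, fine_scores_by_group):
    """Stage 1 picks the dominant (coarse) emotion; stage 2 picks the
    compound label within that group from a dedicated fine classifier."""
    dominant = COARSE[int(np.argmax(coarse_scores))]
    fine_scores = fine_scores_by_group[dominant]
    return FINE[dominant][int(np.argmax(fine_scores))]

# Example with made-up classifier outputs:
print(two_stage_predict(
    coarse_scores=[0.7, 0.2, 0.1],
    fine_scores_by_group={"happy": [0.4, 0.6],
                          "sad": [0.5, 0.5],
                          "angry": [0.5, 0.5]},
))  # -> "happily_disgusted"
```

Splitting the decision this way lets the fine classifiers specialize within a coarse group, where the dominant emotion is already fixed and only the complementary emotion must be discriminated.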


2021 ◽  
Author(s):  
Tanya Sharma ◽  
Manoj Diwakar ◽  
Prabhishek Singh ◽  
Sumita Lamba ◽  
Pramod Kumar ◽  
...  

2021 ◽  
Author(s):  
Laura Israel ◽  
Philipp Paukner ◽  
Lena Schiestel ◽  
Klaus Diepold ◽  
Felix D. Schönbrodt

The Open Library for Affective Videos (OpenLAV) is a new video database for experimental emotion induction. The 188 videos (mean duration: 40 s; range: 12–71 s) have a CC-BY license. Ratings for valence, arousal, several appraisals, and emotion labels were collected from 434 US-American participants in an online study (on average, 70 ratings per video), along with the raters’ personality traits (Big Five personality dimensions and several motive dispositions). OpenLAV can induce a large variety of emotions, but the videos vary in how uniformly they induce them. Based on several variability metrics, we recommend videos for the most uniform induction of different emotions. Moreover, the predictive power of personality traits on emotion ratings was analyzed using a machine-learning approach. In contrast to previous research, no effects of personality on emotional experience were found.
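One plausible way to operationalize “uniformity of emotion induction” is to rank videos by the spread of their ratings. The sketch below does this with pandas; the column names, toy data, and the choice of standard deviation as the metric are assumptions, since the paper compares several variability metrics.

```python
# Rank videos by rating spread: lower spread = more uniform induction.
# Toy data; real ratings would come from the OpenLAV study files.
import pandas as pd

ratings = pd.DataFrame({
    "video_id": ["v01", "v01", "v01", "v02", "v02", "v02"],
    "valence":  [7.0, 6.5, 7.2, 2.0, 8.0, 5.0],
})

uniformity = (ratings.groupby("video_id")["valence"]
                      .agg(["mean", "std"])
                      .sort_values("std"))
print(uniformity)  # videos with the smallest std induce valence most uniformly
```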


2021 ◽  
Author(s):  
Deboshree Bose ◽  
Vidhyasaharan Sethu ◽  
Eliathamby Ambikairajah
Keyword(s):  

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xueqiang Zeng ◽  
Qifan Chen ◽  
Sufen Chen ◽  
Jiali Zuo

Emotion Distribution Learning (EDL) is a recently proposed multi-emotion analysis paradigm that identifies basic emotions with different degrees of expression in a sentence. Unlike traditional methods, EDL quantitatively models the expression degree of each emotion on a given instance as an emotion distribution. However, the emotion labels in most existing emotion datasets are crisp. To make traditional emotion datasets usable for EDL, label enhancement converts logical emotion labels into emotion distributions. This paper proposes a novel label enhancement method, Emotion Wheel and Lexicon-based emotion distribution Label Enhancement (EWLLE), which utilizes the linguistic emotional information of affective words and the psychological knowledge of Plutchik’s emotion wheel. EWLLE generates separate discrete Gaussian distributions for the emotion label of the sentence and the emotion labels of its sentiment words, based on psychological emotion distance, and combines the two types of information into a unified emotion distribution by superposing the distributions. Extensive experiments on four commonly used text emotion datasets show that EWLLE has a distinct advantage over existing EDL label enhancement methods on the emotion classification task.
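The core of EWLLE can be sketched compactly: each crisp label is expanded into a discrete Gaussian over Plutchik-wheel distance, and the sentence-level and word-level distributions are superposed. In the Python sketch below, the value of sigma and the mixing weight alpha are illustrative assumptions, not the paper’s tuned parameters.

```python
# A simplified EWLLE-style label enhancement: crisp label -> discrete
# Gaussian over Plutchik's emotion wheel, then superposition.
import numpy as np

WHEEL = ["joy", "trust", "fear", "surprise",
         "sadness", "disgust", "anger", "anticipation"]

def wheel_distance(a, b):
    """Shortest number of steps between two emotions around the wheel."""
    i, j = WHEEL.index(a), WHEEL.index(b)
    d = abs(i - j)
    return min(d, len(WHEEL) - d)

def discrete_gaussian(label, sigma=1.0):
    """Discrete Gaussian over wheel distance, normalized to sum to 1."""
    dist = np.array([wheel_distance(label, e) for e in WHEEL])
    weights = np.exp(-dist**2 / (2 * sigma**2))
    return weights / weights.sum()

def ewlle_distribution(sentence_label, word_labels, alpha=0.6):
    """Superpose the sentence-label Gaussian with the averaged
    word-label Gaussians, then renormalize."""
    combined = discrete_gaussian(sentence_label)
    if word_labels:
        words = np.mean([discrete_gaussian(w) for w in word_labels], axis=0)
        combined = alpha * combined + (1 - alpha) * words
    return combined / combined.sum()

print(ewlle_distribution("joy", ["trust", "anticipation"]))
```

The resulting vector is a valid emotion distribution, so a crisp single-label dataset can be fed directly to any EDL learner.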


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1579 ◽  
Author(s):  
Kyoung Ju Noh ◽  
Chi Yoon Jeong ◽  
Jiyoun Lim ◽  
Seungeun Chung ◽  
Gague Kim ◽  
...  

Speech emotion recognition (SER) is a natural method of recognizing individual emotions in everyday life. Deploying SER models in real-world applications requires overcoming some key challenges, such as the lack of datasets tagged with emotion labels and the weak generalization of SER models to unseen target domains. This study proposes a multi-path and group-loss-based network (MPGLN) for SER that supports multi-domain adaptation. The proposed model includes a bidirectional long short-term memory-based temporal feature generator and a feature extractor transferred from a pre-trained VGG-like audio classification model (VGGish), and it learns simultaneously from multiple losses according to the association of emotion labels in the discrete and dimensional models. To evaluate MPGLN SER on multi-cultural domain datasets, the Korean Emotional Speech Database (KESD), comprising KESDy18 and KESDy19, is constructed, and the English-language Interactive Emotional Dyadic Motion Capture database (IEMOCAP) is used. The evaluations of multi-domain adaptation and domain generalization showed F1-score improvements of 3.7% and 3.5%, respectively, over a baseline SER model that uses a temporal feature generator. We show that MPGLN SER efficiently supports multi-domain adaptation and reinforces model generalization.
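Structurally, the model combines a recurrent path over frame-level acoustic features with a transferred-embedding path, trained under more than one loss. The PyTorch sketch below captures that two-path shape; the dimensions, the single fusion layer, and the arousal head are simplifying assumptions rather than the published MPGLN architecture.

```python
# A structural sketch of a two-path SER model: a BiLSTM temporal path plus
# a precomputed VGGish-style embedding path, fused for multiple outputs.
import torch
import torch.nn as nn

class TwoPathSER(nn.Module):
    def __init__(self, frame_dim=40, vggish_dim=128, hidden=128, n_emotions=4):
        super().__init__()
        # Path 1: temporal feature generator over frame-level features.
        self.temporal = nn.LSTM(frame_dim, hidden, batch_first=True,
                                bidirectional=True)
        # Path 2 is assumed precomputed: a VGGish-like utterance embedding.
        fused_dim = 2 * hidden + vggish_dim
        self.emotion_head = nn.Linear(fused_dim, n_emotions)  # discrete labels
        self.arousal_head = nn.Linear(fused_dim, 1)           # dimensional target

    def forward(self, frames, vggish_embedding):
        _, (h, _) = self.temporal(frames)
        fused = torch.cat([h[-2], h[-1], vggish_embedding], dim=-1)
        return self.emotion_head(fused), self.arousal_head(fused)

# Training would sum losses over both label types, e.g.:
# loss = ce(emotion_logits, emotion_ids) + lam * mse(arousal_pred, arousal)
```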


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Zhenrong Deng ◽  
Hongquan Lin ◽  
Wenming Huang ◽  
Rushi Lan ◽  
Xiaonan Luo

An excellent dialogue system needs not only to generate rich, diverse, and logical responses but also to meet users’ needs for emotional communication. Despite much work, however, neither problem has been fully solved. In this paper, we propose a model based on a conditional variational autoencoder and a dual emotion framework (CVAE-DE) to generate emotional responses. In our model, the latent variables of the conditional variational autoencoder promote the diversity of the conversation. A dual emotion framework controls the explicit emotion of the response and prevents emotion drift, in which the emotion of the response is unrelated to that of the input sentence. A multi-class emotion classifier based on the Bidirectional Encoder Representations from Transformers (BERT) model is employed to obtain emotion labels, which improves the accuracy of emotion recognition and emotion expression. Extensive experiments show that our model not only generates rich and diverse responses but is also emotionally coherent and controllable.
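The diversity mechanism rests on standard CVAE machinery: a latent variable sampled with the reparameterization trick and regularized by a KL term. A minimal sketch of those two pieces is shown below, with illustrative names and the encoder and decoder elided.

```python
# Minimal CVAE building blocks: latent sampling and the KL regularizer.
# The surrounding encoder/decoder networks are omitted.
import torch

def sample_latent(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    so sampling stays differentiable with respect to mu and logvar."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps

def kl_divergence(mu, logvar):
    """KL(q(z|x,c) || N(0, I)): pulls the posterior toward the prior."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
```

At inference time, drawing different z values for the same input is what yields varied responses, while the emotion framework conditions each decode on an explicit emotion label.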

