Differentiating emotion-label words and emotion-laden words in emotion conflict: an ERP study

2019 ◽ Vol 237 (9) ◽ pp. 2423-2430
Author(s): Juan Zhang ◽ Chenggang Wu ◽ Zhen Yuan ◽ Yaxuan Meng

2021 ◽ pp. 101257
Author(s): Luna De Bruyne ◽ Pepa Atanasova ◽ Isabelle Augenstein

2021 ◽ Vol 11 (5) ◽ pp. 553
Author(s): Chenggang Wu ◽ Juan Zhang ◽ Zhen Yuan

In order to explore the affective priming effect of emotion-label words and emotion-laden words, the current study used unmasked (Experiment 1) and masked (Experiment 2) priming paradigms, with emotion-label words (e.g., sadness, anger) and emotion-laden words (e.g., death, gift) as primes, and examined how the two kinds of words acted upon the processing of the target words (all emotion-laden words). Participants were instructed to decide the valence of the target words while their electroencephalogram was recorded. The behavioral and event-related potential (ERP) results showed that positive words produced a priming effect, whereas negative words inhibited target word processing (Experiment 1). In Experiment 2, the inhibition effect of negative emotion-label words on emotion word recognition was found in both the behavioral and ERP results, suggesting that the modulation of emotion word type on emotion word processing can be observed even in a masked priming paradigm. The two experiments further support the necessity of defining emotion words from an emotion word type perspective. The implications of these findings are discussed. Specifically, a clear understanding of emotion-label words and emotion-laden words can improve the effectiveness of emotional communication in clinical settings. Theoretically, the emotion word type perspective is still in its infancy and awaits further exploration.


2019 ◽ Vol 699 ◽ pp. 1-7
Author(s): Xia Wang ◽ Chenyu Shangguan ◽ Jiamei Lu

2015 ◽ Vol 18
Author(s): María Verónica Romero-Ferreiro ◽ Luis Aguado ◽ Javier Rodriguez-Torresano ◽ Tomás Palomo ◽ Roberto Rodriguez-Jimenez

Deficits in facial affect recognition have been repeatedly reported in patients with schizophrenia. This study tested the hypothesis that the deficit is caused by a poorly differentiated cognitive representation of facial expressions. To this end, the performance of patients with schizophrenia and controls was compared on a new emotion-rating task. This novel approach allowed participants to rate each facial expression at different times in terms of different emotion labels. Results revealed that patients tended to give higher ratings to emotion labels that did not correspond to the portrayed emotion, especially in the case of negative facial expressions (p < .001, η² = .131). Although patients and controls gave similar ratings when the emotion label matched the facial expression, patients gave higher ratings on trials with "incorrect" emotion labels (ps < .05). Comparison of patients and controls on a summary index of expressive ambiguity showed that patients perceived angry, fearful, and happy faces as more emotionally ambiguous than did the controls (p < .001, η² = .135). These results are consistent with the idea that the cognitive representation of emotional expressions in schizophrenia is characterized by less clear boundaries and a less close correspondence between facial configurations and emotional states.


2017 ◽ Vol 2017 ◽ pp. 1-9
Author(s): Guihua Wen ◽ Huihui Li ◽ Jubing Huang ◽ Danyang Li ◽ Eryang Xun

Human emotions can now be recognized from speech signals using machine learning methods; however, these methods are challenged by low recognition accuracy in real applications because they lack rich representational ability. Deep belief networks (DBNs) can automatically discover multiple levels of representation in speech signals. To make full use of this advantage, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. The method first extracts low-level features from the input speech signal and uses them to construct many random subspaces. Each random subspace is fed to a DBN, which yields higher-level features that a classifier maps to an emotion label. The labels output by all ensemble members are then fused by majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.
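The ensemble recipe maps onto a short sketch: sample a random feature subspace, train one deep model per subspace, and fuse the predicted labels by majority vote. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: scikit-learn's BernoulliRBM stacked with a logistic-regression output stands in for each DBN, features are assumed to be pre-extracted acoustic descriptors, and labels are assumed to be non-negative integers.

```python
# Minimal random-subspace ensemble sketch (hypothetical stand-in for RDBN).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def train_rdbn(X, y, n_members=20, subspace_frac=0.5, seed=0):
    """Train one stacked-RBM classifier per random feature subspace."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    k = max(1, int(subspace_frac * n_features))
    members = []
    for _ in range(n_members):
        idx = rng.choice(n_features, size=k, replace=False)  # one random subspace
        member = Pipeline([
            ("scale", MinMaxScaler()),                       # RBMs expect inputs in [0, 1]
            ("rbm1", BernoulliRBM(n_components=64, random_state=0)),
            ("rbm2", BernoulliRBM(n_components=32, random_state=0)),
            ("clf", LogisticRegression(max_iter=1000)),      # maps features to a label
        ])
        member.fit(X[:, idx], y)
        members.append((idx, member))
    return members

def predict_rdbn(members, X):
    """Fuse member labels by majority vote (labels assumed non-negative ints)."""
    votes = np.stack([m.predict(X[:, idx]) for idx, m in members])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Each member sees only its own feature subset, so the ensemble's diversity comes from the random subspaces rather than from resampling the training examples.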


2015
Author(s): Emma Portch ◽ Jelena Havelka ◽ Charity Brown ◽ Roger Giner-Sorolla

Information about everyday emotional experiences is integrated into internal scripts (e.g., Shaver et al., 1987). Script content provides a context within which to compare and subsequently interpret newly experienced emotional stimuli, such as facial expressions and behaviours. We explore whether this internal context may also be used to interpret emotional words. In particular, we argue that the 'meaning' of emotional verbs may be strongly context-dependent (e.g., Schacht & Sommer, 2009). Harnessing previous context-based methods, we define verb meaning by the degree of association between the behaviours to which the verbs refer and discrete emotional states (e.g., 'fear') within emotional scripts (Stevenson, Mikels & James, 2007). We used a self-generation method to derive a set of verbs that participants associated with six universal emotional states (Study 1; see the full list in Appendix A). Emotion labels acted as script anchors. For each verb, the degree of emotionality and discrete association were measured by the number of participants who generated that word. As expected, a different modal exemplar was generated for each discrete emotion. In Study 2 we used a rating task to assess the stability of the relationship between modal, or typical, verbs and the emotion label to which they had been generated. Verbs and labels were embedded in a sentence, and participants were invited to reflect on their emotional attributions in everyday life to rate the association ('If you are feeling sad, how likely would you be to act in the following way?', e.g., 'cry'). Findings suggest that typical relationships were robust. Participants always gave higher ratings to typical vs. atypical verb-label pairings, even when (a) the rating direction was manipulated (the label or verb appeared first in the sentence), and (b) the typical behaviours were to be performed by themselves or by others ('If someone is sad, how likely are they to act in the following way?', e.g., 'cry'). Our findings suggest that emotion scripts create verb meaning and therefore provide a context within which to interpret emotional words. We provide a set of emotion verbs that are robustly associated with discrete emotional labels/states. This resource may be used by a variety of researchers, including those interested in the categorical processing of emotional words and language-mediated facial mimicry.


Author(s): Andre Telfer

Studies involving emotion often use animal models and currently rely on manual labelling by researchers. This human-driven labelling approach leads to a number of challenges, such as long analysis times, imprecise results, observer drift, and varying correlation between observers. These problems impact reproducibility and have contributed to our lack of understanding of fundamental mechanistic questions, such as how emotions arise from neuronal circuits. The recent success of machine learning models on similar problems shows that they can help mitigate these challenges while meeting or exceeding human accuracy.

We developed a classifier pipeline that takes in videos and produces an emotion label. The pipeline extracts body-part positions from each frame using a pose estimator and feeds them into an artificial neural network (ANN) classifier built from stacked long short-term memory (LSTM) layers. The data was collected by treating nine rats with lipopolysaccharide (LPS) injections (10 mg/kg). First, rats were recorded for 10 minutes under control conditions, with no manipulation and no observed symptoms of stress or malaise. A week later, the rats were injected with LPS and filmed for 10 minutes, two hours post-injection.

The pipeline correctly labelled 78% of the 125,040 video segments from 8 test videos. When combined with a vote-based system, this led to 7 of the 8 test videos being classified correctly, the same accuracy attained by a human expert from the lab. The test videos had varying environments and used rats that were different from those in the training videos, providing evidence of a degree of robustness in the model. Future work will focus on expanding the test data and incorporating models for 3D pose estimation and behavioral classification.
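The segment-level classifier can be sketched briefly; the shapes, layer sizes, and class count below are illustrative assumptions, not the author's architecture. Each segment is a sequence of per-frame keypoint coordinates from a pose estimator, a stack of LSTM layers consumes the sequence, a softmax head emits the emotion label, and the video-level label comes from a majority vote over segment predictions.

```python
# Minimal Keras sketch of a stacked-LSTM segment classifier (hypothetical sizes).
import numpy as np
import tensorflow as tf

def build_segment_classifier(frames=60, features=16, classes=2):
    """Stacked-LSTM classifier over per-frame pose features."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(frames, features)),   # flattened (x, y) keypoints per frame
        tf.keras.layers.LSTM(64, return_sequences=True),   # first layer of the LSTM stack
        tf.keras.layers.LSTM(32),                          # second layer summarizes the sequence
        tf.keras.layers.Dense(classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def label_video(model, segments):
    """Majority vote over per-segment predictions to label the whole video."""
    votes = model.predict(segments, verbose=0).argmax(axis=1)
    return np.bincount(votes).argmax()
```

Voting over many short segments is what lets a classifier with 78% segment accuracy still label 7 of 8 whole videos correctly: per-segment errors largely cancel out as long as they are not systematic.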

