Recognition of Facial Expressions of Negative Emotions in Romantic Relationships

2015 ◽  
Vol 40 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Seung Hee Yoo ◽  
Sarah E. Noyes

2011 ◽  
Vol 12 (1) ◽  
pp. 77-77
Author(s):  
Sharpley Hsieh ◽  
Olivier Piguet ◽  
John R. Hodges

Abstract Introduction: Frontotemporal dementia (FTD) is a progressive neurodegenerative brain disease characterised clinically by abnormalities in behaviour, cognition and language. Two subgroups, behavioural-variant FTD (bvFTD) and semantic dementia (SD), also show impaired emotion recognition, particularly for negative emotions. This deficit has been demonstrated using visual stimuli such as facial expressions. Whether recognition of emotions conveyed through other modalities (for example, music) is also impaired has not been investigated. Methods: Patients with bvFTD, SD and Alzheimer's disease (AD), as well as healthy age-matched controls, labelled tunes according to the emotion conveyed (happy, sad, peaceful or scary). In addition, each tune was rated along two orthogonal emotional dimensions: valence (pleasant/unpleasant) and arousal (stimulating/relaxing). Participants also undertook a facial emotion recognition test and other cognitive tests. Integrity of basic music detection (tone, tempo) was also examined. Results: Patient groups were matched for disease severity. Overall, patients did not differ from controls with regard to basic music processing or the recognition of facial expressions. Ratings of valence and arousal were similar across groups. In contrast, SD patients were selectively impaired at recognising music conveying negative emotions (sad and scary). Patients with bvFTD did not differ from controls. Conclusion: Recognition of emotions in music appears to be selectively affected in some FTD subgroups more than others, a disturbance of emotion detection which appears to be modality-specific. This finding suggests a dissociation in the neural networks necessary for the processing of emotions depending on modality.


2013 ◽  
Vol 44 (2) ◽  
pp. 232-238
Author(s):  
Władysław Łosiak ◽  
Joanna Siedlecka

Abstract Deficits in the recognition of facial expressions of emotions are considered an important factor explaining impairments in the social functioning and affective reactions of schizophrenic patients. Many studies have confirmed such deficits, while controversies remain concerning emotion valence and modality. The aim of the study was to explore the process of recognizing facial expressions of emotion in a group of schizophrenic patients by analyzing the roles of emotion valence, modality and the gender of the model. Results from 35 patients and 35 matched controls indicate that, while schizophrenic patients show a general impairment in recognizing facial expressions of both positive and the majority of negative emotions, the deficits differ across particular emotions. Expressions also appeared more ambiguous to the patients, while variables connected with gender were found to be less significant.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and studies which used facial expressions in profile view employed a between-subjects design or children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and in profile views by using a within-subjects experimental design. Method: The sample comprised 132 Italian university students (88 female, Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and in profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded. Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions which rely mostly on the eye regions.
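The accuracy scoring described above (the average of correct responses per emotional expression, computed separately for frontal and profile presentations) can be sketched as follows. The trial layout and field names are illustrative assumptions, not the authors' actual analysis code.

```python
# Sketch: per-emotion recognition accuracy for frontal vs. profile views.
# Each trial record is hypothetical: {"emotion": ..., "view": ..., "correct": 0 or 1}.
from collections import defaultdict

def accuracy_scores(trials):
    """Return {(emotion, view): proportion of correct responses}."""
    totals = defaultdict(lambda: [0, 0])  # (emotion, view) -> [n_correct, n_total]
    for trial in trials:
        key = (trial["emotion"], trial["view"])
        totals[key][0] += trial["correct"]
        totals[key][1] += 1
    return {key: n_correct / n_total
            for key, (n_correct, n_total) in totals.items()}

trials = [
    {"emotion": "fear", "view": "frontal", "correct": 1},
    {"emotion": "fear", "view": "frontal", "correct": 1},
    {"emotion": "fear", "view": "profile", "correct": 0},
    {"emotion": "fear", "view": "profile", "correct": 1},
]
scores = accuracy_scores(trials)
# scores[("fear", "frontal")] == 1.0; scores[("fear", "profile")] == 0.5
```

Comparing the frontal and profile entries of such a score dictionary per emotion corresponds to the within-subjects contrast reported in the study.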


Author(s):  
Yang Gao ◽  
Yincheng Jin ◽  
Seokmin Choi ◽  
Jiyang Li ◽  
Junjie Pan ◽  
...  

Accurate recognition of facial expressions and emotional gestures is promising for understanding the audience's feedback on and engagement with entertainment content. Existing methods are primarily based on cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this end, we propose a novel ubiquitous sensing system based on a commodity microphone array, SonicFace, which provides an accessible, unobtrusive, contact-free, and privacy-preserving solution to monitor the user's emotional expressions continuously without playing audible sound. SonicFace uses a speaker paired with a microphone array to recognize various fine-grained facial expressions and emotional hand gestures from emitted ultrasound and received echoes. In a set of experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures reached around 80%. In addition, extensive system evaluations with distinct configurations and an extended real-life case study demonstrated the robustness and generalizability of the proposed SonicFace system.
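The echo-based sensing idea can be illustrated with a minimal sketch: cross-correlate the received signal with the emitted waveform to obtain an echo-delay profile, then classify profiles with a nearest-centroid rule. All signals, labels, and the classifier here are toy assumptions for illustration; they are not SonicFace's actual pipeline or data.

```python
# Toy sketch of ultrasound-echo sensing: matched filtering plus a
# nearest-centroid classifier over echo-delay profiles.

def cross_correlate(received, emitted):
    """Echo profile: correlation of the received signal with the emitted one,
    one value per candidate echo delay (lag)."""
    n = len(received) - len(emitted) + 1
    return [sum(received[lag + i] * emitted[i] for i in range(len(emitted)))
            for lag in range(n)]

def nearest_centroid(profile, centroids):
    """Return the label whose centroid profile is closest in squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(profile, centroids[label]))

emitted = [1.0, -1.0, 1.0]                  # toy emitted waveform
received = [0.0, 1.0, -1.0, 1.0, 0.0, 0.0]  # echo of it arriving at lag 1
profile = cross_correlate(received, emitted)          # [-2.0, 3.0, -2.0, 1.0]
centroids = {  # hypothetical per-expression reference profiles
    "smile": [0.0, 3.0, -1.0, 1.0],
    "frown": [3.0, 0.0, 1.0, -1.0],
}
print(nearest_centroid(profile, centroids))  # prints "smile"
```

A real system would use chirped ultrasound, per-microphone channels, and a learned classifier, but the structure (emit, correlate, classify the echo profile) is the same.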


2021 ◽  
Author(s):  
Thomas Murray ◽  
Justin O'Brien ◽  
Veena Kumari

The recognition of negative emotions from facial expressions has been shown to decline across the adult lifespan, with some evidence that this decline begins around middle age. While some studies have suggested that ageing may be associated with changes in the neural response to emotional expressions, it is not known whether ageing is associated with changes in the network connectivity involved in processing emotional expressions. In this study, we examined the effect of participant age on whole-brain connectivity to brain regions that have been associated with connectivity during emotion processing: the left and right amygdalae, the medial prefrontal cortex (mPFC), and the right posterior superior temporal sulcus (rpSTS). Healthy participants aged 20-65 viewed facial expressions displaying anger, fear, happiness, and neutral expressions during functional magnetic resonance imaging (fMRI). We found effects of age on connectivity between the left amygdala and voxels in the occipital pole and cerebellum, between the right amygdala and voxels in the frontal pole, and between the rpSTS and voxels in the orbitofrontal cortex, but no effect of age on connectivity with the mPFC. Furthermore, ageing was more strongly associated with a decline in connectivity to the left amygdala and rpSTS for negative expressions than for happy and neutral expressions, consistent with the literature suggesting a specific age-related decline in the recognition of negative emotions. These results add to the literature on ageing and expression recognition by suggesting that changes in underlying functional connectivity might contribute to changes in the recognition of negative facial expressions across the adult lifespan.

