The intensity of emotion: Altered motor simulation impairs processing of facial expressions in congenital facial palsy

2020 ◽  
Author(s):  
Arianna Schiano Lomoriello ◽  
Giulio Caperna ◽  
Elisa De Stefani ◽  
Pier Francesco Ferrari ◽  
Paola Sessa

According to models of sensorimotor simulation, we recognize others' emotions by subtly mimicking their expressions, which allows us to feel the corresponding emotion via facial feedback. In this context, facial mimicry, which requires the implicit activation of the motor programs that produce a specific expression, is a crucial phenomenon in emotion recognition, including sensitivity to expression intensity. Consequently, difficulty in producing facial expressions should affect emotional understanding. In the present investigation, we recruited a sample (N = 11) of patients with Moebius syndrome (MBS), which is characterized by congenital facial paralysis, and a control group (N = 11) of healthy participants. By leveraging the unique condition of MBS, we aimed to investigate the role of facial mimicry and sensorimotor simulation in creating a precise embodied concept of each emotion. The two groups underwent a sensitive facial emotion recognition task, optimally tuned to test sensitivity to emotion intensity and emotion discriminability in terms of confusability with other emotions. Our study provides evidence of a deficit in emotion recognition in MBS patients, expressed as significantly lower intensity ratings for three specific emotion categories: sadness, fear, and disgust. Moreover, we observed an impairment in detecting these emotions, which were more readily confused with the neutral expression and with the secondary emotion in blended expressions. These findings support embodied theories, which hypothesize that sensorimotor systems are involved in the detection and discrimination of emotions.
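The abstract quantifies performance in terms of intensity ratings and confusability; the article's analysis code is not reproduced here, but a minimal sketch of how confusability with the neutral expression can be read off a confusion matrix (all counts below are invented for illustration, not the study's data) might look as follows:

```python
import numpy as np

# Illustrative confusion matrix: rows = displayed emotion, columns = response.
emotions = ["neutral", "sadness", "fear", "disgust"]
confusions = np.array([
    [40,  3,  4,  3],   # neutral shown
    [12, 30,  5,  3],   # sadness shown
    [10,  4, 28,  8],   # fear shown
    [ 9,  3,  7, 31],   # disgust shown
])

# Row-normalise counts into response probabilities.
p_response = confusions / confusions.sum(axis=1, keepdims=True)

# Confusability of an emotion with "neutral" = P(respond neutral | emotion shown).
for i, emo in enumerate(emotions[1:], start=1):
    print(f"P(neutral | {emo} shown) = {p_response[i, 0]:.2f}")
```

A deficit of the kind reported above would show up as elevated off-diagonal probabilities in the rows for sadness, fear, and disgust.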


2019 ◽  
Vol 25 (05) ◽  
pp. 453-461 ◽  
Author(s):  
Katherine Osborne-Crowley ◽  
Sophie C. Andrews ◽  
Izelle Labuschagne ◽  
Akshay Nair ◽  
Rachael Scahill ◽  
...  

Objectives: Previous research has demonstrated an association between emotion recognition and apathy in several neurological conditions involving fronto-striatal pathology, including Parkinson's disease and brain injury. In line with these findings, we aimed to determine whether apathetic participants with early Huntington's disease (HD) were more impaired on an emotion recognition task than non-apathetic participants and healthy controls. Methods: We included 43 participants from the TRACK-HD study who reported apathy on the Problem Behaviours Assessment – short version (PBA-S), 67 participants who reported no apathy, and 107 controls matched for age, sex, and level of education. During their baseline TRACK-HD visit, participants completed a battery of cognitive and psychological tests, including an emotion recognition task and the Hospital Anxiety and Depression Scale (HADS), and were assessed on the PBA-S. Results: Compared with the non-apathetic group and the control group, the apathetic group was impaired on the recognition of happy facial expressions, after controlling for depression symptomatology on the HADS and general disease progression (Unified Huntington's Disease Rating Scale total motor score). This was despite no difference between the apathetic and non-apathetic groups in overall cognitive functioning, as assessed by a cognitive composite score. Conclusions: Impairment in the recognition of happy expressions may be part of the clinical picture of apathy in HD. While shared reliance on frontostriatal pathways may broadly explain the associations between emotion recognition and apathy found across several patient groups, further work is needed to determine what relationships exist between recognition of specific emotions, distinct subtypes of apathy, and the underlying neuropathology. (JINS, 2019, 25, 453–461)
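The abstract does not specify the statistical implementation; one conventional way of "controlling for" HADS depression scores and UHDRS motor scores when comparing groups is an ANCOVA-style linear model. The sketch below is only illustrative: the data are synthetic and all column names are our assumptions, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; column names are assumptions, not the study's.
rng = np.random.default_rng(42)
n = 90
df = pd.DataFrame({
    "group": rng.choice(["apathetic", "non_apathetic", "control"], size=n),
    "hads_dep": rng.integers(0, 15, size=n),      # HADS depression score
    "uhdrs_motor": rng.integers(0, 40, size=n),   # UHDRS total motor score
})
df["happy_acc"] = 0.8 - 0.01 * df["hads_dep"] + rng.normal(0, 0.05, size=n)

# Group effect on happy-expression accuracy, adjusting for the two covariates.
model = smf.ols("happy_acc ~ C(group) + hads_dep + uhdrs_motor", data=df).fit()
print(model.summary())
```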


2021 ◽  
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Isabelle Hupont ◽  
Giovanna Varni ◽  
Mohamed Chetouani ◽  
...  

In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people's spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents' facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants' facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results reveal that the agent perceived as the least uncanny, and the most anthropomorphic, likable, and co-present, was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry, transforming a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help to unravel the relationship between facial mimicry, liking, and rapport.
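The paper scored mimicry from automated AU-intensity time series; its exact pipeline is not shown here, but one common operationalization of spontaneous mimicry, sketched below with invented names and toy data, is the peak lagged correlation between the agent's and the participant's AU intensities:

```python
import numpy as np

def mimicry_index(agent_au: np.ndarray, participant_au: np.ndarray,
                  max_lag: int = 15) -> float:
    """Illustrative mimicry score: the peak Pearson correlation between the
    agent's and the participant's AU-intensity time series, letting the
    participant lag the agent by 0..max_lag frames (mimicry follows the
    model, so only non-negative lags are searched)."""
    a = (agent_au - agent_au.mean()) / agent_au.std()
    p = (participant_au - participant_au.mean()) / participant_au.std()
    corrs = []
    for lag in range(max_lag + 1):
        n = len(a) - lag
        corrs.append(float(np.corrcoef(a[:n], p[lag:lag + n])[0, 1]))
    return max(corrs)

# Toy demo: a participant echoing the agent's smile (AU12) ~10 frames late.
agent = np.sin(np.linspace(0, 2 * np.pi, 200))
agent += 0.1 * np.random.default_rng(0).normal(size=200)
participant = np.roll(agent, 10)
print(round(mimicry_index(agent, participant), 2))  # close to 1.0
```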


2011 ◽  
Vol 198 (4) ◽  
pp. 302-308 ◽  
Author(s):  
Ian M. Anderson ◽  
Clare Shippen ◽  
Gabriella Juhasz ◽  
Diana Chase ◽  
Emma Thomas ◽  
...  

Background: Negative biases in emotional processing are well recognised in people who are currently depressed but are less well described in those with a history of depression, in whom such biases may contribute to vulnerability to relapse. Aims: To compare accuracy, discrimination and bias in face emotion recognition in those with current and remitted depression. Method: The sample comprised a control group (n = 101), a currently depressed group (n = 30) and a remitted depression group (n = 99). Participants provided valid data on a computerised face emotion recognition task administered after standardised assessment of diagnosis and mood symptoms. Results: In the control group, women were more accurate in recognising emotions than men owing to greater discrimination. Among participants with depression, those in remission correctly identified more emotions than controls owing to increased response bias, whereas those currently depressed recognised fewer emotions owing to decreased discrimination. These effects were most marked for anger, fear and sadness, but there was no significant emotion × group interaction, and a similar pattern tended to be seen for happiness, although not for surprise or disgust. These differences were confined to participants who were antidepressant-free; those taking antidepressants performed similarly to the control group. Conclusions: Abnormalities in face emotion recognition differ between people with current depression and those in remission. Reduced discrimination in depressed participants may reflect withdrawal from the emotions of others, whereas the increased bias in those with a history of depression could contribute to vulnerability to relapse. The normal face emotion recognition seen in those taking medication may relate to the known effects of antidepressants on emotional processing and could contribute to their ability to protect against depressive relapse.
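The distinction the abstract draws between discrimination and response bias maps onto standard signal-detection measures. The study's own scoring is not reproduced here; the following is a minimal sketch, with illustrative counts, of how d′ (discrimination) and criterion c (bias) are conventionally derived from hit and false-alarm counts:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d-prime (discrimination) and criterion c (response bias), with a
    log-linear correction (add 0.5 per cell) so rates of exactly 0 or 1
    do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)  # higher = better discrimination
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # < 0 = liberal bias
    return d_prime, criterion

# Illustrative counts for one emotion (e.g. sadness):
# 18 hits, 12 misses, 6 false alarms, 24 correct rejections.
print(sdt_measures(18, 12, 6, 24))
```

On this scheme, the remitted group's pattern would appear as a shifted criterion with preserved d′, and the currently depressed group's pattern as a reduced d′.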


2013 ◽  
Vol 8 (1) ◽  
pp. 75-93 ◽  
Author(s):  
Roy P.C. Kessels ◽  
Barbara Montagne ◽  
Angelique W. Hendriks ◽  
David I. Perrett ◽  
Edward H.F. de Haan

BJPsych Open ◽  
2021 ◽  
Vol 7 (2) ◽  
Author(s):  
Maarten Otter ◽  
Peter M. L. Crins ◽  
Bea C. M. Campforts ◽  
Constance T. R. M. Stumpel ◽  
Thérèse A. M. J. van Amelsvoort ◽  
...  

Background Triple X syndrome (TXS) is caused by aneuploidy of the X chromosome and is associated with impaired social functioning in children; however, its effect on social functioning and emotion recognition in adults is poorly understood. Aims The aim of this study was to investigate social functioning and emotion recognition in adults with TXS. Method This cross-sectional cohort study was designed to compare social functioning and emotion recognition between adults with TXS (n = 34) and an age-matched control group (n = 31). Social functioning was assessed with the Adult Behavior Checklist and the Social Responsiveness Scale for Adults. Emotion recognition was assessed with the Emotion Recognition Task in the Cambridge Neuropsychological Test Automated Battery. Differences were analysed with the Mann-Whitney U-test. Results Compared with controls, women with TXS scored higher on the Adult Behavior Checklist, including the Withdrawn scale (P < 0.001, effect size 0.4) and Thought Problems scale (P < 0.001, effect size 0.4), and higher on the Social Responsiveness Scale for Adults, indicating impaired social functioning (P < 0.001, effect size 0.5). In addition, women with TXS performed worse on the Emotion Recognition Task, particularly with respect to recognising sadness (P < 0.005, effect size 0.4), fear (P < 0.01, effect size 0.4) and disgust (P < 0.02, effect size 0.3). Conclusions Our findings indicate that adults with TXS have a higher prevalence of impairments in social functioning and emotion recognition. These results highlight the relevance of sex chromosome aneuploidy as a potential model for studying disorders characterised by social impairment, such as autism spectrum disorder, particularly among women.
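As the patient data are not public, the following sketch only illustrates the reported analysis in outline: a two-sided Mann-Whitney U-test on synthetic stand-in scores, with the effect size r = |Z|/√N derived from the normal approximation of U (the abstract does not state which effect-size convention was used):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mwu_with_r(group_a, group_b):
    """Two-sided Mann-Whitney U test plus the effect size r = |Z| / sqrt(N),
    with Z taken from the normal approximation of U."""
    n1, n2 = len(group_a), len(group_b)
    u, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    r = abs(z) / np.sqrt(n1 + n2)
    return u, p, r

# Synthetic stand-in scores (matching the study's group sizes: 34 vs. 31).
rng = np.random.default_rng(7)
txs = rng.normal(70, 10, 34)        # e.g. Social Responsiveness Scale totals
controls = rng.normal(60, 10, 31)
u, p, r = mwu_with_r(txs, controls)
print(f"U = {u:.0f}, p = {p:.4f}, r = {r:.2f}")
```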


2021 ◽  
Vol 12 ◽  
Author(s):  
Lillian Döllinger ◽  
Petri Laukka ◽  
Lennart Björn Högman ◽  
Tanja Bänziger ◽  
Irena Makower ◽  
...  

Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs: one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task assessing recognition of patients' emotional cues. Post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that multimodal training was significantly more effective in improving multimodal ERA than micro expression training or the control training, and that micro expression training was significantly more effective in improving micro expression ERA than the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the measure of recognizing patients' emotion cues. There were no transfer effects of the training programs: participants improved significantly only on the specific facet of ERA that they had trained on. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
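The Aligned Rank Transform is usually run with dedicated tooling (e.g. the ARTool package), and this study's design was mixed (within- and between-subject factors), so its actual alignment is more involved; the between-subjects sketch below illustrates only the align-then-rank idea for an interaction:

```python
import numpy as np
import pandas as pd

def art_interaction_ranks(df, dv, a, b):
    """Aligned Rank Transform for the A x B interaction in a between-subjects
    design: subtract everything except the interaction estimate, then rank
    the aligned responses. A standard ANOVA is then run on these ranks
    (not shown here)."""
    grand = df[dv].mean()
    a_mean = df.groupby(a)[dv].transform("mean")
    b_mean = df.groupby(b)[dv].transform("mean")
    cell_mean = df.groupby([a, b])[dv].transform("mean")
    interaction_est = cell_mean - a_mean - b_mean + grand
    aligned = (df[dv] - cell_mean) + interaction_est  # residual + effect of interest
    return aligned.rank()

# Toy usage: ERA score by training group (3 levels) x time (pre/post).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["multimodal", "micro", "control"], 40),
    "time": np.tile(np.repeat(["pre", "post"], 20), 3),
    "era": rng.uniform(0.4, 0.9, 120),
})
df["era_art"] = art_interaction_ranks(df, "era", "group", "time")
```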


Author(s):  
Sherri C. Widen

At all ages, children interpret and respond to the emotions of others. Traditionally, it has been assumed that children's emotion knowledge is based on an early understanding of facial expressions in terms of specific, discrete emotions. More recent evidence suggests that this assumption is incorrect. As described by the broad-to-differentiated hypothesis, children's initial emotion concepts are broad and valence based. Gradually, children differentiate within these initial concepts by linking the different components of an emotion together (e.g., the cause to the consequence) until their concepts resemble adults' emotion concepts. Contrary to traditional assumptions, facial expressions are neither the starting point for most emotion concepts nor the strongest cue to emotions. Instead, just like any other component of an emotion concept, facial expressions must be differentiated from the valence-based concepts and linked to the other components of the specific emotion concept.


2020 ◽  
Author(s):  
Connor Tom Keating ◽  
Sophie L Sowden ◽  
Dagmar S Fraser ◽  
Jennifer L Cook

A burgeoning literature suggests that alexithymia, and not autism, is responsible for the difficulties with static emotion recognition that are documented in the autistic population. Here we investigate whether alexithymia can also account for difficulties with dynamic facial expressions. Autistic and control adults (N = 60), matched on age, gender, non-verbal reasoning ability and alexithymia, completed an emotion recognition task that employed dynamic point-light displays of emotional facial expressions varying in speed and spatial exaggeration. The ASD group exhibited significantly lower recognition accuracy for angry, but not happy or sad, expressions with normal speed and spatial exaggeration. The level of autistic, and not alexithymic, traits was a significant predictor of accuracy for angry expressions with normal speed and spatial exaggeration.
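The speed and spatial-exaggeration manipulations can be pictured concretely if each point-light display is an array of 2-D landmark positions over time; the sketch below uses our own function names and toy data, not the authors' stimulus code:

```python
import numpy as np

def exaggerate_space(frames: np.ndarray, neutral: np.ndarray,
                     factor: float) -> np.ndarray:
    """Scale each point's displacement from the neutral face: factor > 1
    exaggerates the expression, factor < 1 attenuates it.
    frames: (T, n_points, 2); neutral: (n_points, 2)."""
    return neutral + factor * (frames - neutral)

def change_speed(frames: np.ndarray, speed: float) -> np.ndarray:
    """Resample the sequence in time by linear interpolation:
    speed > 1 plays faster (fewer frames), speed < 1 slower."""
    T = frames.shape[0]
    new_t = np.linspace(0, T - 1, max(2, int(round(T / speed))))
    lo = np.floor(new_t).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (new_t - lo)[:, None, None]
    return (1 - w) * frames[lo] + w * frames[hi]

# Toy display: 60 frames of 20 landmarks drifting away from a neutral layout.
rng = np.random.default_rng(3)
neutral = rng.uniform(-1, 1, (20, 2))
drift = rng.normal(0, 0.1, (20, 2))
frames = neutral + np.linspace(0, 1, 60)[:, None, None] * drift
fast_exaggerated = change_speed(exaggerate_space(frames, neutral, 1.5), 2.0)
```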

