Attention Dynamics During Emotion Recognition by Deaf and Hearing Individuals

Author(s): Izabela Krejtz, Krzysztof Krejtz, Katarzyna Wisiecka, Marta Abramczyk, Michał Olszanowski, ...

Abstract: The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
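
The abstract does not give the formula the authors used, but ambient–focal dynamics in eye tracking are commonly quantified with the coefficient K proposed by Krejtz and colleagues, which contrasts the standardized duration of each fixation with the standardized amplitude of the saccade that follows it. A minimal Python sketch of that idea (the array names and example numbers are illustrative, not taken from the study):

```python
import numpy as np

def coefficient_k_series(fix_durations, saccade_amplitudes):
    """Per-fixation ambient-focal coefficient K_i (after Krejtz et al., 2016).

    K_i = z(d_i) - z(a_{i+1}), where d_i is the duration of fixation i
    and a_{i+1} the amplitude of the saccade that follows it; z-scores
    are taken over the whole trial. K_i > 0 suggests focal processing
    (long fixation, short outgoing saccade); K_i < 0 suggests ambient
    scanning (short fixation, long outgoing saccade).
    """
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(saccade_amplitudes, dtype=float)
    return (d - d.mean()) / d.std() - (a - a.mean()) / a.std()

# Mean K over the final fixations of a trial, i.e., the "last stages of
# emotion recognition" mentioned above (numbers are made up).
durations = [180, 220, 240, 200, 600, 650, 700]   # ms
amplitudes = [8.0, 7.5, 6.9, 7.2, 1.0, 0.8, 1.2]  # degrees of visual angle
k = coefficient_k_series(durations, amplitudes)
print(k[-3:].mean())  # positive -> focal viewing late in the trial
```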

2021, Vol 12
Author(s): Xiaoxiao Li

In the natural environment, facial and bodily expressions influence each other. Previous research has shown that bodily expressions significantly influence the perception of facial expressions. However, little is known about the cognitive processing of facial and bodily emotional expressions and its temporal characteristics. This study therefore presented facial and bodily expressions, both separately and together, to examine the electrophysiological mechanisms of emotion recognition using event-related potentials (ERPs). Participants assessed the emotions of facial and bodily expressions that varied by valence (positive/negative) and consistency (matching/non-matching emotions). The results showed that bodily expressions induced a more positive P1 component with a shortened latency, whereas facial expressions triggered a more negative N170 with a prolonged latency. Of the later components, N2 was more sensitive to inconsistent emotional information, whereas P3 was more sensitive to consistent emotional information. The cognitive processing of facial and bodily expressions showed distinctive integration features, with the interaction occurring at an early stage (N170). These results highlight the importance of both facial and bodily expressions in the cognitive processing underlying emotion recognition.
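
As a rough illustration of how amplitude and latency effects of the kind reported for P1 and N170 are usually quantified, the sketch below locates the peak of an averaged ERP waveform within a conventional search window. The window bounds, sampling rate, and simulated waveform are assumptions for the example, not values from the study:

```python
import numpy as np

def peak_measure(erp, times, t_min, t_max, polarity=+1):
    """Peak amplitude and latency of an averaged ERP waveform.

    erp: 1-D array of voltages (microvolts) averaged over trials at
         one electrode.
    times: matching 1-D array of time points (s, 0 = stimulus onset).
    t_min, t_max: search window for the component of interest.
    polarity: +1 for positive components (P1, P3), -1 for negative
              components (N170, N2).
    """
    erp = np.asarray(erp, dtype=float)
    times = np.asarray(times, dtype=float)
    window = (times >= t_min) & (times <= t_max)
    idx = np.argmax(polarity * erp[window])
    return erp[window][idx], times[window][idx]  # amplitude, latency

# Illustrative use: an N170 searched 130-200 ms post-stimulus
# (simulated Gaussian deflection standing in for real data).
fs = 500.0                                  # sampling rate, Hz
times = np.arange(-0.1, 0.5, 1.0 / fs)
erp = -4.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.015 ** 2))
amp, lat = peak_measure(erp, times, 0.13, 0.20, polarity=-1)
print(f"N170: {amp:.1f} uV at {lat * 1000:.0f} ms")
```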


2019
Author(s): Jacob Israelashvili

Previous research has found that individuals vary greatly in emotion differentiation, that is, the extent to which they distinguish between different emotions when reporting on their own feelings. Building on previous work showing that emotion differentiation is associated with individual differences in intrapersonal functioning, the current study asks whether emotion differentiation is also related to interpersonal skills. Specifically, we examined whether individuals who are high in emotion differentiation would be more accurate in recognizing others' emotional expressions. We report two studies in which we used an established paradigm tapping negative emotion differentiation and several emotion recognition tasks. In Study 1 (N = 363), we found that individuals high in emotion differentiation were more accurate in recognizing others' emotional facial expressions. Study 2 (N = 217) replicated this finding using emotion recognition tasks with varying amounts of emotional information. These findings suggest that the knowledge we use to understand our own emotional experience also helps us understand the emotions of others.
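
Emotion differentiation in this literature is typically scored from repeated self-reports: a participant rates several negative emotions across many situations, and the consistency of those ratings (often an intraclass correlation or an averaged inter-emotion correlation) is inverted so that higher scores mean the emotions are used more distinctly. A hedged sketch of the averaged-correlation variant (not necessarily the exact index used in these studies):

```python
import numpy as np

def emotion_differentiation(ratings):
    """Negative emotion differentiation for one participant.

    ratings: 2-D array of shape (n_situations, n_emotions), e.g., how
        sad/angry/afraid the person reported feeling in each situation.

    Returns the Fisher-z-averaged pairwise correlation between emotion
    ratings across situations, sign-flipped so that HIGHER values mean
    MORE differentiation. Many studies use an intraclass correlation
    instead; this is one simple variant.
    """
    r = np.corrcoef(np.asarray(ratings, dtype=float).T)
    iu = np.triu_indices_from(r, k=1)               # unique emotion pairs
    z = np.arctanh(np.clip(r[iu], -0.999, 0.999))   # Fisher's z
    return -np.tanh(z.mean())

rng = np.random.default_rng(0)
base = rng.uniform(1, 5, size=(20, 1))
lumper = base + rng.normal(0, 0.1, size=(20, 3))    # near-identical ratings
splitter = rng.uniform(1, 5, size=(20, 3))          # independent ratings
print(emotion_differentiation(lumper))    # close to -1 (low differentiation)
print(emotion_differentiation(splitter))  # near 0 (higher differentiation)
```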


2017, Vol 29 (5), pp. 1749-1761
Author(s): Johanna Bick, Rhiannon Luyster, Nathan A. Fox, Charles H. Zeanah, Charles A. Nelson

Abstract: We examined facial emotion recognition in 12-year-olds in a longitudinally followed sample of children with and without exposure to early life psychosocial deprivation (institutional care). Half of the institutionally reared children were randomized into foster care homes during the first years of life. Facial emotion recognition was examined in a behavioral task using morphed images. This same task had been administered when children were 8 years old. Neutral facial expressions were morphed with happy, sad, angry, and fearful emotional facial expressions, and children were asked to identify the emotion of each face, which varied in intensity. Consistent with our previous report, we show that some areas of emotion processing, involving the recognition of happy and fearful faces, are affected by early deprivation, whereas other areas, involving the recognition of sad and angry faces, appear to be unaffected. We also show that early intervention can have a lasting positive impact, normalizing developmental trajectories of processing negative emotions (fear) into the late childhood/preadolescent period.
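
At its core, the morphing used to create such stimuli is a weighted blend between a neutral and an emotional photograph of the same face (production-grade morphing software also warps facial geometry, which is omitted here). A toy sketch of generating an intensity continuum, assuming aligned grayscale images stored as NumPy arrays:

```python
import numpy as np

def morph_continuum(neutral, emotional, intensities=(0.25, 0.5, 0.75, 1.0)):
    """Linear pixel blends between aligned neutral and emotional faces.

    Returns one image per intensity level: 0.0 would be fully neutral,
    1.0 the full emotional expression. Pure cross-fading is a
    simplification of true landmark-based morphing.
    """
    return [(1.0 - a) * neutral + a * emotional for a in intensities]

# Dummy 64x64 "images" standing in for aligned face photographs.
neutral = np.zeros((64, 64))
happy = np.ones((64, 64))
steps = morph_continuum(neutral, happy)
print([float(img.mean()) for img in steps])  # 0.25, 0.5, 0.75, 1.0
```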


2014, Vol 2014, pp. 1-8
Author(s): Kris Evers, Inneke Kerkhof, Jean Steyaert, Ilse Noens, Johan Wagemans

Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.
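
Constructing such hybrid stimuli amounts to splicing the upper half of one expression onto the lower half of another. A minimal sketch under the assumption of aligned, equal-sized image arrays (not the authors' actual stimulus pipeline, which would also smooth the seam):

```python
import numpy as np

def hybrid_face(top_source, bottom_source):
    """Splice the upper half of one aligned face image onto the lower
    half of another, e.g., an emotional eyes region over a neutral
    mouth region, or vice versa."""
    assert top_source.shape == bottom_source.shape
    midline = top_source.shape[0] // 2
    out = bottom_source.copy()
    out[:midline] = top_source[:midline]
    return out

# Angry upper half + neutral lower half (dummy arrays as stand-ins).
angry = np.full((128, 128), 0.8)
neutral = np.full((128, 128), 0.2)
stimulus = hybrid_face(angry, neutral)
print(stimulus[:64].mean(), stimulus[64:].mean())  # ~0.8, ~0.2
```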


2013, Vol 16
Author(s): Esther Lázaro, Imanol Amayra, Juan Francisco López-Paz, Amaia Jometón, Natalia Martín, ...

Abstract: The assessment of facial expression is an important aspect of a clinical neurological examination, both as an indicator of a mood disorder and as a sign of neurological damage. Although studies have been conducted on certain psychosocial aspects of myasthenia, such as quality of life and anxiety, and on neuropsychological aspects such as memory, no studies have directly assessed facial emotion recognition accuracy. The aim of this study was to assess the facial emotion recognition accuracy (fear, surprise, sadness, happiness, anger, and disgust), empathy, and reaction time of patients with myasthenia. Thirty-five patients with myasthenia and 36 healthy controls, matched with respect to age, gender, and education level, were tested on their ability to differentiate emotional facial expressions using the computer-based program Feel Test. The data showed that myasthenic patients scored significantly lower (p < 0.05) than healthy controls on the total Feel score and on fear and surprise, and had longer reaction times. The findings suggest that the ability to recognize facial affect may be reduced in individuals with myasthenia.


2021, Vol 12
Author(s): Shu Zhang, Xinge Liu, Xuan Yang, Yezhi Shu, Niqi Liu, ...

Cartoon faces are widely used in social media, animation production, and social robots because of their ability to convey emotional information in an appealing form. Despite these popular applications, the mechanisms of recognizing emotional expressions in cartoon faces are still unclear. Therefore, three experiments were conducted in this study to systematically explore the recognition process for emotional cartoon expressions (happy, sad, and neutral) and to examine the influence of key facial features (mouth, eyes, and eyebrows) on emotion recognition. Across the experiments, three presentation conditions were employed: (1) a full face; (2) an individual feature only (with the two other features concealed); and (3) one feature concealed with the two other features presented. The cartoon face images used in this study were converted from a set of real faces acted by Chinese posers, and the observers were Chinese. The results show that happy cartoon expressions were recognized more accurately than neutral and sad expressions, consistent with the happiness recognition advantage revealed in studies of real faces. Compared with real facial expressions, sad cartoon expressions were perceived as sadder, and happy cartoon expressions as less happy, regardless of whether the full face or single facial features were viewed. For cartoon faces, the mouth was shown to be a feature that is both sufficient and necessary for the recognition of happiness, and the eyebrows both sufficient and necessary for the recognition of sadness. This study helps to clarify the perceptual mechanism underlying emotion recognition in cartoon faces and sheds some light on directions for future research on intelligent human-computer interaction.
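
The three presentation conditions can be mimicked with simple occlusion masks over feature regions. A toy sketch, in which the bounding boxes are invented placeholders rather than the coordinates used in the study:

```python
import numpy as np

FEATURES = {            # illustrative (row, column) slices, not real data
    "eyebrows": (slice(10, 25), slice(15, 115)),
    "eyes":     (slice(30, 55), slice(15, 115)),
    "mouth":    (slice(85, 115), slice(35, 95)),
}

def conceal(face, features_to_hide, fill=0.5):
    """Cover the listed facial features with uniform patches, as in the
    feature-concealment conditions described above."""
    out = np.asarray(face, dtype=float).copy()
    for name in features_to_hide:
        rows, cols = FEATURES[name]
        out[rows, cols] = fill
    return out

face = np.random.default_rng(2).uniform(size=(128, 128))
mouth_only = conceal(face, ["eyebrows", "eyes"])  # condition (2)
mouth_hidden = conceal(face, ["mouth"])           # condition (3)
```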


2020
Author(s): Connor Tom Keating, Sophie L Sowden, Dagmar S Fraser, Jennifer L Cook

Abstract: A burgeoning literature suggests that alexithymia, and not autism, is responsible for the difficulties with static emotion recognition that are documented in the autistic population. Here we investigate whether alexithymia can also account for difficulties with dynamic facial expressions. Autistic and control adults (N = 60), matched on age, gender, non-verbal reasoning ability and alexithymia, completed an emotion recognition task, which employed dynamic point light displays of emotional facial expressions that varied in speed and spatial exaggeration. The ASD group exhibited significantly lower recognition accuracy for angry, but not happy or sad, expressions with normal speed and spatial exaggeration. The level of autistic, and not alexithymic, traits was a significant predictor of accuracy for angry expressions with normal speed and spatial exaggeration.
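
The speed and spatial-exaggeration manipulations can be thought of as scaling a motion trajectory in space and resampling it in time. A hedged sketch of that idea, with an assumed array layout of (frames, points, x/y) that may differ from the authors' stimulus-generation code:

```python
import numpy as np

def manipulate_pld(frames, spatial_gain=1.0, speed_gain=1.0):
    """Manipulate a point-light display of a moving face.

    frames: array (n_frames, n_points, 2) of 2-D marker coordinates.
    spatial_gain: scales each marker's displacement from the first
        (neutral) frame; > 1 exaggerates the movement spatially.
    speed_gain: > 1 plays the same trajectory faster by resampling
        the time axis to fewer frames at a fixed frame rate.
    """
    frames = np.asarray(frames, dtype=float)
    neutral = frames[0]
    spatial = neutral + spatial_gain * (frames - neutral)
    n_out = max(2, int(round(len(frames) / speed_gain)))
    t_old = np.linspace(0.0, 1.0, len(frames))
    t_new = np.linspace(0.0, 1.0, n_out)
    flat = spatial.reshape(len(frames), -1)
    out = np.stack([np.interp(t_new, t_old, col) for col in flat.T], axis=1)
    return out.reshape(n_out, *frames.shape[1:])

# Double the spatial excursion, play at 1.5x speed (random dummy motion).
frames = np.cumsum(np.random.default_rng(1).normal(0, 0.01, (120, 30, 2)), axis=0)
print(manipulate_pld(frames, spatial_gain=2.0, speed_gain=1.5).shape)  # (80, 30, 2)
```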


2021, Vol 12
Author(s): Agnes Bohne, Dag Nordahl, Åsne A. W. Lindahl, Pål Ulvenes, Catharina E. A. Wang, ...

Processing of emotional facial expressions is of great importance in interpersonal relationships. Aberrant engagement with facial expressions, particularly engagement with sad faces, loss of engagement with happy faces, and enhanced memory of sadness, has been found in depression. Since most studies have used adult faces, we examined here whether such biases also occur in the processing of infant faces in those with depression or depressive symptoms. In Study 1, we recruited 25 inpatient women with major depression and 25 matched controls. In Study 2, we extracted a sample of expectant parents from the NorBaby study, of whom 29 reported elevated levels of depressive symptoms and 29 were matched controls. In both studies, we assessed attentional bias with a dot-probe task using happy, sad and neutral infant faces, and facial memory bias with a recognition task using happy, sad, angry, afraid, surprised, disgusted and neutral infant and adult faces. Participants also completed the Ruminative Responses Scale and the Beck Depression Inventory-II. In Study 1, we found no group difference in either attention to or memory accuracy for emotional infant faces; neither attention nor recognition was associated with rumination. In Study 2, we found that the group with depressive symptoms disengaged more slowly than healthy controls from sad infant faces, and this was related to rumination. The results emphasize the importance of emotionally self-relevant material when examining cognitive processing in depression. Together, these studies demonstrate that a mood-congruent attentional bias to infant faces is present in expectant parents with depressive symptoms, but not in inpatients with Major Depressive Disorder who do not have young children.
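
For readers unfamiliar with the dot-probe task, attentional bias indices are reaction-time differences: faster responses to probes replacing an emotional face suggest vigilance, while slower responses relative to a neutral baseline suggest delayed disengagement. A schematic computation of one common scoring variant (index definitions vary across labs, and this is not necessarily the one used here):

```python
import numpy as np

def dot_probe_indices(rt_congruent, rt_incongruent, rt_baseline):
    """Common dot-probe reaction-time indices (all RTs in ms).

    rt_congruent: probe appears where the emotional (e.g., sad) face was.
    rt_incongruent: probe appears where the paired neutral face was.
    rt_baseline: trials with two neutral faces.

    vigilance > 0     -> attention drawn toward the emotional face.
    disengagement > 0 -> slower to shift attention away from it.
    """
    c, i, b = (np.mean(x) for x in (rt_congruent, rt_incongruent, rt_baseline))
    return {"vigilance": i - c, "disengagement": i - b}

# Illustrative data: slowed incongruent responses, the pattern described
# for participants with depressive symptoms viewing sad infant faces.
print(dot_probe_indices([510, 495, 505], [560, 575, 550], [515, 520, 510]))
```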


2019
Author(s): Mircea Zloteanu

People hold strong beliefs regarding the role of emotional cues in detecting deception. While research on the diagnostic value of such cues has been mixed, their influence on human veracity judgments should not be ignored. Here, we address the relationship between emotional information and veracity judgments. In Study 1, the role of emotion recognition in the process of detecting naturalistic lies was investigated. Decoders' accuracy was compared based on differences in trait empathy and their ability to recognize microexpressions and subtle expressions. Accuracy was found to be unrelated to facial cue recognition but negatively related to empathy. In Study 2, we manipulated decoders' emotion recognition ability and the type of lies they saw: experiential or affective. Decoders received either emotion recognition training, bogus training, or no training. In all scenarios, training was not found to affect accuracy. Experiential lies were easier to detect than affective lies, but affective emotional lies were easier to detect than affective unemotional lies. The findings suggest that emotion recognition has a complex relationship with veracity judgments.


Author(s): Nikolaus Kleindienst, Sophie Hauschild, Lisa Liebke, Janine Thome, Katja Bertsch, ...

Abstract
Background: Impairments in the domain of interpersonal functioning, such as feelings of loneliness and fear of abandonment, have been associated with a negative bias during the processing of social cues in Borderline Personality Disorder (BPD). Since these symptoms show low rates of remission and high rates of recurrence and are relatively resistant to treatment, the present study investigated whether a negative bias during social-cognitive processing exists in BPD even after symptomatic remission. We focused on facial emotion recognition, since it is one of the basal social-cognitive processes required for successful social interactions and building relationships.
Methods: Ninety-eight female participants (46 with symptom-remitted BPD [r-BPD], 52 healthy controls [HC]) rated the intensity of anger and happiness in ambiguous (anger/happiness blends) and unambiguous (emotion/neutral blends) emotional facial expressions. Additionally, participants assessed the confidence they experienced in their own judgments.
Results: R-BPD participants assessed ambiguous expressions as less happy and more angry when the faces displayed predominantly happiness. Confidence in these judgments did not differ between groups, but confidence in judging happiness in predominantly happy faces was lower in BPD patients with a higher level of BPD psychopathology.
Conclusions: Evaluating social cues that signal the willingness to affiliate is characterized by a negative bias that seems to be a trait-like feature of social cognition in BPD. In contrast, confidence in judging positive social signals seems to be a state-like feature of emotion recognition in BPD that improves as the level of acute BPD symptoms attenuates.

