Emotion Recognition of Facial Expressions Presented in Profile

2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and the studies that used profile views relied on between-subjects designs or on children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and profile views using a within-subjects experimental design.

Method: The sample comprised 132 Italian university students (88 female; Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed as the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded.

Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than profile views of the same emotions, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. Thus, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
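
The accuracy scores described in the Method reduce to a simple per-emotion average of correct responses. Below is a minimal sketch of that computation, assuming a hypothetical trial-level CSV; the file and column names are illustrative, not from the study:

```python
import pandas as pd

# Hypothetical trial-level data: one row per stimulus presentation.
# Assumed columns: participant, emotion, view ("frontal"/"profile"),
# correct (0/1), rt_ms.
trials = pd.read_csv("kdef_trials.csv")

# Emotion-specific accuracy per participant and view: the average of
# correct responses for each emotional expression.
accuracy = (
    trials
    .groupby(["participant", "emotion", "view"])["correct"]
    .mean()
    .unstack("view")  # columns: frontal, profile
    .rename(columns={"frontal": "acc_frontal", "profile": "acc_profile"})
)

# Within-subjects contrast: frontal minus profile accuracy per emotion.
accuracy["frontal_advantage"] = accuracy["acc_frontal"] - accuracy["acc_profile"]
print(accuracy.groupby("emotion")["frontal_advantage"].describe())
```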

2020 ◽  
Author(s):  
Nazire Duran ◽  
Anthony P. Atkinson

Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 3) and when briefly presented at the mouth (Experiment 2). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but they are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260814
Author(s):  
Nazire Duran ◽  
Anthony P. Atkinson

Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but they are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
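
The fixation-accuracy relations reported here are across-participant correlations between dwell time and recognition performance. A minimal sketch of such an analysis, assuming a hypothetical per-participant summary table with illustrative column names:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-participant summary: total fixation duration in the
# mouth region (ms) and recognition accuracy for anger and disgust.
df = pd.read_csv("fixation_summary.csv")

# Does dwelling longer on the mouth go with better anger/disgust
# recognition? Spearman's rho is robust to skewed duration data.
for emotion in ("anger_accuracy", "disgust_accuracy"):
    rho, p = spearmanr(df["mouth_fixation_ms"], df[emotion])
    print(f"{emotion}: rho = {rho:.2f}, p = {p:.3f}")
```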


Author(s):  
Fernando Marmolejo-Ramos ◽  
Aiko Murata ◽  
Kyoshiro Sasaki ◽  
Yuki Yamada ◽  
Ayumi Ikeda ◽  
...  

In this experiment, we replicated the effect of muscle engagement on perception such that the recognition of another’s facial expressions was biased by the observer’s facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multilab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the “pen-in-the-teeth” condition, participants tended to lower their threshold of perception of happy expressions in facial stimuli compared to the “no-pen” condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli such that participants lowered their threshold to perceive happy walkers in the pen-in-the-teeth condition compared to the no-pen condition. This pattern of results was also found in a second experiment in which the no-pen condition was replaced by a situation in which participants held a pen in their lips (“pen-in-lips” condition). These results suggested that facial muscular activity alters the recognition of not only facial expressions but also bodily expressions.


2020 ◽  
Author(s):  
Fernando Marmolejo-Ramos ◽  
Aiko Murata ◽  
Kyoshiro Sasaki ◽  
Yuki Yamada ◽  
Ayumi Ikeda ◽  
...  

In this research, we replicated the effect of muscle engagement on perception such that the recognition of another’s facial expressions was biased by the observer’s facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multi-lab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the ‘pen-in-the-teeth’ condition, participants tended to lower their threshold of perception of ‘happy’ expressions in facial stimuli compared to the ‘no-pen’ condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli such that participants lowered their threshold to perceive ‘happy’ walkers in the ‘pen-in-the-teeth’ condition compared to the ‘no-pen’ condition. This pattern of results was also found in a second experiment in which the ‘no-pen’ condition was replaced by a situation in which participants held a pen in their lips (‘pen-in-lips’ condition). These results suggest that facial muscular activity alters the recognition not only of facial expressions but also of bodily expressions.
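
The ‘threshold’ at issue is the point on a morph continuum at which a face (or walker) starts to be judged happy; a common way to estimate it is to fit a logistic psychometric function and read off the 50% point. A minimal sketch with made-up response proportions, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """Psychometric function: P("happy") as a function of morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Illustrative proportions of "happy" responses across an 11-step
# sad-to-happy morph continuum (0 = clearly sad, 1 = clearly happy).
morph = np.linspace(0.0, 1.0, 11)
p_teeth = np.array([.02, .05, .10, .22, .45, .68, .85, .93, .97, .99, .99])
p_nopen = np.array([.01, .03, .06, .12, .28, .50, .72, .88, .95, .98, .99])

# The fitted threshold is the morph level at which "happy" responses
# reach 50%; a lower value means happiness is perceived earlier.
popt_teeth, _ = curve_fit(logistic, morph, p_teeth, p0=[0.5, 10.0])
popt_nopen, _ = curve_fit(logistic, morph, p_nopen, p0=[0.5, 10.0])
print(f"pen-in-teeth threshold: {popt_teeth[0]:.2f}")
print(f"no-pen threshold:       {popt_nopen[0]:.2f}")
```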


2019 ◽  
Author(s):  
Adi Lausen ◽  
Christina Broering ◽  
Lars Penke ◽  
Annekathrin Schacht

Successful emotion recognition is a key component of our socio-emotional communication skills. However, little is known about the factors impacting males’ accuracy in emotion recognition tasks. This pre-registered study examined potential candidates, focusing on the modality of stimulus presentation, emotion category, and individual hormone levels. We obtained accuracy and reaction time scores from 312 males who categorized voice, face, and voice-face stimuli for nonverbal emotional content. Results showed that recognition accuracy was significantly higher in the audio-visual than in the auditory or visual modality. While no significant association was found for testosterone or cortisol alone, their interaction had a significant, albeit small, effect on recognition accuracy and reaction time. Our results establish that audio-visually congruent stimuli enhance recognition accuracy and provide novel empirical support by showing that the interaction of testosterone and cortisol modulates, to some extent, males’ accuracy and response times in emotion recognition tasks.
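
A hormone-by-hormone interaction of this kind is typically tested with a moderated regression in which the product of the two (centred) predictors carries the interaction. A minimal sketch, assuming a hypothetical per-participant data file with illustrative names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant data. Assumed columns: accuracy
# (proportion correct), rt (mean reaction time, ms), and testosterone
# and cortisol assay concentrations.
df = pd.read_csv("hormones_emotion.csv")

# Centre the predictors so main effects remain interpretable in the
# presence of the interaction term.
for h in ("testosterone", "cortisol"):
    df[h + "_c"] = df[h] - df[h].mean()

# Testosterone x cortisol interaction on recognition accuracy; the same
# formula with rt as outcome would test the reaction-time effect.
model = smf.ols("accuracy ~ testosterone_c * cortisol_c", data=df).fit()
print(model.summary())
```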


2019 ◽  
Vol 29 (10) ◽  
pp. 1441-1451 ◽  
Author(s):  
Melina Nicole Kyranides ◽  
Kostas A. Fanti ◽  
Maria Petridou ◽  
Eva R. Kimonis

Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people’s eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; Mage = 19.95 years, SD = 1.01; 50% female) with primary vs. secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. Accuracy and the number of fixations on areas of interest (forehead, eyes, and mouth) while viewing six dynamic emotions were assessed. A visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those in the low-anxious primary-CU group showed fewer overall fixations to fearful and painful facial expressions compared to those in the high-anxious secondary-CU group. This difference was not specific to a region of the face (i.e., eyes or mouth). Findings point to the importance of investigating both accuracy and eye-gaze fixations, since the primary and secondary groups were differentiated only in the way they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.
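
Counting fixations per area of interest reduces to classifying fixation coordinates against region bounding boxes. A minimal sketch with hypothetical AOI coordinates; the study's actual region definitions are not reproduced here:

```python
import pandas as pd

# Hypothetical AOI bounding boxes in stimulus pixels (x0, y0, x1, y1).
AOIS = {
    "forehead": (300, 80, 500, 160),
    "eyes": (300, 160, 500, 240),
    "mouth": (340, 330, 460, 400),
}

def count_fixations_per_aoi(fixations: pd.DataFrame) -> pd.Series:
    """Count fixations whose (x, y) coordinates fall inside each AOI."""
    counts = {}
    for name, (x0, y0, x1, y1) in AOIS.items():
        inside = fixations["x"].between(x0, x1) & fixations["y"].between(y0, y1)
        counts[name] = int(inside.sum())
    return pd.Series(counts, name="n_fixations")

# Example with made-up fixation coordinates for one trial:
trial = pd.DataFrame({"x": [410, 320, 390, 600], "y": [200, 190, 360, 50]})
print(count_fixations_per_aoi(trial))  # forehead 0, eyes 2, mouth 1
```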


2011 ◽  
Vol 12 (1) ◽  
pp. 77-77
Author(s):  
Sharpley Hsieh ◽  
Olivier Piguet ◽  
John R. Hodges

Introduction: Frontotemporal dementia (FTD) is a progressive neurodegenerative brain disease characterised clinically by abnormalities in behaviour, cognition, and language. Two subgroups, behavioural-variant FTD (bvFTD) and semantic dementia (SD), also show impaired emotion recognition, particularly for negative emotions. This deficit has been demonstrated using visual stimuli such as facial expressions. Whether recognition of emotions conveyed through other modalities, for example music, is also impaired has not been investigated.

Methods: Patients with bvFTD, SD, and Alzheimer's disease (AD), as well as healthy age-matched controls, labelled tunes according to the emotion conveyed (happy, sad, peaceful, or scary). In addition, each tune was rated along two orthogonal emotional dimensions: valence (pleasant/unpleasant) and arousal (stimulating/relaxing). Participants also undertook a facial emotion recognition test and other cognitive tests. The integrity of basic music detection (tone, tempo) was also examined.

Results: Patient groups were matched for disease severity. Overall, patients did not differ from controls with regard to basic music processing or the recognition of facial expressions. Ratings of valence and arousal were similar across groups. In contrast, SD patients were selectively impaired at recognising music conveying negative emotions (sad and scary). Patients with bvFTD did not differ from controls.

Conclusion: Recognition of emotions in music appears to be selectively affected in some FTD subgroups more than others, a disturbance of emotion detection which appears to be modality specific. This finding suggests a dissociation in the neural networks necessary for the processing of emotions depending on modality.


2013 ◽  
Vol 16 ◽  
Author(s):  
Esther Lázaro ◽  
Imanol Amayra ◽  
Juan Francisco López-Paz ◽  
Amaia Jometón ◽  
Natalia Martín ◽  
...  

The assessment of facial expression is an important aspect of a clinical neurological examination, both as an indicator of a mood disorder and as a sign of neurological damage. To date, although studies have been conducted on certain psychosocial aspects of myasthenia, such as quality of life and anxiety, and on neuropsychological aspects such as memory, no studies have directly assessed facial emotion recognition accuracy. The aim of this study was to assess the facial emotion recognition accuracy (fear, surprise, sadness, happiness, anger, and disgust), empathy, and reaction time of patients with myasthenia. Thirty-five patients with myasthenia and 36 healthy controls, matched with respect to age, gender, and education level, were tested for their ability to differentiate emotional facial expressions using the computer-based program Feel Test. The data showed that myasthenic patients scored significantly lower (p < 0.05) than healthy controls on the total Feel score and on fear and surprise recognition, and showed longer reaction times. The findings suggest that the ability to recognize facial affect may be reduced in individuals with myasthenia.
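
The reported patient-vs-control contrasts are independent-samples comparisons per measure. A minimal sketch, assuming a hypothetical per-participant score file with illustrative names:

```python
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical per-participant Feel Test scores: one row per
# participant, a 'group' column ("myasthenia"/"control"), the total
# score, per-emotion accuracies, and mean reaction time.
df = pd.read_csv("feel_test_scores.csv")
patients = df[df["group"] == "myasthenia"]
controls = df[df["group"] == "control"]

# Independent-samples comparison per measure (alpha = 0.05).
for measure in ("total_feel", "fear", "surprise", "reaction_time"):
    t, p = ttest_ind(patients[measure], controls[measure])
    print(f"{measure}: t = {t:.2f}, p = {p:.3f}")
```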


2013 ◽  
Vol 113 (1) ◽  
pp. 199-216 ◽  
Author(s):  
Marcella L. Woud ◽  
Eni S. Becker ◽  
Wolf-Gero Lange ◽  
Mike Rinck

A growing body of evidence shows that the prolonged execution of approach movements towards stimuli and avoidance movements away from them affects their evaluation. However, there has been no systematic investigation of such training effects. Therefore, the present study compared approach-avoidance training effects on various valenced representations of neutral (Experiment 1, N = 85), angry (Experiment 2, N = 87), and smiling (Experiment 3, N = 89) facial expressions. The face stimuli were shown on a computer screen and, by means of a joystick, participants pulled half of the faces closer (a positive approach movement) and pushed the other half away (a negative avoidance movement). Only implicit evaluations of neutral expressions were affected by the training procedure. The boundary conditions of such approach-avoidance training effects are discussed.


2011 ◽  
Vol 268-270 ◽  
pp. 471-475
Author(s):  
Sungmo Jung ◽  
Seoksoo Kim

Many 3D films use facial expression recognition technologies. To use the existing technologies, a large number of markers must be attached to the face, a camera is fixed in front of the face, and the movements of the markers are calculated. However, the markers capture only the changes in the regions where they are attached, which makes realistic recognition of facial expressions difficult. Therefore, this study extracted a preliminary eye region from 320×240 images by defining specific location values for the eyes, and then selected the final eye region from within the preliminary region. This study suggests an improved method of detecting an eye region that reduces errors arising from noise.
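
The two-stage idea, a coarse region from fixed location values followed by refinement within it, can be sketched as follows. The abstract does not give the authors' location values or selection rule, so the proportions below and the Haar-cascade refinement are illustrative assumptions, not the published method:

```python
import cv2

# Load a 320x240 face image, as the method assumes.
frame = cv2.imread("face_320x240.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None and frame.shape == (240, 320), "expects 320x240 input"

# Step 1: preliminary eye region from assumed fixed face proportions
# (roughly the upper-middle band of a frontal face).
y0, y1, x0, x1 = 60, 120, 40, 280
prelim = frame[y0:y1, x0:x1]

# Step 2: refine within the preliminary region. A median blur reduces
# noise-induced false detections; a stock Haar cascade stands in for
# the paper's unspecified final selection rule.
prelim = cv2.medianBlur(prelim, 3)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = cascade.detectMultiScale(prelim, scaleFactor=1.1, minNeighbors=5)
for (ex, ey, ew, eh) in eyes:
    # Report coordinates back in the full-frame reference system.
    print(f"eye region: x={x0 + ex}, y={y0 + ey}, w={ew}, h={eh}")
```

Restricting detection to the preliminary band is what cuts the noise-driven errors: false positives outside the plausible eye band are never considered.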

