Factors Affecting the Recognition Accuracy of Facial Expressions

2017 ◽  
Vol 1 (4) ◽  
Author(s):  
Sajid Ali Khan

2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and studies that did use profile views employed a between-subjects design or used children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and profile views using a within-subjects experimental design.

Method: The sample comprised 132 Italian university students (88 female; Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded.

Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than the same emotions presented in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was thus observed only for those emotions that rely mostly on the eye regions.
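The emotion-specific accuracy scores described in the Method (the average of correct responses per emotion and view) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the trial records are hypothetical:

```python
# Minimal sketch: per-(emotion, view) recognition accuracy from
# trial-level responses. Trial data below are made up for demonstration.
from collections import defaultdict

def accuracy_scores(trials):
    """Proportion of correct responses for each (emotion, view) cell."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for emotion, view, is_correct in trials:
        totals[(emotion, view)] += 1
        correct[(emotion, view)] += int(is_correct)
    return {key: correct[key] / totals[key] for key in totals}

trials = [
    ("fear", "frontal", True), ("fear", "frontal", False),
    ("fear", "profile", False), ("fear", "profile", False),
]
print(accuracy_scores(trials))
# {('fear', 'frontal'): 0.5, ('fear', 'profile'): 0.0}
```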


2013 ◽  
Vol 16 ◽  
Author(s):  
Esther Lázaro ◽  
Imanol Amayra ◽  
Juan Francisco López-Paz ◽  
Amaia Jometón ◽  
Natalia Martín ◽  
...  

Abstract: The assessment of facial expression is an important aspect of a clinical neurological examination, both as an indicator of a mood disorder and as a sign of neurological damage. To date, although studies have been conducted on certain psychosocial aspects of myasthenia, such as quality of life and anxiety, and on neuropsychological aspects such as memory, no studies have directly assessed facial emotion recognition accuracy. The aim of this study was to assess the facial emotion recognition accuracy (fear, surprise, sadness, happiness, anger, and disgust), empathy, and reaction time of patients with myasthenia. Thirty-five patients with myasthenia and 36 healthy controls were tested for their ability to differentiate emotional facial expressions. Participants were matched with respect to age, gender, and education level. Their ability to differentiate emotional facial expressions was evaluated using the computer-based program Feel Test. The data showed that myasthenic patients scored significantly lower (p < 0.05) than healthy controls on the total Feel score and on fear and surprise recognition, and had longer reaction times. The findings suggest that the ability to recognize facial affect may be reduced in individuals with myasthenia.


Author(s):  
B. H. Shekar ◽  
S. S. Bhat ◽  
A. Maysuradze

Abstract. Iris code matching is an important stage of iris biometric systems: it compares the input iris code with stored patterns of enrolled iris codes and classifies the code into one of the classes, so that the claim is accepted or rejected. Several classifier-based approaches have been proposed by researchers to improve recognition accuracy. In this paper, we discuss the factors affecting an iris classifier's performance and propose a reliability index for iris matching techniques to quantitatively measure the extent of system reliability, based on false acceptance and false rejection rates, using Monte Carlo simulation. Experiments are carried out on benchmark databases such as IITD, MMU v-2, CASIA v-4 Distance, and UBIRIS v.2.
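The abstract's exact reliability index is not reproduced here, but the general idea of estimating false acceptance and false rejection rates by Monte Carlo simulation and combining them into one reliability number can be sketched as below; the match-score distributions and the index formula are assumptions, not the paper's definitions:

```python
# Hedged sketch of Monte Carlo FAR/FRR estimation for a score-threshold
# matcher. Impostor and genuine score models are assumed (uniform),
# purely for illustration.
import random

def monte_carlo_far_frr(threshold, n=100_000, seed=42):
    rng = random.Random(seed)
    far_hits = frr_hits = 0
    for _ in range(n):
        # Assumed score models: impostor scores ~ U(0, 0.6),
        # genuine scores ~ U(0.4, 1.0).
        impostor = rng.uniform(0.0, 0.6)
        genuine = rng.uniform(0.4, 1.0)
        far_hits += impostor >= threshold   # impostor wrongly accepted
        frr_hits += genuine < threshold     # genuine wrongly rejected
    return far_hits / n, frr_hits / n

def reliability_index(far, frr):
    # One plausible definition: 1.0 for a perfect system,
    # 0.0 when both error rates reach 1.
    return 1.0 - (far + frr) / 2.0

far, frr = monte_carlo_far_frr(threshold=0.5)
print(round(reliability_index(far, frr), 3))  # ~0.83 for these models
```

Sweeping `threshold` traces the usual FAR/FRR trade-off curve, with the index peaking where the two error rates balance.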


2018 ◽  
Vol 33 (4) ◽  
pp. 19-27 ◽  
Author(s):  
Vanessa Fasolt ◽  
Iris J. Holzleitner ◽  
Anthony J. Lee ◽  
Kieran J. O'Shea ◽  
Lisa M. DeBruine


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260814
Author(s):  
Nazire Duran ◽  
Anthony P. Atkinson

Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but they are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
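The fixation-accuracy relationship reported above (longer mouth fixation correlating with anger and disgust accuracy) can be illustrated with a minimal Pearson correlation sketch; the per-participant values below are entirely hypothetical:

```python
# Pearson correlation between mouth-fixation duration and recognition
# accuracy, computed from scratch. Data are invented for demonstration.
import statistics

def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

mouth_fixation_s = [1.2, 0.8, 2.1, 1.5, 0.5]    # hypothetical, per participant
anger_accuracy   = [0.70, 0.55, 0.90, 0.75, 0.50]
print(round(pearson_r(mouth_fixation_s, anger_accuracy), 2))  # → 0.99
```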


2019 ◽  
Vol 8 (2S11) ◽  
pp. 1076-1079

Automated facial expression recognition can greatly improve the human–machine interface. Many deep learning approaches have been applied in recent years due to their outstanding recognition accuracy after training with large amounts of data. In this research, we enhanced a Convolutional Neural Network (CNN) to recognize six basic emotions and compared several preprocessing methods to show their influence on CNN performance. The preprocessing methods were: resizing, mean, normalization, standard deviation, scaling, and edge detection. Face detection as a single preprocessing phase achieved a significant result, with 100% accuracy, compared with the other preprocessing phases and raw data.
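Two of the preprocessing steps listed above, normalization (zero mean, unit standard deviation) and resizing, can be sketched with plain NumPy; this is not the paper's pipeline, and the nearest-neighbour resize and input sizes are assumptions:

```python
# Sketch of common CNN input preprocessing for face images.
import numpy as np

def normalize(img):
    """Zero-mean, unit-variance scaling of a grayscale image."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def resize_nearest(img, out_h, out_w):
    """Naive nearest-neighbour resize to a fixed CNN input size."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

face = np.random.default_rng(0).integers(0, 256, size=(96, 96))
x = normalize(resize_nearest(face, 48, 48))
print(x.shape, round(float(x.mean()), 6))  # (48, 48), mean ≈ 0
```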


2020 ◽  
Author(s):  
Connor Tom Keating ◽  
Sophie L Sowden ◽  
Dagmar S Fraser ◽  
Jennifer L Cook

Abstract: A burgeoning literature suggests that alexithymia, and not autism, is responsible for the difficulties with static emotion recognition that are documented in the autistic population. Here we investigate whether alexithymia can also account for difficulties with dynamic facial expressions. Autistic and control adults (N = 60), matched on age, gender, non-verbal reasoning ability, and alexithymia, completed an emotion recognition task which employed dynamic point-light displays of emotional facial expressions that varied in speed and spatial exaggeration. The ASD group exhibited significantly lower recognition accuracy for angry, but not happy or sad, expressions with normal speed and spatial exaggeration. The level of autistic, and not alexithymic, traits was a significant predictor of accuracy for angry expressions with normal speed and spatial exaggeration.


2019 ◽  
Author(s):  
Paddy Ross ◽  
Tessa R. Flack

Emotion perception research has largely been dominated by work on facial expressions, but emotion is also strongly conveyed from the body. Research exploring emotion recognition from the body tends to refer to ‘the body’ as a whole entity. However, the body is made up of different components (hands, arms, trunk etc.), all of which could be differentially contributing to emotion recognition. We know that the hands can help to convey actions, and in particular are important for social communication through gestures, but we currently do not know to what extent the hands influence emotion recognition from the body. Here, 93 adults viewed static emotional body stimuli with either the hands, arms, or both components removed and completed a forced-choice emotion recognition task. Removing the hands significantly reduced recognition accuracy for fear and anger, but made no significant difference to the recognition of happiness and sadness. Removing the arms had no effect on emotion recognition accuracy compared to the full-body stimuli. These results suggest the hands may play a key role in the recognition of emotions from the body.

