Facial Emotion Recognition
Recently Published Documents


TOTAL DOCUMENTS

857
(FIVE YEARS 344)

H-INDEX

48
(FIVE YEARS 6)

Assessment ◽  
2022 ◽  
pp. 107319112110680
Author(s):  
Trevor F. Williams ◽  
Niko Vehabovic ◽  
Leonard J. Simms

Facial emotion recognition (FER) tasks are often digitally altered to vary expression intensity; however, such tasks have unknown psychometric properties. In these studies, an FER task, the Graded Emotional Face Task (GEFT), was developed and validated, providing an opportunity to examine the psychometric properties of such tasks. Facial expressions were altered to produce five intensity levels for six emotions (e.g., 40% anger). In Study 1, 224 undergraduates viewed subsets of these faces and labeled the expressions. An item selection algorithm was used to maximize internal consistency and balance gender and ethnicity. In Study 2, 219 undergraduates completed the final GEFT and a multimethod battery of validity measures. Finally, in Study 3, 407 undergraduates, oversampled for borderline personality disorder (BPD), completed the GEFT and a self-report BPD measure. Broad FER scales (e.g., overall anger) demonstrated evidence of reliability and validity; however, more specific subscales (e.g., 40% anger) had more variable psychometric properties. Notably, ceiling/floor effects appeared both to decrease internal consistency and to limit external validity correlations. The findings are discussed from the perspective of measurement issues in the social cognition literature.
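The abstract does not specify the item selection algorithm. One common approach for maximizing internal consistency is a greedy search over Cronbach's alpha; the sketch below is a minimal illustration of that idea, not the authors' actual procedure (the function names and the greedy drop criterion are assumptions):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def greedy_item_selection(items: np.ndarray, n_keep: int) -> list[int]:
    """Repeatedly drop the item whose removal most increases alpha."""
    keep = list(range(items.shape[1]))
    while len(keep) > n_keep:
        # Alpha of each candidate subset with one item removed
        alphas = [cronbach_alpha(items[:, [j for j in keep if j != i]])
                  for i in keep]
        keep.pop(int(np.argmax(alphas)))
    return keep
```

A real selection for a task like the GEFT would add constraints (e.g., balancing stimulus gender and ethnicity within each retained subset) on top of the reliability criterion.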


Author(s):  
Kaouther MOUHEB ◽  
Ali YÜREKLİ ◽  
Nedzma DERVİSBEGOVİC ◽  
Ridwan Ali MOHAMMED ◽  
Burcu YILMAZEL

Author(s):  
Shih-Chieh Lee ◽  
Gong-Hong Lin ◽  
Ching-Lin Shih ◽  
Kuan-Wei Chen ◽  
Chen-Chung Liu ◽  
...  

Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 47
Author(s):  
Stefano Ziccardi ◽  
Francesco Crescenzo ◽  
Massimiliano Calabrese

Social cognition deficits have been described in people with multiple sclerosis (PwMS), even in the absence of global cognitive impairment, and predominantly affect the ability to adequately process emotions from human faces. The COVID-19 pandemic has forced people to wear face masks, which may interfere with facial emotion recognition. In the present study, we therefore investigated the ability of PwMS to recognize emotions from faces wearing masks. We enrolled a total of 42 cognitively normal relapsing–remitting PwMS and a matched group of 20 healthy controls (HCs). Participants underwent a facial emotion recognition task in which they had to recognize which of the six basic emotions (happiness, anger, fear, sadness, surprise, disgust) was presented on faces with or without surgical masks. Results showed that face masks negatively affected emotion recognition in all participants (p < 0.001); in particular, PwMS showed globally worse accuracy than HCs (p = 0.005), driven mainly by the "no mask" (p = 0.021) rather than the "masked" (p = 0.064) condition. Considering individual emotions, PwMS showed a selective impairment in the recognition of fear, compared with HCs, in both conditions ("masked": p = 0.023; "no mask": p = 0.016). Face masks also negatively affected response times (p < 0.001); in particular, PwMS were globally faster than HCs (p = 0.024), especially in the "masked" condition (p = 0.013). Furthermore, a detailed characterization of the performance of PwMS and HCs in terms of accuracy and response speed is provided. The results show the effect of face masks on the ability of PwMS to process facial emotions, compared with HCs. Healthcare professionals working with PwMS during the COVID-19 outbreak should take this effect into consideration in their clinical practice. Implications for the everyday life of PwMS are also discussed.
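Comparisons between the "masked" and "no mask" conditions are within-participant, so they are naturally analyzed as paired contrasts. A minimal numpy sketch of the paired t statistic is shown below; the simulated accuracies and effect sizes are purely illustrative stand-ins, not the study's data:

```python
import numpy as np

def paired_t_stat(cond_a: np.ndarray, cond_b: np.ndarray) -> tuple[float, int]:
    """t statistic and degrees of freedom for a within-subject comparison."""
    diff = np.asarray(cond_a, float) - np.asarray(cond_b, float)
    t = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
    return float(t), len(diff) - 1

# Illustrative data: per-participant accuracy without and with a mask
# (values are assumptions, chosen only to show the direction of the effect)
rng = np.random.default_rng(7)
no_mask = rng.normal(0.85, 0.05, 42)            # hypothetical unmasked accuracy
masked = no_mask - rng.normal(0.08, 0.03, 42)   # masks lower accuracy on average
t, df = paired_t_stat(no_mask, masked)
```

The resulting t and df would then be converted to a p value against a t distribution (e.g., with `scipy.stats`), which is omitted here to keep the sketch dependency-free.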


2021 ◽  
Vol 14 (1) ◽  
pp. 5
Author(s):  
Peter A. Gloor ◽  
Andrea Fronzetti Colladon ◽  
Erkin Altuntas ◽  
Cengiz Cetinkaya ◽  
Maximilian F. Kaiser ◽  
...  

Can we really "read the mind in the eyes"? Moreover, can AI assist us in this task? This paper answers these two questions by introducing a machine learning system that predicts personality characteristics of individuals from their face. It does so by tracking the emotional response of the individual's face through facial emotion recognition (FER) while they watch a series of 15 short videos of different genres. To calibrate the system, we invited 85 people to watch the videos while their emotional responses were analyzed through their facial expressions. At the same time, these individuals also took four well-validated surveys of personality characteristics and moral values: the revised NEO FFI personality inventory, the Haidt moral foundations test, the Schwartz personal value system, and the domain-specific risk-taking scale (DOSPERT). We found that the personality characteristics and moral values of an individual can be predicted from the emotional response to the videos shown in their face, with an accuracy of up to 86% using gradient-boosted trees. We also found that different personality characteristics are better predicted by different videos; in other words, no single video provides accurate predictions for all personality characteristics. Rather, it is the response to the mix of different videos that allows for accurate prediction.
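The abstract names gradient-boosted trees as the classifier. A minimal scikit-learn sketch of that setup is shown below on synthetic stand-in data; the feature layout (per-video mean emotion intensities), the synthetic binary trait, and all hyperparameters are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Hypothetical FER features: mean intensity of 6 basic emotions per video,
# over 15 videos, for 85 participants (matching the study's sample size)
n, n_videos, n_emotions = 85, 15, 6
X = rng.random((n, n_videos * n_emotions))
# Hypothetical binary trait driven by responses to two videos (illustrative)
y = (X[:, 0] + X[:, 10] > 1.0).astype(int)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
```

Cross-validation matters here: with only 85 participants and 90 features, a single train/test split would give a noisy and likely optimistic accuracy estimate.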


2021 ◽  
Author(s):  
Sarthak Malik ◽  
Puneet Kumar ◽  
Balasubramanian Raman

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260814
Author(s):  
Nazire Duran ◽  
Anthony P. Atkinson

Certain facial features provide useful information for the recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations that would ensure foveation of specific features. Foveating the mouth of fearful, surprised, and disgusted expressions improved emotion recognition compared to foveating an eye, a cheek, or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combinations of emotions used. There was no consistent evidence that reflexive first saccades targeted emotion-relevant features; instead, they targeted the feature closest to the initial fixation. In a third experiment, angry, fearful, surprised, and disgusted expressions were presented for 5 seconds. The duration of task-related fixations in the eyes, brow, nose, and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth correlated positively with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference for fixating the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features contributes to emotion recognition, but that such features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
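The correlation between mouth fixation and recognition accuracy reported above can be sketched as a per-participant region-of-interest (ROI) analysis. The numpy example below is illustrative only; the simulated fixation proportions, the linear accuracy link, and the sample size are assumptions, not the study's data:

```python
import numpy as np

def roi_proportion(durations, rois, target) -> float:
    """Share of total fixation time spent in one region of interest."""
    d = np.asarray(durations, dtype=float)
    mask = np.asarray(rois) == target
    return float(d[mask].sum() / d.sum())

# Illustrative per-participant data: proportion of fixation time on the
# mouth, and a hypothetical positive link from mouth-looking to accuracy
rng = np.random.default_rng(1)
n = 40
mouth = rng.uniform(0.1, 0.6, n)
accuracy = 0.5 + 0.5 * mouth + rng.normal(0.0, 0.05, n)
r = float(np.corrcoef(mouth, accuracy)[0, 1])  # Pearson correlation
```

In a real analysis, `roi_proportion` would be computed from each participant's fixation log (one duration and one ROI label per fixation) before correlating with their emotion-labeling accuracy.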

