Mature and immature facial features influence the recognition of angry and fearful facial expressions

2007 ◽  
Author(s):  
Donald Sacco ◽  
Kurt Hugenberg


2014 ◽  
Vol 10 (5) ◽  
pp. 20140275 ◽  
Author(s):  
Sharlene E. Santana ◽  
Seth D. Dobson ◽  
Rui Diogo

Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution.


2020 ◽  
Author(s):  
Navin Ipe

The recognition of emotions via facial expressions is a complex process of piecing together various aspects of each facial feature. Since viewing a single facial feature in isolation may result in an inaccurate recognition of emotion, this paper attempts to train neural networks to first identify specific facial features in isolation, and then to use the general pattern of expressions on the face to identify the overall emotion. The technique presented is very basic and could be improved with more advanced techniques that incorporate time and context.
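A minimal sketch of this two-stage idea in Python (PyTorch), assuming cropped grayscale patches for the eyes, mouth, and brows are available as input tensors; the module names, patch sizes, and label counts are illustrative assumptions rather than the author's implementation:

import torch
import torch.nn as nn

class FeatureBranch(nn.Module):
    # Small CNN that classifies one facial feature viewed in isolation.
    def __init__(self, n_states: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_states),
        )

    def forward(self, x):
        return self.net(x)

class TwoStageEmotionNet(nn.Module):
    # Stage 1: per-feature branches; stage 2: a combiner reads the overall
    # pattern of per-feature outputs to predict the emotion label.
    def __init__(self, n_feature_states: int = 4, n_emotions: int = 6):
        super().__init__()
        self.eyes = FeatureBranch(n_feature_states)
        self.mouth = FeatureBranch(n_feature_states)
        self.brows = FeatureBranch(n_feature_states)
        self.combiner = nn.Sequential(
            nn.Linear(3 * n_feature_states, 32), nn.ReLU(),
            nn.Linear(32, n_emotions),
        )

    def forward(self, eyes, mouth, brows):
        pattern = torch.cat(
            [self.eyes(eyes), self.mouth(mouth), self.brows(brows)], dim=1)
        return self.combiner(pattern)

# Example: a batch of 8 faces, each supplying three 32x32 grayscale patches.
model = TwoStageEmotionNet()
logits = model(torch.randn(8, 1, 32, 32),
               torch.randn(8, 1, 32, 32),
               torch.randn(8, 1, 32, 32))
print(logits.shape)  # torch.Size([8, 6])

In practice the feature branches would first be trained on isolated-feature labels and the combiner trained on whole-face emotion labels, mirroring the two-step procedure described in the abstract.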


2021 ◽  
Vol 12 ◽  
Author(s):  
Shu Zhang ◽  
Xinge Liu ◽  
Xuan Yang ◽  
Yezhi Shu ◽  
Niqi Liu ◽  
...  

Cartoon faces are widely used in social media, animation production, and social robots because of their appealing ability to convey emotional information. Despite their popular applications, the mechanisms of recognizing emotional expressions in cartoon faces are still unclear. Therefore, three experiments were conducted in this study to systematically explore the recognition process for emotional cartoon expressions (happy, sad, and neutral) and to examine the influence of key facial features (mouth, eyes, and eyebrows) on emotion recognition. Across the experiments, three presentation conditions were employed: (1) the full face; (2) an individual feature only (with the two other features concealed); and (3) one feature concealed with the two other features presented. The cartoon face images used in this study were converted from a set of real faces acted by Chinese posers, and the observers were Chinese. The results show that happy cartoon expressions were recognized more accurately than neutral and sad expressions, consistent with the happiness recognition advantage revealed in studies of real faces. Compared with real facial expressions, sad cartoon expressions were perceived as sadder, and happy cartoon expressions were perceived as less happy, regardless of whether the full face or single facial features were viewed. For cartoon faces, the mouth was demonstrated to be a feature that is sufficient and necessary for the recognition of happiness, and the eyebrows were sufficient and necessary for the recognition of sadness. This study helps to clarify the perceptual mechanism underlying emotion recognition in cartoon faces and sheds some light on directions for future research on intelligent human-computer interaction.


2020 ◽  
Author(s):  
Nazire Duran ◽  
ANTHONY P. ATKINSON

Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 3) and when briefly presented at the mouth (Experiment 2). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but that these features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260814
Author(s):  
Nazire Duran ◽  
Anthony P. Atkinson

Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but they are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.


2018 ◽  
Vol 7 (3.20) ◽  
pp. 284
Author(s):  
Hamimah Ujir ◽  
Irwandi Hipiny ◽  
D N.F. Awang Iskandar

Most works in quantifying facial deformation are based on the action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. Each AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs that depend on certain facial features. This work covers only the deformation of facial features in posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of AUs is conducted to determine the temporal segments of expressions, i.e. the duration of onset, apex, and offset. Our findings show that AU15 (for the sad expression) and AU12 (for the happy expression) exhibit consistent facial feature deformation across all properties during the expression period, whereas the intensities of the properties of AU1 and AU4 vary during the expression period.
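A minimal sketch of such a rule-based mapping in Python, assuming per-frame AU intensity scores on a 0-5 scale are already available (e.g. from manual FACS coding or an AU detector); the AU sets, threshold, and segmentation heuristic are illustrative assumptions, not the exact rules used in the paper:

from typing import Dict, List, Tuple

# Commonly cited AU combinations: AU6 + AU12 for Happy; AU1, AU4, AU15 for Sad.
HAPPY_AUS = {6: "cheek raiser", 12: "lip corner puller"}
SAD_AUS = {1: "inner brow raiser", 4: "brow lowerer", 15: "lip corner depressor"}

def classify_expression(au_intensity: Dict[int, float], threshold: float = 1.0) -> str:
    # Label a frame Happy, Sad, or Neutral from its per-AU intensity scores.
    happy_score = sum(au_intensity.get(au, 0.0) for au in HAPPY_AUS)
    sad_score = sum(au_intensity.get(au, 0.0) for au in SAD_AUS)
    if max(happy_score, sad_score) < threshold:
        return "Neutral"
    return "Happy" if happy_score >= sad_score else "Sad"

def temporal_segments(scores: List[float], onset_thr: float = 0.5) -> Tuple[List[int], List[int], List[int]]:
    # Split a per-frame AU intensity trace into onset, apex, and offset frames:
    # apex = frames near the peak; onset/offset = above-threshold frames before/after it.
    peak = max(scores)
    apex = [i for i, s in enumerate(scores) if s >= 0.9 * peak]
    onset = [i for i, s in enumerate(scores) if s >= onset_thr and i < apex[0]]
    offset = [i for i, s in enumerate(scores) if s >= onset_thr and i > apex[-1]]
    return onset, apex, offset

# Example: strong AU12 with mild AU6 and weak AU15 is labelled Happy.
print(classify_expression({6: 1.2, 12: 2.5, 15: 0.3}))  # Happy
print(temporal_segments([0.1, 0.6, 1.8, 2.0, 1.9, 0.7, 0.2]))  # ([1], [2, 3, 4], [5])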


SAGE Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 215824402092335
Author(s):  
Rong Shi

Previous research has focused on documenting the perceptual mechanisms of facial expressions of so-called basic emotions; however, little is known about eye movements during the recognition of crying expressions. The present study aimed to clarify the visual pattern and the role of face gender in recognizing smiling and crying expressions. Behavioral reactions and fixation durations were recorded, and the proportions of fixation counts and viewing time directed at facial features (eyes, nose, and mouth areas) were calculated. Results indicated that crying expressions were processed and recognized faster than smiling expressions. Across these expressions, the eyes and nose areas received more attention than the mouth area, but for smiling facial expressions, participants fixated longer on the mouth area. It seems that proportional gaze allocation at facial features was quantitatively modulated by different expressions, but overall gaze distribution was qualitatively similar across crying and smiling facial expressions. Moreover, eye movements showed that visual attention was modulated by the gender of faces: participants looked longer at female faces with smiling expressions relative to male faces. Findings are discussed in relation to the perceptual mechanisms underlying facial expression recognition and the interaction between gender and expression processing.


2003 ◽  
Vol 14 (4) ◽  
pp. 373-376 ◽  
Author(s):  
Abigail A. Marsh ◽  
Hillary Anger Elfenbein ◽  
Nalini Ambady

We report evidence for nonverbal “accents,” subtle differences in the appearance of facial expressions of emotion across cultures. Participants viewed photographs of Japanese nationals and Japanese Americans in which posers' muscle movements were standardized to eliminate differences in expressions, cultural or otherwise. Participants guessed the nationality of posers displaying emotional expressions at above-chance levels, and with greater accuracy than they judged the nationality of the same posers displaying neutral expressions. These findings indicate that facial expressions of emotion can contain nonverbal accents that identify the expresser's nationality or culture. Cultural differences are intensified during the act of expressing emotion, rather than residing only in facial features or other static elements of appearance. This evidence suggests that extreme positions regarding the universality of emotional expressions are incomplete.


2017 ◽  
Vol 17 (14) ◽  
pp. 7 ◽  
Author(s):  
Justin Duncan ◽  
Frédéric Gosselin ◽  
Charlène Cobarro ◽  
Gabrielle Dugas ◽  
Caroline Blais ◽  
...  

2012 ◽  
Vol 25 (1) ◽  
pp. 105-110 ◽  
Author(s):  
Yohko Maki ◽  
Hiroshi Yoshida ◽  
Tomoharu Yamaguchi ◽  
Haruyasu Yamaguchi

Background: Positivity recognition bias has been reported for facial expressions as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer's disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adapting a new method that eliminated the influences of these confounding factors. Methods: Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, participants were required to match the stimulus and answer images, avoiding the use of verbal labels. Results: In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than to the other five expressions. Conclusions: In AD patients, recognition of happiness was relatively preserved; happiness was the most sensitively recognized expression and was preserved against the influences of age and disease.

