Seeing Mixed Emotions: The Specificity of Emotion Perception From Static and Dynamic Facial Expressions Across Cultures

2017
Vol 49 (1)
pp. 130-148
Author(s):
Xia Fang
Disa A. Sauter
Gerben A. Van Kleef

Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese and Dutch observers viewing facial expressions: emotions which were morphologically similar to the intended emotion and emotions which were morphologically dissimilar to the intended emotion. Findings were consistent across two studies and showed that (a) morphologically similar emotions were endorsed to a greater extent than dissimilar emotions and (b) Chinese observers endorsed nonintended emotions more than did Dutch observers. Furthermore, the difference between Chinese and Dutch observers was more pronounced for the endorsement of morphologically similar emotions than of dissimilar emotions. We also obtained consistent evidence that Dutch observers endorsed nonintended emotions that were congruent with the preceding expressions to a greater degree. These findings suggest that culture and morphological similarity both influence the extent to which perceivers see several emotions in a facial expression.
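For readers who want to see the shape of such an analysis, the following minimal Python sketch compares mean endorsement of nonintended emotions by culture and morphological similarity. All data, column names, and values are invented placeholders, not the study's:

```python
# Hypothetical sketch: comparing endorsement of nonintended emotions
# by culture and morphological similarity. Data and column names are
# invented for illustration; they do not come from the study.
import pandas as pd

ratings = pd.DataFrame({
    "culture":     ["Chinese", "Chinese", "Dutch", "Dutch"] * 2,
    "similarity":  ["similar", "dissimilar"] * 4,
    "endorsement": [3.1, 1.8, 2.2, 1.5, 3.3, 1.7, 2.1, 1.6],  # 1-5 scale
})

# Mean endorsement per culture x similarity cell
cell_means = ratings.groupby(["culture", "similarity"])["endorsement"].mean()
print(cell_means)

# The culture difference, computed separately for each similarity level,
# should be larger for morphologically similar emotions per the findings.
print(cell_means.loc["Chinese"] - cell_means.loc["Dutch"])
```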

2009
Vol 2009
pp. 1-13
Author(s):
Ali Arya
Steve DiPaola
Avi Parush

This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research was done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native-language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
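To make the “facial expression units” idea concrete, here is a hypothetical Python sketch of mapping a point in a three-dimensional emotion space to regionalized facial actions. The axes (valence, arousal, dominance), region boundaries, and unit names are invented assumptions, not the authors' specification:

```python
# Minimal, hypothetical sketch of mapping a point in a 3D emotion space
# to a set of regionalized facial actions ("facial expression units").
# Region boundaries and unit names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ExpressionUnit:
    name: str
    region: tuple  # ((v_min, v_max), (a_min, a_max), (d_min, d_max))

UNITS = [
    ExpressionUnit("lip_corner_pull", ((0.2, 1.0), (-1.0, 1.0), (-1.0, 1.0))),
    ExpressionUnit("brow_lower",      ((-1.0, -0.2), (0.2, 1.0), (0.0, 1.0))),
    ExpressionUnit("lid_tighten",     ((-1.0, 0.0), (0.4, 1.0), (-1.0, 1.0))),
]

def units_for_emotion(v: float, a: float, d: float) -> list[str]:
    """Return the expression units whose region contains the point (v, a, d)."""
    point = (v, a, d)
    return [unit.name for unit in UNITS
            if all(lo <= x <= hi for x, (lo, hi) in zip(point, unit.region))]

# A mixed emotion sits at one location in the space and activates the
# units of all overlapping regions:
print(units_for_emotion(v=-0.5, a=0.7, d=0.3))  # ['brow_lower', 'lid_tighten']
```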


Traditio
2014
Vol 69
pp. 125-145
Author(s):  
Kirsten Wolf

The human face has the capacity to generate expressions associated with a wide range of affective states. Despite the fact that there are few words to describe human facial behaviors, the facial muscles allow for more than a thousand different facial appearances. Some examples of feelings that can be expressed are anger, concentration, contempt, excitement, nervousness, and surprise. Regardless of culture or language, the same expressions are associated with the same emotions and vary only in intensity. Using modern psychological analyses as a point of departure, this essay examines descriptions of human facial expressions as well as such bodily “symptoms” as flushing, turning pale, and weeping in Old Norse-Icelandic literature. The aim is to analyze the manner in which facial signs are used as a means of non-verbal communication to convey the impression of an individual's internal state to observers. More specifically, this essay seeks to determine when and why characters in these works are described as expressing particular facial emotions and, especially, the range of emotions expressed. The Sagas and þættir of Icelanders are in the forefront of the analysis and yield well over one hundred references to human facial expression and color. The examples show that through gaze, smiling, weeping, brows that are raised or knitted, and coloration, the Sagas and þættir of Icelanders tell of happiness or amusement, pleasant and unpleasant surprise, fear, anger, rage, sadness, interest, concern, and even mixed emotions for which language has no words. The Sagas and þættir of Icelanders may be reticent in talking about emotions and poor in emotional vocabulary, but this poverty is compensated for by making facial expressions signifiers of emotion. This essay makes clear that the works are less emotionally barren than often supposed. It also shows that our understanding of Old Norse-Icelandic “somatic semiotics” may well depend on the universality of facial expressions and that culture-specific “display rules” or “elicitors” are virtually nonexistent.


Author(s):  
Hongxu Wei
Ping Liu

The construction of sustainable urban forests follows the principle that people's well-being is promoted by exposure to tree populations. Facial expression is a direct representation of inner emotion that can be used to assess real-time perception in urban forests. The emergence and change of facial expressions in forest visitors is an implicit process, and the reserved character of East Asian visitors increases the need for accurate, instrument-based expression recognition. In this study, a dataset was established with 2,886 randomly photographed faces of visitors to a constructed urban forest park and a promenade in summertime in Shenyang City, Northeast China. Six experts were invited to choose 160 photos in total, with 20 images representing each of eight typical expressions: angry, contempt, disgusted, happy, neutral, sad, scared, and surprised. The FireFACE ver. 3.0 software was used for hit-ratio validation, that is, the accuracy (ac.) with which machine-recognized photos matched those identified by the experts. According to a Kruskal-Wallis test against averaged scores from 20 recently published papers, the contempt (ac. = 0.40%, P = 0.0038) and scared (ac. = 25.23%, P = 0.0018) expressions did not pass the validation test. Therefore, FireFACE can be used as an instrument to analyze facial expressions of East Asian people in urban forests, but contempt and scared expressions cannot be reliably identified.
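The validation logic can be illustrated with a small Python sketch. The labels and scores below are invented placeholders, and the snippet stands in for, rather than reproduces, FireFACE's internals:

```python
# Hypothetical sketch of the validation logic: hit-ratio accuracy of
# machine-recognized labels against expert labels, plus a Kruskal-Wallis
# test per expression. The arrays below are invented placeholders.
import numpy as np
from scipy.stats import kruskal

expert_labels  = np.array(["happy", "sad", "scared", "happy", "contempt"])
machine_labels = np.array(["happy", "sad", "neutral", "happy", "neutral"])

hit_ratio = np.mean(expert_labels == machine_labels)  # accuracy (ac.)
print(f"ac. = {hit_ratio:.2%}")

# Kruskal-Wallis: do this study's per-photo scores for one expression
# differ from scores reported in prior papers? (Both samples invented.)
scores_this_study = [0.2, 0.5, 0.3, 0.1, 0.4]
scores_published  = [2.1, 1.8, 2.5, 2.0, 1.9]
stat, p = kruskal(scores_this_study, scores_published)
print(f"H = {stat:.2f}, P = {p:.4f}")
```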


2017
Vol 2 (2)
pp. 130-134
Author(s):
Jarot Dwi Prasetyo
Zaehol Fatah
Taufik Saleh

In recent years, interest has grown in the interaction between humans and computers. Facial expressions play a fundamental role in social interaction with other humans: in human-to-human communication, only 7% of a message is conveyed by the linguistic content, 38% by paralanguage, and 55% by facial expressions. Therefore, to make human-machine interfaces in multimedia products friendlier, facial expression recognition in the interface is very helpful for comfortable interaction. One of the steps that affects facial expression recognition is the accuracy of facial feature extraction. Several approaches to facial expression recognition do not consider the dimensionality of the data used as input features for machine learning. This research therefore proposes a wavelet algorithm to reduce the dimensionality of the feature data. The features are then classified using a multiclass SVM to distinguish the six facial expressions (anger, disgust, fear, happiness, sadness, and surprise) found in the JAFFE database. The resulting classification achieved 81.42% accuracy on 208 data samples.
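A minimal Python sketch of the described pipeline, with random arrays standing in for preprocessed JAFFE images and assumed parameter choices (Haar wavelet, two decomposition levels, RBF kernel), might look like this:

```python
# Minimal sketch of the described pipeline: wavelet decomposition to
# reduce feature dimensionality, then a multiclass SVM. Images are
# random placeholders standing in for preprocessed JAFFE face crops;
# the wavelet family, level, and SVM settings are assumptions.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.random((208, 64, 64))        # stand-in for 208 face images
labels = rng.integers(0, 6, size=208)     # six expression classes

def wavelet_features(img: np.ndarray, level: int = 2) -> np.ndarray:
    """Keep only the low-frequency approximation coefficients."""
    coeffs = pywt.wavedec2(img, wavelet="haar", level=level)
    return coeffs[0].ravel()              # 16x16 = 256 dims vs. 4096

X = np.stack([wavelet_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2%}")
```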


2019
Vol 12 (1)
pp. 27-39
Author(s):
D.V. Lucin
Y.A. Kozhukhova
E.A. Suchkova

Emotion congruence in emotion perception manifests as increased sensitivity to the emotions corresponding to the perceiver’s emotional state. In this study, an experimental procedure that robustly generates emotion congruence during the perception of ambiguous facial expressions was developed. It was hypothesized that emotion congruence would be stronger in the early stages of perception. In two experiments, happiness and sadness were elicited in 69 (mean age 20.2, 57 females) and 58 (mean age 18.2, 50 females) participants. Participants then judged which emotions were present in ambiguous faces. The duration of stimulus presentation varied for the analysis of earlier and later stages of perception. The effect of emotion congruence was obtained in both experiments: happy participants perceived more happiness and less sadness in ambiguous facial expressions compared to sad participants. Stimulus duration did not influence emotion congruence. Further studies should focus on the juxtaposition of the models connecting the emotion congruence mechanisms either with perception or with response generation.
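As an illustration of how the congruence effect could be tabulated, here is a hypothetical Python sketch; the data and column names are invented:

```python
# Hypothetical analysis sketch for the congruence effect: compare how
# often happy vs. sad participants report happiness in ambiguous faces,
# split by stimulus duration. All numbers and column names are invented.
import pandas as pd

trials = pd.DataFrame({
    "mood":      ["happy"] * 4 + ["sad"] * 4,
    "duration":  ["short", "short", "long", "long"] * 2,
    "saw_happy": [1, 1, 1, 0, 0, 1, 0, 0],  # 1 = happiness reported
})

# Congruence effect: happy-mood participants should report more
# happiness; per the study, duration should not change this.
effect = trials.groupby(["duration", "mood"])["saw_happy"].mean().unstack()
print(effect)
print(effect["happy"] - effect["sad"])  # congruence effect per duration
```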


2016
Vol 75 (4)
pp. 175-181
Author(s):
Alisdair J. G. Taylor
Louise Bryant

Abstract. Emotion perception studies typically explore how judgments of facial expressions are influenced by invariant characteristics such as sex or by variant characteristics such as gaze. However, few studies have considered the importance of factors that are not easily categorized as invariant or variant. We investigated one such factor, attractiveness, and the role it plays in judgments of emotional expression. We asked 26 participants to categorize different facial expressions (happy, neutral, and angry) that varied with respect to facial attractiveness (attractive, unattractive). Participants were significantly faster when judging expressions on attractive as compared to unattractive faces, but there was no interaction between facial attractiveness and facial expression, suggesting that the attractiveness of a face does not play an important role in the judgment of happy or angry facial expressions.
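The 2 (attractiveness) x 3 (expression) design can be illustrated with a small sketch; the reaction times below are invented, chosen only to mirror the reported pattern of a main effect without an interaction:

```python
# Illustrative sketch (invented reaction times) of the 2 x 3 design:
# attractiveness (attractive, unattractive) by expression (happy,
# neutral, angry), looking at mean RTs and a simple interaction check.
import pandas as pd

rts = pd.DataFrame({
    "attractiveness": ["attractive"] * 3 + ["unattractive"] * 3,
    "expression":     ["happy", "neutral", "angry"] * 2,
    "mean_rt_ms":     [610, 650, 640, 670, 705, 700],
})

cells = rts.pivot(index="expression", columns="attractiveness",
                  values="mean_rt_ms")
print(cells)

# Main effect of attractiveness: attractive faces judged faster...
print((cells["unattractive"] - cells["attractive"]).mean())
# ...an interaction would show as unequal differences across rows;
# the study found roughly equal differences (no interaction).
print(cells["unattractive"] - cells["attractive"])
```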


2021
Vol 12
Author(s):
Fan Mo
Jingjin Gu
Ke Zhao
Xiaolan Fu

Facial expression recognition plays a crucial role in understanding the emotions of people, as well as in social interaction. Patients with major depressive disorder (MDD) have repeatedly been reported to be impaired in recognizing facial expressions. This study aimed to investigate confusion effects between pairs of facial expressions presenting different emotions and to compare the confusion effect for each emotion pair between patients with MDD and healthy controls. Participants were asked to judge the emotion category of each facial expression in a two-alternative forced-choice paradigm. Six basic emotions (i.e., happiness, fear, sadness, anger, surprise, and disgust) were examined in pairs, resulting in 15 emotion combinations. Results showed that patients with MDD were impaired in the recognition of all basic facial expressions except for the happy expression. Moreover, patients with MDD were more inclined than healthy controls to confuse a negative emotion (i.e., anger or disgust) with another emotion. These findings highlight that patients with MDD show a deficit of sensitivity in distinguishing between specific pairs of facial expressions.
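A sketch of how pairwise confusion rates might be derived from such a two-alternative forced-choice task follows; the trial data are invented:

```python
# Sketch of deriving pairwise confusion rates from a two-alternative
# forced-choice task over the 15 pairs of six basic emotions. The trial
# data here are invented placeholders.
from itertools import combinations
import pandas as pd

EMOTIONS = ["happiness", "fear", "sadness", "anger", "surprise", "disgust"]

# One row per trial: which emotion was shown, which was the foil,
# and what the participant chose.
trials = pd.DataFrame({
    "shown":  ["anger", "anger", "disgust", "fear", "anger", "disgust"],
    "foil":   ["disgust", "disgust", "anger", "surprise", "disgust", "anger"],
    "choice": ["disgust", "anger", "anger", "fear", "disgust", "disgust"],
})

trials["confused"] = trials["choice"] != trials["shown"]

# Confusion rate per unordered emotion pair
trials["pair"] = trials.apply(
    lambda r: tuple(sorted([r["shown"], r["foil"]])), axis=1)
print(trials.groupby("pair")["confused"].mean())

# All 15 combinations the study examined:
print(len(list(combinations(EMOTIONS, 2))))  # 15
```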


2021
Author(s):
Jianxin Wang
Craig Poskanzer
Stefano Anzellotti

Facial expressions are critical in our daily interactions. Studying how humans recognize dynamic facial expressions is an important area of research in social perception, but advancements are hampered by the difficulty of creating well-controlled stimuli. Research on the perception of static faces has made significant progress thanks to techniques that make it possible to generate synthetic face stimuli. However, synthetic dynamic expressions are more difficult to generate; methods that yield realistic dynamics typically rely on the use of infrared markers applied on the face, making it expensive to create datasets that include large numbers of different expressions. In addition, the use of markers might interfere with facial dynamics. In this paper, we contribute a new method to generate large amounts of realistic and well-controlled facial expression videos. We use a deep convolutional neural network with attention and asymmetric loss to extract the dynamics of action units from videos, and demonstrate that this approach outperforms a baseline model based on convolutional neural networks without attention on the same stimuli. Next, we develop a pipeline to use the action unit dynamics to render realistic synthetic videos. This pipeline makes it possible to generate large scale naturalistic and controllable facial expression datasets to facilitate future research in social cognitive science.
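The described architecture can be sketched schematically in PyTorch; this is a reconstruction under stated assumptions (layer sizes, number of action units, and loss hyperparameters are invented), not the authors' code:

```python
# Minimal PyTorch sketch of the described approach: a convolutional
# backbone, a simple spatial-attention pooling layer, and an asymmetric
# focal-style loss that down-weights easy negatives (AU labels are
# typically sparse). Layer sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class AUAttentionNet(nn.Module):
    def __init__(self, n_aus: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Conv2d(64, 1, 1)   # per-location attention logits
        self.head = nn.Linear(64, n_aus)  # AU presence/intensity logits

    def forward(self, x):
        feats = self.backbone(x)                                # (B, 64, H, W)
        w = torch.softmax(self.attn(feats).flatten(2), dim=-1)  # (B, 1, HW)
        pooled = (feats.flatten(2) * w).sum(-1)                 # (B, 64)
        return self.head(pooled)

def asymmetric_loss(logits, targets, gamma_neg=4.0, gamma_pos=0.0):
    """Asymmetric focal-style loss: negatives are down-weighted harder."""
    p = torch.sigmoid(logits)
    loss_pos = targets * (1 - p) ** gamma_pos * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p ** gamma_neg * torch.log((1 - p).clamp(min=1e-8))
    return -(loss_pos + loss_neg).mean()

model = AUAttentionNet()
frames = torch.randn(8, 3, 112, 112)      # a batch of video frames
au_labels = torch.randint(0, 2, (8, 17)).float()
loss = asymmetric_loss(model(frames), au_labels)
loss.backward()
```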


2016
Vol 37 (1)
pp. 16-23
Author(s):
Chit Yuen Yi
Matthew W. E. Murry
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.
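One plausible form of the analysis implied here is a regression with a mood-by-temperament interaction; the following Python sketch uses invented data and column names:

```python
# Hypothetical sketch of the kind of model implied: predicting perception
# accuracy from state mood, trait negative emotionality, and their
# interaction. Data and column names are invented placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "accuracy":         [0.70, 0.80, 0.60, 0.90, 0.75, 0.85, 0.65, 0.80],
    "negative_mood":    [0.2, 0.6, 0.1, 0.8, 0.4, 0.7, 0.3, 0.5],
    "neg_emotionality": [0.3, 0.5, 0.2, 0.7, 0.4, 0.6, 0.3, 0.5],
})

# The interaction term captures whether the mood effect depends on
# trait-level emotionality.
model = smf.ols("accuracy ~ negative_mood * neg_emotionality", data=df).fit()
print(model.summary())
```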


2020
Author(s):
Jonathan Yi
Philip Pärnamets
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid aversive stimulation by either reciprocating (congruent condition) or responding opposite (incongruent condition) to the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
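The reinforcement-learning account can be illustrated with a minimal simulation; the learning rule (a Rescorla-Wagner update with softmax choice) and all hyperparameters are assumptions for illustration, not the authors' fitted model:

```python
# Schematic sketch of the reinforcement-learning account: on each trial
# the participant chooses to reciprocate or oppose the shown expression,
# and action values are updated from whether the aversive outcome was
# avoided. Learning rate and temperature are invented hyperparameters.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.3, 3.0            # learning rate, softmax inverse temperature
q = {"reciprocate": 0.0, "oppose": 0.0}
correct_action = "reciprocate"    # e.g. the congruent rule is in force

for trial in range(100):
    actions = list(q)
    logits = beta * np.array([q[a] for a in actions])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = rng.choice(actions, p=probs)

    reward = 1.0 if action == correct_action else -1.0  # shock avoided or not
    q[action] += alpha * (reward - q[action])           # Rescorla-Wagner update

print(q)  # the value of the correct response should dominate
```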

