Dynamic facial expressions of emotion decouple emotion category and intensity information over time

2020
Author(s): Chaona Chen, Daniel Messinger, Yaocong Duan, Robin A. A. Ince, Oliver G. B. Garrod, ...

Facial expressions support effective social communication by dynamically transmitting complex, multi-layered messages, such as emotion categories and their intensity. How facial expressions achieve this signalling task remains unknown. Here, we address this question by identifying the specific facial movements that convey two key components of emotion communication – emotion classification (such as ‘happy’ or ‘sad’) and intensification (such as ‘very strong’) – in the six classic emotions (happy, surprise, fear, disgust, anger and sad). Using a data-driven reverse-correlation approach and an information-theoretic analysis framework, we identified, in 60 Western receivers, three communicative functions of face movements: those used to classify the emotion (classifiers), those used to perceive emotional intensity (intensifiers), and those serving the dual role of classifier and intensifier. We then validated the communicative functions of these face movements in a broader set of 18 complex facial expressions of emotion (including excited, shame, anxious and hate). We find that emotion classifier and intensifier face movements are temporally distinct, with intensifiers peaking earlier or later than classifiers. Together, these results reveal the complexities of facial expressions as a signalling system, in which individual face movements serve specific communicative functions with a clear temporal structure.
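The information-theoretic step described in this abstract can be illustrated with a minimal sketch. This is not the authors' code; the trial data, variable names, and the binary movement coding are all hypothetical. The idea is that a face movement qualifies as a candidate "classifier" when it carries high mutual information with receivers' emotion-category responses.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits between two paired sequences."""
    n = len(xs)
    px = Counter(xs)          # marginal counts of X
    py = Counter(ys)          # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical trials: 1 = a given face movement was shown, 0 = absent
movement = [1, 1, 1, 1, 0, 0, 0, 0]
# Receivers' emotion-category responses on the same trials
response = ['happy', 'happy', 'happy', 'happy', 'sad', 'sad', 'sad', 'sad']
print(round(mutual_information(movement, response), 3))  # perfectly diagnostic -> 1.0 bit
```

A movement whose presence is unrelated to the response yields 0 bits; ranking movements by this quantity (and by its relation to intensity ratings) is the spirit of the classifier/intensifier analysis.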

2018
Vol 115 (43), pp. E10013-E10021
Author(s): Chaona Chen, Carlos Crivelli, Oliver G. B. Garrod, Philippe G. Schyns, José-Miguel Fernández-Dols, ...

Real-world studies show that the facial expressions produced during pain and orgasm—two different and intense affective experiences—are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.


Author(s): Xia Fang, Disa Sauter, Marc Heerdink, Gerben van Kleef

There is a growing consensus that culture influences the perception of facial expressions of emotion. However, little is known about whether and how culture shapes the production of emotional facial expressions, and even less so about whether culture differentially shapes the production of posed versus spontaneous expressions. Drawing on prior work on cultural differences in emotional communication, we tested the prediction that people from the Netherlands (a historically heterogeneous culture where people are prone to low-context communication) produce facial expressions that are more distinct across emotions compared to people from China (a historically homogeneous culture where people are prone to high-context communication). Furthermore, we examined whether the degree of distinctiveness varies across posed and spontaneous expressions. Dutch and Chinese participants were instructed to either pose facial expressions of anger and disgust, or to share autobiographical events that elicited spontaneous expressions of anger or disgust. Using the complementary approaches of supervised machine learning and information-theoretic analysis of facial muscle movements, we show that posed and spontaneous facial expressions of anger and disgust were more distinct when produced by Dutch compared to Chinese participants. These findings shed new light on the role of culture in emotional communication by demonstrating, for the first time, effects on the distinctiveness of production of facial expressions.
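The "distinctiveness" measure in this abstract can be sketched with a toy version of the supervised-classification logic: if a simple classifier separates anger from disgust more accurately for one group's expressions, those expressions are more distinct. This is not the authors' pipeline; the action-unit feature vectors, the nearest-centroid classifier, and all data below are illustrative assumptions.

```python
def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_accuracy(samples):
    """Leave-one-out accuracy of a nearest-centroid classifier.

    samples: list of (label, feature_vector) pairs.
    """
    correct = 0
    for i, (label, vec) in enumerate(samples):
        rest = [s for j, s in enumerate(samples) if j != i]
        cents = {lab: centroid([v for l, v in rest if l == lab])
                 for lab in {l for l, _ in rest}}
        pred = min(cents, key=lambda lab: squared_distance(vec, cents[lab]))
        correct += pred == label
    return correct / len(samples)

# Hypothetical facial-muscle (action-unit) intensity vectors, e.g. [AU4, AU9, AU10]
distinct_group = [('anger', [0.9, 0.1, 0.1]), ('anger', [0.8, 0.2, 0.1]),
                  ('disgust', [0.1, 0.9, 0.8]), ('disgust', [0.2, 0.8, 0.9])]
overlapping_group = [('anger', [0.5, 0.5, 0.4]), ('anger', [0.4, 0.3, 0.6]),
                     ('disgust', [0.5, 0.4, 0.5]), ('disgust', [0.4, 0.6, 0.4])]
print(nearest_centroid_accuracy(distinct_group))     # high accuracy: distinct expressions
print(nearest_centroid_accuracy(overlapping_group))  # low accuracy: overlapping expressions
```

Higher held-out classification accuracy for one group's expressions is then read as greater cross-emotion distinctiveness in that group's production.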


Perception
2021
pp. 030100662110270
Author(s): Kennon M. Sheldon, Ryan Goffredi, Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower on “a pleasant social smile” and much higher on “a merely neutral expression,” compared with unmasked SSs. Essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than the unmasked versions.


2021
Vol 5 (3), pp. 13
Author(s): Heting Wang, Vidya Gaddy, James Ross Beveridge, Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this work introduces an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to users' varying affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert scale questionnaire and the NASA TLX. Results from the questionnaires were not statistically different across modes. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural; four mentioned uncomfortable feelings caused by the Uncanny Valley effect.


2013
Vol 27 (8), pp. 1486-1494
Author(s): Guillermo Recio, Annekathrin Schacht, Werner Sommer

2004
Vol 20 (1), pp. 81-91
Author(s): Wataru Sato, Takanori Kochiyama, Sakiko Yoshikawa, Eiichi Naito, Michikazu Matsumura

2021
Vol 14 (4), pp. 4-22
Author(s): O.A. Korolkova, E.A. Lobodinskaya

In an experimental study, we explored how the natural or artificial character of an expression, and the speed of its exposure, affect the recognition of emotional facial expressions during stroboscopic presentation. In Series 1, participants identified emotions represented as sequences of frames from a video of a natural facial expression; in Series 2, participants were shown sequences of linear morph images. The exposure speed was varied. The results showed that at any exposure speed, the expressions of happiness and disgust were recognized most accurately. Longer presentation increased the accuracy of assessments of happiness, disgust, and surprise. The expression of surprise, demonstrated as a linear transformation, was recognized more efficiently than frames of the natural expression of surprise. Happiness was perceived more accurately in video frames. The accuracy of disgust recognition did not depend on the type of images. Neither the qualitative nature of the stimuli nor the speed of their presentation affected the accuracy of sadness recognition. The categorical structure of the perception of expressions was stable across both types of images. The obtained results suggest a qualitative difference in the perception of natural and artificial images of expressions, which can be observed under extreme exposure conditions.


2020
Author(s): Maddy Dyer, Angela Suzanne Attwood, Ian Penton-Voak, Marcus Robert Munafo

This paper has not yet been peer reviewed.

