The Glow Still Shows: Effects of Facial Masking on Perceptions of Duchenne Versus Social Smiles

Perception, 2021, pp. 030100662110270.
Author(s): Kennon M. Sheldon, Ryan Goffredi, Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), in masked and unmasked target faces. As hypothesized, masked SSs were rated much lower as “a pleasant social smile” and much higher as “a merely neutral expression” compared with unmasked SSs; essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Both masked DSs and masked SSs were rated as displaying more disgust than their unmasked versions.

2013, Vol 113(1), pp. 199-216.
Author(s): Marcella L. Woud, Eni S. Becker, Wolf-Gero Lange, Mike Rinck

A growing body of evidence shows that the prolonged execution of approach movements towards stimuli and avoidance movements away from them affects their evaluation. However, there has been no systematic investigation of such training effects. The present study therefore compared approach-avoidance training effects on various valenced representations of neutral (Experiment 1, N = 85), angry (Experiment 2, N = 87), and smiling (Experiment 3, N = 89) facial expressions. The face stimuli were shown on a computer screen, and by means of a joystick participants pulled half of the faces closer (a positive approach movement) and pushed the other half away (a negative avoidance movement). Only implicit evaluations of the neutral expressions were affected by the training procedure. The boundary conditions of such approach-avoidance training effects are discussed.
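
As a concrete illustration of the pull/push contingency described above, the sketch below simulates the common "zoom" implementation of approach-avoidance training, in which pulling the joystick enlarges the face (approach, looming) and pushing shrinks it (avoidance, receding). The scaling function and its parameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an approach-avoidance "zoom" contingency
# (hypothetical parameters; not the authors' implementation).

def scale_for_deflection(y: float, min_scale: float = 0.2,
                         max_scale: float = 3.0) -> float:
    """Map joystick deflection y in [-1, 1] (pull = +1, push = -1) to a
    display scale factor: pulling makes the face loom, pushing makes it
    recede."""
    scale = 1.0 + 2.0 * y if y >= 0 else 1.0 + 0.8 * y
    return max(min_scale, min(max_scale, scale))

# A full pull triples the image; a full push shrinks it to 20%.
for y in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"deflection {y:+.1f} -> scale {scale_for_deflection(y):.2f}")
```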


2020.
Author(s): Chaona Chen, Daniel Messinger, Yaocong Duan, Robin A. A. Ince, Oliver G. B. Garrod, et al.

Facial expressions support effective social communication by dynamically transmitting complex, multi-layered messages, such as emotion categories and their intensity. How facial expressions achieve this signalling task remains unknown. Here, we address this question by identifying the specific facial movements that convey two key components of emotion communication – emotion classification (such as ‘happy’ or ‘sad’) and intensification (such as ‘very strong’) – in the six classic emotions (happy, surprise, fear, disgust, anger, and sad). Using a data-driven, reverse-correlation approach and an information-theoretic analysis framework, we identified in 60 Western receivers three communicative functions of face movements: those used to classify the emotion (classifiers), those used to perceive emotional intensity (intensifiers), and those serving the dual role of classifier and intensifier. We then validated the communicative functions of these face movements in a broader set of 18 complex facial expressions of emotion (including excited, shame, anxious, and hate). We found that classifier and intensifier face movements are temporally distinct, with intensifiers peaking earlier or later than classifiers. Together, these results reveal the complexities of facial expressions as a signalling system, in which individual face movements serve specific communicative functions with a clear temporal structure.
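
The information-theoretic logic behind "classifier" and "intensifier" movements can be sketched with mutual information: a movement acts as a classifier to the extent that its presence carries information about the receiver's category response, and as an intensifier to the extent that it carries information about rated intensity. The following minimal sketch (toy data; not the authors' pipeline) computes discrete mutual information in bits for both cases.

```python
import numpy as np

def mutual_info(x, y):
    """Discrete mutual information I(X;Y) in bits from paired samples."""
    x, y = np.asarray(x), np.asarray(y)
    xs, ys = np.unique(x), np.unique(y)
    joint = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Toy data: au_on = whether a given face movement was present on each trial;
# emotion = the receiver's category response; intensity = rated strength.
rng = np.random.default_rng(0)
au_on = rng.integers(0, 2, 1000)
emotion = np.where(au_on == 1, rng.choice([0, 1], 1000, p=[.8, .2]),
                   rng.choice([0, 1], 1000, p=[.3, .7]))
intensity = rng.integers(1, 6, 1000)  # unrelated to au_on -> MI near 0

print("I(AU; emotion):  ", round(mutual_info(au_on, emotion), 3))    # > 0: classifier
print("I(AU; intensity):", round(mutual_info(au_on, intensity), 3))  # ~ 0: not an intensifier
```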


Perception, 2003, Vol 32(7), pp. 813-826. DOI: 10.1068/p3319.
Author(s): Frank E. Pollick, Harold Hill, Andrew Calder, Helena Paterson

We examined how the recognition of facial emotion was influenced by manipulation of both the spatial and the temporal properties of 3-D point-light displays of facial motion. We first measured the 3-D positions of multiple locations on the face during posed expressions of anger, happiness, sadness, and surprise, and then manipulated the spatial and temporal properties of these measurements to obtain new versions of the movements. In two experiments, we examined recognition of the original and modified facial expressions: in experiment 1 we manipulated the spatial properties of the facial movement, and in experiment 2 we manipulated the temporal properties. The results of experiment 1 showed that exaggerating facial expressions relative to a fixed neutral expression enhanced ratings of the intensity of the corresponding emotion. The results of experiment 2 showed that changing the duration of an expression had a small effect on ratings of emotional intensity, with a trend for expressions of shorter duration to receive lower intensity ratings. The results are discussed within the context of theories of encoding as related to caricature and emotion.
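
The two manipulations map onto simple operations on the motion data: spatial exaggeration scales each frame's displacement from a fixed neutral pose, and the temporal manipulation resamples the sequence to a new duration. The sketch below shows both under simplifying assumptions (toy data; the gain values and uniform resampling are illustrative, not the authors' exact procedure).

```python
import numpy as np

def exaggerate(motion, neutral, gain):
    """Spatially exaggerate point-light motion relative to a fixed neutral
    pose.  motion: (frames, points, 3); neutral: (points, 3); gain > 1
    caricatures the expression, gain < 1 attenuates it."""
    return neutral + gain * (motion - neutral)

def change_duration(motion, factor):
    """Uniformly time-stretch the sequence by resampling each coordinate."""
    n_frames, n_points, _ = motion.shape
    old_t = np.linspace(0.0, 1.0, n_frames)
    new_t = np.linspace(0.0, 1.0, int(round(n_frames * factor)))
    flat = motion.reshape(n_frames, -1)                  # (frames, points*3)
    resampled = np.stack([np.interp(new_t, old_t, flat[:, k])
                          for k in range(flat.shape[1])], axis=1)
    return resampled.reshape(len(new_t), n_points, 3)

neutral = np.zeros((30, 3))                        # toy 30-marker neutral pose
motion = np.random.randn(120, 30, 3) * 0.01        # toy expression trajectory
caricature = exaggerate(motion, neutral, gain=1.5)  # 150% expression
shortened = change_duration(motion, factor=0.5)     # half the duration
print(caricature.shape, shortened.shape)            # (120, 30, 3) (60, 30, 3)
```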


1998, Vol 9(4), pp. 270-276.
Author(s): Kari Edwards

Results of studies reported here indicate that humans are attuned to temporal cues in facial expressions of emotion. The experimental task required subjects to reproduce the actual progression of a target person's spontaneous expression (i.e., onset to offset) from a scrambled set of photographs. Each photograph depicted a segment of the expression that corresponded to approximately 67 ms in real time. Results of two experiments indicated that (a) individuals could detect extremely subtle dynamic cues in a facial expression and could utilize these cues to reproduce the proper temporal progression of the display at above-chance levels of accuracy; (b) women performed significantly better than men on the task designed to assess this ability; (c) individuals were most sensitive to the temporal characteristics of the early stages of an expression; and (d) accuracy was inversely related to the amount of time allotted for the task. The latter finding may reflect the relative involvement of (error-prone) cognitively mediated or strategic processes in what is normally a relatively automatic, nonconscious process.
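
A brief worked note on what "above-chance" means here, under two common scoring rules (the study's exact scoring may differ): if accuracy is the proportion of photo pairs placed in the correct relative order, a random arrangement is expected to score 50%; if it is the probability of reproducing the whole sequence exactly, chance is 1/n! for n photographs. The snippet below verifies both for n = 5.

```python
import math
from itertools import permutations

def pairwise_accuracy(order):
    """Proportion of photo pairs placed in the correct relative order."""
    n = len(order)
    correct = sum(order[i] < order[j] for i in range(n) for j in range(i + 1, n))
    return correct / (n * (n - 1) / 2)

n = 5
mean_chance = (sum(pairwise_accuracy(p) for p in permutations(range(n)))
               / math.factorial(n))
print(mean_chance)            # 0.5   -> chance level for pairwise scoring
print(1 / math.factorial(n))  # ~0.008 -> chance for an exactly correct sequence
```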


2004, Vol 15(1-2), pp. 23-34.
Author(s): Manas K. Mandal, Nalini Ambady

Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is less inhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals.


Perception, 2001, Vol 30(7), pp. 875-887. DOI: 10.1068/p3131.
Author(s): Miyuki Kamachi, Vicki Bruce, Shigeru Mukaida, Jiro Gyoba, Sakiko Yoshikawa, et al.

Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences that changed the face from neutral to a peak expression across different numbers of intermediate stages, creating fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was identified more accurately from slow sequences; happiness, and to some extent surprise, was identified better from faster sequences; and anger was detected most accurately from sequences of medium pace. In experiment 2, we used an intensity-rating task with static as well as dynamic images to examine whether these effects were due to the total display time or to the speed of the sequence. Accuracies of expression judgments were derived from the rated intensities, and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, that was responsible for these effects. These results suggest that representations of basic expressions of emotion encode dynamic as well as static properties.
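
The frame-count manipulation can be illustrated with a minimal morphing sketch. Real expression morphs also warp facial landmark geometry, but linear pixel blending between the neutral and peak images is enough to show how 6-, 26-, and 101-frame sequences differ only in temporal granularity (toy images; an assumption-level sketch, not the authors' morphing software).

```python
import numpy as np

def morph_sequence(neutral_img, peak_img, n_frames):
    """Return n_frames images blending linearly from neutral to peak."""
    alphas = np.linspace(0.0, 1.0, n_frames)
    return [(1 - a) * neutral_img + a * peak_img for a in alphas]

neutral = np.zeros((128, 128), dtype=float)  # toy grayscale images
peak = np.ones((128, 128), dtype=float)
for n in (6, 26, 101):                       # fast, medium, slow
    seq = morph_sequence(neutral, peak, n)
    print(n, len(seq))                       # same endpoints, finer steps
```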


Perception, 1997, Vol 26(1 Suppl), pp. 297-297.
Author(s): Y Osada, Y Nagasaka, R Yamazaki

We recorded eye movements by the method of corneal reflection while ten subjects viewed schematic line-drawn faces. Each subject viewed different emotional faces: happy, angry, sad, disgusted, interested, frightened, and surprised. We measured subjects' judgements in terms of percentage correct and reaction time. The schematic faces were composed of the face outline contour and of the brow, eyes, nose, and mouth, each of which could be modified to produce particular expressions. By masking parts of the face, we examined which features had the greatest effects on judgements of emotion. Subjects always made a saccade to the eyes and fixated them even when the eyes were not important for the judgement. They also made a saccade to the centre of the face and fixated it even when only the mouth was presented. Presenting only the brow decreased correct judgements of ‘surprise’ but played an important role in the ‘sad’ judgement. The ‘angry’ judgement depended significantly on the brow and mouth, and the eyes contributed greatly to the ‘disgusted’ judgement. These results suggest that judgements of facial expressions of emotion can be strongly affected by each part of the schematic face. The concentration of saccades on the centre of the face suggests that the ‘configuration balance’ of the face is also likely to be important.
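
Scoring where subjects looked amounts to assigning each fixation to a facial region of interest. The sketch below shows that bookkeeping for a schematic face; the region boxes are hypothetical coordinates for illustration, not taken from the study.

```python
# Hypothetical region-of-interest boxes (x0, y0, x1, y1) on a 128x128
# schematic face; the first matching region wins.
REGIONS = {
    "brow": (30, 25, 98, 38),
    "left_eye": (30, 40, 55, 55),
    "right_eye": (73, 40, 98, 55),
    "nose": (55, 56, 73, 75),
    "mouth": (45, 80, 83, 95),
}

def region_of(x: float, y: float) -> str:
    """Assign a fixation at (x, y) to the first region box containing it."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

fixations = [(40, 48), (64, 60), (60, 88), (10, 10)]
print([region_of(x, y) for x, y in fixations])
# -> ['left_eye', 'nose', 'mouth', 'other']
```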


2018, Vol 115(14), pp. 3581-3586.
Author(s): Carlos F. Benitez-Quiroz, Ramprakash Srinivasan, Aleix M. Martinez

Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles in combinations generally called action units. However, the surface of the face is also innervated with a large network of blood vessels, and blood-flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address two questions. Are observable facial colors consistent within, and differential between, emotion categories and positive versus negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism for the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis: people successfully decode emotion using these color features, even in the absence of any facial muscle activation, and this color signal is independent of that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion in which facial color is an effective mechanism for visually transmitting and decoding emotion.
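
The color hypothesis suggests a simple feature representation: mean color within each face region, expressed along opponent red-green and yellow-blue axes where blood-flow changes are most visible. The sketch below is a simplified, assumption-level version of such features (toy image and masks; not the authors' exact pipeline).

```python
import numpy as np

def opponent_channels(rgb):
    """rgb: (H, W, 3) float image in [0, 1] -> (H, W, 2) opponent channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                # red-green axis
    yb = 0.5 * (r + g) - b    # yellow-blue axis
    return np.stack([rg, yb], axis=-1)

def region_color_features(rgb, masks):
    """Mean opponent color within each boolean region mask -> feature vector."""
    opp = opponent_channels(rgb)
    return np.concatenate([opp[m].mean(axis=0) for m in masks])

face = np.random.rand(64, 64, 3)                 # toy face image
masks = [np.zeros((64, 64), bool) for _ in range(3)]
for i, m in enumerate(masks):
    m[i * 20:(i + 1) * 20, :] = True             # toy horizontal-band regions
print(region_color_features(face, masks).shape)  # (6,) = 3 regions x 2 channels
```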


2018.
Author(s): Mariana R. Pereira, Tiago O. Paiva, Fernando Barbosa, Pedro R. Almeida, Eva C. Martins, et al.

Typicality, or averageness, is one of the key features that influence face evaluation, but the role of this property in the perception of facial expressions of emotion is still not fully understood. Typical faces are usually considered more pleasant and trustworthy, and neuroimaging results suggest that typicality modulates amygdala and fusiform activation, influencing face perception. At the same time, there is evidence that arousal is a key affective feature that modulates neural reactivity to emotional expressions. It therefore remains unclear whether the neural effects of typicality depend on altered perceptions of affect from facial expressions, or whether typicality and affect modulate face processing independently. The goal of this work was to dissociate the effects of typicality and of the affective properties valence and arousal in electrophysiological responses and self-reported ratings across several facial expressions of emotion. Two ERP components relevant to face processing, the N170 and the vertex positive potential (VPP), were measured and complemented by subjective ratings of typicality, valence, and arousal in a sample of 30 healthy young adults (21 female). The results point to a modulation of the electrophysiological responses by arousal, regardless of the typicality or valence of the face. These findings suggest that previous findings of neural responses to typicality may be better explained by accounting for the subjective perception of arousal in facial expressions.
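
For readers unfamiliar with the measurement step, the sketch below shows how N170 and VPP peaks are typically quantified: find the signed extremum of the average waveform inside a component window (a negative occipito-temporal peak around 130-200 ms for the N170, a positive vertex peak for the VPP). Sampling rate, windows, and the toy waveforms are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def peak_in_window(erp, times, t0, t1, polarity):
    """Return (latency, amplitude) of the signed peak within [t0, t1] s."""
    sel = (times >= t0) & (times <= t1)
    seg, seg_t = erp[sel], times[sel]
    idx = np.argmin(seg) if polarity == "neg" else np.argmax(seg)
    return seg_t[idx], seg[idx]

fs = 500.0
times = np.arange(-0.1, 0.5, 1 / fs)  # epochs from -100 to 500 ms
# Toy subject-average waveforms (volts): an N170-like dip at an
# occipito-temporal site and a VPP-like bump at the vertex.
erp_ot = -3e-6 * np.exp(-((times - 0.165) / 0.02) ** 2)
erp_cz = 4e-6 * np.exp(-((times - 0.175) / 0.02) ** 2)

print(peak_in_window(erp_ot, times, 0.13, 0.20, "neg"))  # N170 latency/amplitude
print(peak_in_window(erp_cz, times, 0.14, 0.21, "pos"))  # VPP latency/amplitude
```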

