Dynamic Properties Influence the Perception of Facial Expressions

Perception ◽  
10.1068/p3131 ◽  
2001 ◽  
Vol 30 (7) ◽  
pp. 875-887 ◽  
Author(s):  
Miyuki Kamachi ◽  
Vicki Bruce ◽  
Shigeru Mukaida ◽  
Jiro Gyoba ◽  
Sakiko Yoshikawa ◽  
...  

Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences which changed the face from neutral to a peak expression through different numbers of intervening intermediate stages, to create fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was more accurately identified when slow sequences were shown. Happiness, and to some extent surprise, were better identified from faster sequences, while anger was most accurately detected from sequences of medium pace. In experiment 2 we used an intensity-rating task and static images as well as dynamic ones to examine whether the effects were due to the total time of the displays or to the speed of the sequence. Accuracies of expression judgments were derived from the rated intensities, and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, which was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties.


Perception ◽  
2021 ◽  
pp. 030100662110270 ◽  
Author(s):  
Kennon M. Sheldon ◽  
Ryan Goffredi ◽  
Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower in “a pleasant social smile” and much higher in “a merely neutral expression,” compared with unmasked SSs. Essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than the unmasked versions.



2015 ◽  
Vol 220-221 ◽  
pp. 3-8 ◽  
Author(s):  
Wacław Banaś ◽  
Aleksander Gwiazda ◽  
Krzysztof Herbuś ◽  
Gabriel Kost ◽  
Piotr Ociepka ◽  
...  

A simulator of the behaviour of a disabled person driving a car is an especially useful piece of equipment. Mainly, the hands and the face of the driver are observed in order to detect facial expressions and slow movements of the head and eyes, which, according to the description of the simulation, indicate disturbances of concentration and, in extreme cases, may lead to nausea or even loss of consciousness [1]. Therefore, in the configuration phase of the control system, the need to maintain high standards of safety was taken into consideration. The main problem described in the paper was measuring the acceleration and frequency of vibration during operation of the simulator [2, 3, 4]. This analysis will help to determine whether, under particular circumstances during crash simulations, the acceleration limit is exceeded while the driver still feels its effects. It is also important to examine the frequency of vibrations, which during long simulator sessions can cause nausea, dizziness, or loss of consciousness. This analysis is part of the widely applied CAx analysis performed using special computer platforms that should be properly organized [5], and it helps to expand the range of investigations [6].



Psihologija ◽  
2011 ◽  
Vol 44 (1) ◽  
pp. 5-22 ◽  
Author(s):  
Ian Thornton ◽  
Emma Mullins ◽  
Kara Banahan

The face-inversion effect (FIE) refers to increased response times or error rates for faces that are presented upside-down relative to those seen in a canonical, upright orientation. Here we report one situation in which this FIE can be amplified when observers are shown dynamic facial expressions, rather than static facial expressions. In two experiments observers were asked to assign gender to a random sequence of un-degraded static or moving faces. Each face was seen both upright and inverted. For static images, this task led to little or no effect of inversion. For moving faces, the cost of inversion was a response time increase of approximately 100 ms relative to upright. Motion thus led to a disadvantage in the context of inversion. The fact that such motion could not be ignored in favour of available form cues suggests that dynamic processing may be mandatory. In two control experiments a difference between static and dynamic inversion was not observed for whole-body stimuli or for human-animal decisions. These latter findings suggest that the processing of upside-down movies is not always more difficult for the visual system than the processing of upside-down static images.



2018 ◽  
Vol 11 (1) ◽  
pp. 5-34 ◽  
Author(s):  
V.A. Barabanschikov ◽  
A.V. Zhegallo

A comparison of the parameters of oculomotor activity during the perception of static and dynamic images of faces was made. It is shown that in both static and dynamic conditions the trajectory of eye movement is determined by the internal structure of the face and the functional connections between its facial areas. Differences were found at the level of individual parameters of oculomotor activity: the duration of examination of face zones, the duration of fixations, and the amplitude of saccades. Routes for viewing static images are fully cyclic in nature; recurrent fixations in the same zone of interest are poorly expressed, and their contribution to the overall structure of movements is negligible. When dynamic images are perceived, the viewing routes have a degenerate, partially reduced character determined by the current dynamics of facial expressions, and the contribution of repeated fixations in the same zone of interest increases.



1998 ◽  
Vol 9 (4) ◽  
pp. 270-276 ◽  
Author(s):  
Kari Edwards

Results of studies reported here indicate that humans are attuned to temporal cues in facial expressions of emotion. The experimental task required subjects to reproduce the actual progression of a target person's spontaneous expression (i.e., onset to offset) from a scrambled set of photographs. Each photograph depicted a segment of the expression that corresponded to approximately 67 ms in real time. Results of two experiments indicated that (a) individuals could detect extremely subtle dynamic cues in a facial expression and could utilize these cues to reproduce the proper temporal progression of the display at above-chance levels of accuracy; (b) women performed significantly better than men on the task designed to assess this ability; (c) individuals were most sensitive to the temporal characteristics of the early stages of an expression; and (d) accuracy was inversely related to the amount of time allotted for the task. The latter finding may reflect the relative involvement of (error-prone) cognitively mediated or strategic processes in what is normally a relatively automatic, nonconscious process.



2004 ◽  
Vol 15 (1-2) ◽  
pp. 23-34 ◽  
Author(s):  
Manas K. Mandal ◽  
Nalini Ambady

Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals.



Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 297-297
Author(s):  
Y Osada ◽  
Y Nagasaka ◽  
R Yamazaki

We recorded eye movements by the method of corneal reflection while ten subjects viewed schematic faces drawn by lines. Each subject viewed different emotional faces: happy, angry, sad, disgusted, interested, frightened, and surprised. We measured the subject's judgements in terms of percentage ‘correct’ and reaction time. Schematic faces were composed of the face outline contours and of the brow, eyes, nose, and mouth which could all be modified to produce particular expressions. By masking parts of the face, we examined which features would have the greatest effects on judgements of emotion. Subjects always gave a saccade to the eyes and fixated even when the eyes were not important for the judgement. They also gave a saccade to the centre of the face and fixated it even when only the mouth was presented. The presentation of only the brow decreased the correct rate on the expression of ‘surprise’ but played an important role in the ‘sad’ judgement. The ‘angry’ judgement depended significantly on the brow and mouth. The eyes contributed greatly to the ‘disgusted’ judgement. These results suggest that the judgement of facial expressions of emotion can be strongly affected by each part of the schematic face. The concentration of saccades on the centre of the face suggests that the ‘configuration balance’ of the face is also likely to be important.



2018 ◽  
Vol 115 (14) ◽  
pp. 3581-3586 ◽  
Author(s):  
Carlos F. Benitez-Quiroz ◽  
Ramprakash Srinivasan ◽  
Aleix M. Martinez

Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.



2018 ◽  
Author(s):  
Mariana R. Pereira ◽  
Tiago O. Paiva ◽  
Fernando Barbosa ◽  
Pedro R. Almeida ◽  
Eva C. Martins ◽  
...  

Typicality, or averageness, is one of the key features that influences face evaluation, but the role of this property in the perception of facial expressions of emotion is still not fully understood. Typical faces are usually considered more pleasant and trustworthy, and neuroimaging results suggest typicality modulates amygdala and fusiform activation, influencing face perception. At the same time, there is evidence that arousal is a key affective feature that modulates neural reactivity to emotional expressions. In this sense, it remains unclear whether the neural effects of typicality depend on altered perceptions of affect from facial expressions or whether the effects of typicality and affect modulate face processing independently. The goal of this work was to dissociate the effects of typicality and affective properties, namely valence and arousal, in electrophysiological responses and self-reported ratings across several facial expressions of emotion. Two ERP components relevant for face processing were measured, the N170 and the Vertex Positive Potential (VPP), complemented by subjective ratings of typicality, valence, and arousal, in a sample of 30 healthy young adults (21 female). The results point to a modulation of the electrophysiological responses by arousal, regardless of the typicality or valence properties of the face. These findings suggest that previous findings of neural responses to typicality may be better explained by accounting for the subjective perception of arousal in facial expressions.


