The Influence of Postural Emotion Cues on Implicit Trait Judgements

2020 ◽  
Author(s):  
Tamara Van Der Zant ◽  
Jessica Reid ◽  
Catherine J. Mondloch ◽  
Nicole L. Nelson

Perceptions of traits (such as trustworthiness or dominance) are influenced by the emotion displayed on a face. For instance, the same individual is reported as more trustworthy when they look happy than when they look angry. This overextension of emotional expressions has been shown with facial expressions, but whether the phenomenon also occurs when viewing postural expressions was unknown. We sought to examine how expressive behaviour of the body influences judgements of traits and how sensitivity to this cue develops. In the context of a storybook, adults (N = 35) and children (aged 5 to 8 years; N = 60) selected one of two partners to help face a challenge. The challenges required either a trustworthy or a dominant partner. Participants chose between a partner with an emotional (happy/angry) face and neutral body or one with a neutral face and emotional body. As predicted, happy facial expressions were preferred over neutral ones when selecting a trustworthy partner, and angry postural expressions were preferred over neutral ones when selecting a dominant partner. Children's performance was not adult-like on most tasks. The results demonstrate that emotional postural expressions can also influence judgements of others' traits, but that postural influence on trait judgements develops throughout childhood.
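The abstract does not name its statistical test; below is a minimal sketch of one plausible analysis of the two-alternative partner choices, a binomial test against chance. The counts are hypothetical, not the study's data.

```python
# A plausible analysis sketch for two-alternative forced-choice data:
# test the proportion of participants choosing the happy-faced partner
# against chance (0.5). Counts are hypothetical, not the paper's.
from scipy.stats import binomtest

n_adults = 35                 # adult sample size reported in the abstract
happy_face_choices = 28       # hypothetical count choosing the happy-faced partner

result = binomtest(happy_face_choices, n_adults, p=0.5)  # two-sided by default
print(f"choice proportion = {happy_face_choices / n_adults:.2f}, "
      f"p = {result.pvalue:.4f}")
```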

2018 ◽  
Vol 11 (4) ◽  
pp. 50-69 ◽  
Author(s):  
V.A. Barabanschikov ◽  
O.A. Korolkova ◽  
E.A. Lobodinskaya

We studied the perception of human facial emotional expressions during step-function stroboscopic presentation of changing facial movements. Consecutive stages of each of the six basic facial expressions were presented to the participants: neutral face (300 ms), expression of medium intensity (10-40 ms), intense expression (30-120 ms), expression of medium intensity (10-40 ms), neutral face (100 ms). An alternative forced-choice task was used to categorize the facial expressions. The results were compared to previous studies (Barabanschikov, Korolkova, Lobodinskaya, 2015, 2016) conducted using the same paradigm but with a boxcar-function change of the expression: neutral face, intense expression, neutral face. We found that the dynamics of facial expression recognition, as well as the errors and recognition times, are almost identical under boxcar- and step-function presentation. One factor influencing the recognition rate is the ratio of the presentation times of the static (neutral) and changing (expressive) aspects of the stimulus. Under suboptimal conditions of facial expression perception (minimal presentation time of 10+30+10 ms and reduced intensity of expressions) we observed stroboscopic sensitization, a previously described phenomenon of enhanced recognition of low-attractive expressions (disgust, sadness, fear, and anger) that we had earlier found under boxcar-function presentation. We confirmed that real and apparent motion influence the recognition of basic facial emotional expressions in similar ways.
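To make the presentation schedule concrete, here is a minimal sketch that builds one step-function trial from the timings quoted above. The specific medium and intense durations are sampled from assumed 10 ms or 30 ms steps, and present() is a hypothetical display routine, so the sketch only prints the schedule.

```python
# A minimal sketch of the step-function trial schedule described above.
# Durations are in milliseconds; the sampled values are assumptions.
import random

def step_function_trial(expression):
    """Build one trial's (image, duration-in-ms) schedule."""
    medium = random.choice([10, 20, 30, 40])    # medium-intensity stage, 10-40 ms
    intense = random.choice([30, 60, 90, 120])  # intense stage, 30-120 ms
    return [
        ("neutral", 300),
        (f"{expression}_medium", medium),
        (f"{expression}_intense", intense),
        (f"{expression}_medium", medium),
        ("neutral", 100),
    ]

for image, duration_ms in step_function_trial("anger"):
    print(f"{image}: {duration_ms} ms")  # stand-in for present(image, duration_ms)
```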


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254905 ◽
Author(s):  
Satoshi Yagi ◽  
Yoshihiro Nakata ◽  
Yutaka Nakamura ◽  
Hiroshi Ishiguro

Expressing emotions through various modalities is a crucial function not only for humans but also for robots. The mapping from facial expressions to basic emotions is widely used in research on robot emotional expression. This method claims that there are specific facial muscle activation patterns for each emotional expression and that people perceive these emotions by reading those patterns. However, recent research on human behavior reveals that some emotional expressions, such as the emotion "intense", are difficult to judge as positive or negative from the facial expression alone. It had not been investigated whether robots can also express ambiguous facial expressions with no clear valence, or whether adding body expressions can make the facial valence clearer to humans. This paper shows that an ambiguous facial expression of an android can be perceived more clearly by viewers when body postures and movements are added. We conducted three experiments as online surveys among North American residents, with 94, 114, and 114 participants respectively. In Experiment 1, by calculating the entropy of valence judgments, we found that the facial expression "intense" was difficult to judge as positive or negative when participants were shown the facial expression alone. In Experiments 2 and 3, using ANOVA, we confirmed that participants were better at judging the facial valence when shown the android's whole body, even though the facial expression was the same as in Experiment 1. These results suggest that robots' facial and body expressions should be designed jointly to achieve better communication with humans. For smoother cooperative human-robot interaction, such as education by robots, emotion expressions conveyed through a combination of the robot's face and body are necessary to convey the robot's intentions or desires to humans.
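A minimal sketch of the entropy measure mentioned for Experiment 1: Shannon entropy over the positive/negative judgment counts for an expression, so a 50/50 split (maximal ambiguity) yields 1 bit. The counts are hypothetical.

```python
# Entropy of a binary valence-judgment distribution; counts are hypothetical.
import numpy as np

def shannon_entropy(counts):
    """Entropy (bits) of a response-count distribution; 1 bit is maximal
    ambiguity for a binary positive/negative judgment."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # drop empty categories before taking logs
    return float(-(p * np.log2(p)).sum())

print(shannon_entropy([47, 47]))  # "intense": 1.00 bit, ambiguous valence
print(shannon_entropy([90, 4]))   # a clear expression: ~0.26 bits
```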


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic, the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (the stimulus type used by Tomasik et al., 2009) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.
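A minimal sketch of how a BCE could be computed from dual-task data: Task 1 responses should be faster when the Task 2 face corresponds to the Task 1 response category, if the face is processed before the central bottleneck. The response times are hypothetical.

```python
# Backward correspondence effect: Task 1 RT difference between
# noncorresponding and corresponding Task 2 stimuli. RTs are hypothetical.
import numpy as np

rt_task1_corresponding = np.array([612, 598, 640, 575, 630])     # ms
rt_task1_noncorresponding = np.array([655, 661, 690, 642, 668])  # ms

bce = rt_task1_noncorresponding.mean() - rt_task1_corresponding.mean()
print(f"BCE = {bce:.0f} ms")  # positive BCE: Task 2 face processed pre-bottleneck
```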


Perception ◽  
2016 ◽  
Vol 46 (5) ◽  
pp. 624-631 ◽  
Author(s):  
Andreas M. Baranowski ◽  
H. Hecht

Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment, in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added bowl of soup made the face appear hungry). This interaction effect has been dubbed the "Kuleshov effect." In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, featuring either happy music, sad music, or no music at all. This was crossed with happy, sad, and neutral facial expressions. We found that the music significantly influenced participants' emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought. They alter the evaluation of film scenes and can give meaning to ambiguous situations.
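A minimal sketch of the 3 (music) x 3 (facial expression) crossed design, analyzed with a repeated-measures ANOVA. The ratings are simulated with an assumed music effect, and one mean rating per participant per cell is assumed to keep the design balanced.

```python
# Simulated data for a 3x3 within-subjects design and a repeated-measures
# ANOVA; the effect sizes are assumptions, not the study's results.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(30):                      # thirty participants, as in the study
    for music in ["happy", "sad", "none"]:
        for face in ["happy", "sad", "neutral"]:
            shift = {"happy": 0.8, "sad": -0.8, "none": 0.0}[music]  # assumed effect
            rows.append({"subject": subject, "music": music, "face": face,
                         "rating": rng.normal(loc=shift, scale=1.0)})
df = pd.DataFrame(rows)

# One observation per participant per cell keeps the design balanced.
res = AnovaRM(df, depvar="rating", subject="subject", within=["music", "face"]).fit()
print(res)
```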


2012 ◽  
Vol 25 (0) ◽  
pp. 46-47 ◽
Author(s):  
Kazumichi Matsumiya

Adaptation to a face belonging to a facial category, such as an expression, causes a subsequently viewed neutral face to be perceived as belonging to the opposite facial category. This is referred to as the face aftereffect (FAE) (Leopold et al., 2001; Rhodes et al., 2004; Webster et al., 2004). The FAE is generally thought of as a visual phenomenon. However, recent studies have shown that humans can haptically recognize a face (Kilgour and Lederman, 2002; Lederman et al., 2007). Here, I investigated whether FAEs could occur in the haptic perception of faces. Three types of facial expressions (happy, sad, and neutral) were generated using computer-graphics software, and three-dimensional masks of these faces were made from epoxy-cured resin for use in the experiments. An adaptation facemask was positioned on the left side of a table in front of the participant, and a test facemask was placed on the right. During adaptation, participants haptically explored the adaptation facemask with their eyes closed for 20 s, after which they haptically explored the test facemask for 5 s. Participants were then requested to classify the test facemask as either happy or sad. The experiment was performed under two adaptation conditions: (1) adaptation to a happy facemask and (2) adaptation to a sad facemask. In both cases, the expression of the test facemask was neutral. The results indicate that adaptation to a haptic face with a specific facial expression causes a subsequently touched neutral face to be perceived as having the opposite facial expression, suggesting that FAEs can also be observed in the haptic perception of faces.
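A minimal sketch of how the haptic aftereffect could be quantified: the proportion of "happy" classifications of the neutral test mask after sad adaptation, minus that proportion after happy adaptation. The counts are hypothetical.

```python
# Aftereffect magnitude from classification proportions; counts are hypothetical.
n_trials = 20
happy_responses_after_happy_adapt = 5    # neutral mask judged "happy" after happy adaptation
happy_responses_after_sad_adapt = 15     # neutral mask judged "happy" after sad adaptation

p_after_happy = happy_responses_after_happy_adapt / n_trials
p_after_sad = happy_responses_after_sad_adapt / n_trials
aftereffect = p_after_sad - p_after_happy   # positive value = repulsive aftereffect
print(f"aftereffect = {aftereffect:.2f}")
```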


2018 ◽  
Author(s):  
Adrienne Wood ◽  
Jared Martin ◽  
Martha W. Alibali ◽  
Paula Niedenthal

Recognition of affect expressed in the face is disrupted when the body expresses an incongruent affect. Existing research has documented such interference for universally recognizable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of facial expressions accompanied by hand gestures and reported the valence of either the face or the hand. Responses were slower and less accurate when the face-hand pairing was incongruent than when it was congruent. We hypothesized that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, a manipulation that disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests that the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but perceivers do not seem to rely more on gestures when sensorimotor face processing is disrupted.
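A minimal sketch of the congruency costs described above: the response-time and accuracy penalties for incongruent face-hand pairings relative to congruent ones. All values are hypothetical.

```python
# Congruency costs in RT and accuracy; all values are hypothetical.
import numpy as np

rt_congruent = np.array([702, 688, 735])      # ms, hypothetical condition means
rt_incongruent = np.array([758, 771, 749])
acc_congruent, acc_incongruent = 0.94, 0.86   # hypothetical mean accuracies

print(f"RT cost = {rt_incongruent.mean() - rt_congruent.mean():.0f} ms")
print(f"accuracy cost = {acc_congruent - acc_incongruent:.2f}")
```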


2015 ◽  
Vol 18 ◽  
Author(s):  
María Verónica Romero-Ferreiro ◽  
Luis Aguado ◽  
Javier Rodriguez-Torresano ◽  
Tomás Palomo ◽  
Roberto Rodriguez-Jimenez

Deficits in facial affect recognition have been repeatedly reported in schizophrenia patients. The hypothesis that this deficit is caused by a poorly differentiated cognitive representation of facial expressions was tested in this study. To this end, the performance of patients with schizophrenia and controls was compared on a new emotion-rating task. This novel approach allowed the participants to rate each facial expression at different times in terms of different emotion labels. Results revealed that patients tended to give higher ratings to emotion labels that did not correspond to the portrayed emotion, especially in the case of negative facial expressions (p < .001, η² = .131). Although patients and controls gave similar ratings when the emotion label matched the facial expression, patients gave higher ratings on trials with "incorrect" emotion labels (ps < .05). Comparison of patients and controls on a summary index of expressive ambiguity showed that patients perceived angry, fearful, and happy faces as more emotionally ambiguous than did the controls (p < .001, η² = .135). These results are consistent with the idea that the cognitive representation of emotional expressions in schizophrenia is characterized by less clear boundaries and a less close correspondence between facial configurations and emotional states.
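The abstract does not define its summary index of expressive ambiguity; below is a minimal sketch of one illustrative construction: normalize a face's mean ratings across emotion labels and take the Shannon entropy, so faces rated highly on several labels score as more ambiguous. The paper's exact index may differ.

```python
# An illustrative ambiguity index over multi-label emotion ratings;
# the ratings are hypothetical and the construction is an assumption.
import numpy as np

def ambiguity_index(ratings):
    """Entropy (bits) of normalized ratings: higher = more diffuse = more ambiguous."""
    p = np.asarray(ratings, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical mean ratings (0-10) for labels: angry, fearful, happy, sad
print(ambiguity_index([8.1, 1.2, 0.4, 1.0]))  # clearly angry face: low entropy
print(ambiguity_index([5.2, 4.8, 1.1, 4.4]))  # diffuse ratings: higher entropy
```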


2018 ◽  
Vol 32 (4) ◽  
pp. 160-171 ◽  
Author(s):  
Léonor Philip ◽  
Jean-Claude Martin ◽  
Céline Clavel

People react with Rapid Facial Reactions (RFRs) when presented with human facial emotional expressions. Recent studies show that RFRs are not always congruent with emotional cues, and the processes underlying RFRs are still being debated. In the study described here, we manipulated the context of perception and examined its influence on RFRs, using a subliminal affective priming task with emotional labels. Facial electromyography (EMG) (frontalis, corrugator, zygomaticus, and depressor) was recorded while participants observed static facial expressions (joy, fear, anger, sadness, and a neutral expression) preceded or not preceded by a subliminal word (JOY, FEAR, ANGER, SADNESS, or NEUTRAL). For the negative facial expressions, when the priming word was congruent with the facial expression, participants displayed congruent RFRs (mimicry). When the priming word was incongruent, we observed a suppression of mimicry. Happiness was not affected by the priming word. RFRs thus appear to be modulated by the context and by the type of emotion presented via facial expressions.
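A minimal sketch of a baseline-corrected EMG mimicry score of the kind such data invite: the mean corrugator amplitude change from a pre-stimulus baseline, compared across congruent and incongruent primes. The amplitude values are hypothetical.

```python
# Baseline-corrected corrugator response to an angry face under congruent
# vs. incongruent subliminal primes; all amplitudes are hypothetical.
import numpy as np

def emg_change(baseline, response):
    """Mean amplitude change from the pre-stimulus baseline."""
    return np.mean(response) - np.mean(baseline)

corrugator_baseline = np.array([1.0, 1.1, 0.9, 1.0])
after_congruent_prime = np.array([1.6, 1.8, 1.7, 1.5])    # ANGER word + angry face
after_incongruent_prime = np.array([1.0, 1.1, 1.0, 0.9])  # JOY word + angry face

print(emg_change(corrugator_baseline, after_congruent_prime))    # mimicry preserved
print(emg_change(corrugator_baseline, after_incongruent_prime))  # mimicry suppressed
```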


2021 ◽  
Author(s):  
Jalil Rasgado-Toledo ◽  
Elizabeth Valles-Capetillo ◽  
Averi Giudicessi ◽  
Magda Giordano

Speakers use a variety of contextual information, such as facial emotional expressions, for the successful transmission of their message. Listeners must decipher the meaning by understanding the intention behind it (Recanati, 1986). A traditional approach to the study of communicative intention has been through speech acts (Escandell, 2006). The objective of the present study was to further the understanding of the influence of facial expressions on the recognition of communicative intention. The study sought to: verify the reliability of facial expression recognition, determine whether there is an association between a facial expression and a category of speech acts, test whether words carry an intentional load independent of the facial expression presented, and test whether facial expressions can modify an utterance's communicative intention, along with the associated neural correlates, using univariate and multivariate approaches. We found that prior observation of facial expressions associated with emotions can modify the interpretation of an assertive utterance that follows the facial expression. The hemodynamic brain response to an assertive utterance was modulated by the preceding facial expression, and the emotion conveyed by that facial expression could be decoded from fluctuations in the brain's hemodynamic response during the presentation of the assertive utterance. Neuroimaging data showed activation of regions involved in language, intentionality, and face recognition during the reading of the utterance. Our results indicate that facial expression is a relevant contextual cue for decoding the intention of an utterance, and that during decoding it engages different brain regions according to the emotion expressed.
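A minimal sketch of the multivariate (decoding) analysis, assuming trial-wise activation patterns as features and a linear classifier with cross-validation. The data here are simulated noise, so accuracy sits at chance; above-chance accuracy on real patterns would indicate that the preceding emotion is decodable.

```python
# Cross-validated decoding of the preceding facial emotion from simulated
# trial-wise response patterns; the data and dimensions are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 200
X = rng.normal(size=(n_trials, n_voxels))   # trial-wise activation patterns (simulated)
y = rng.integers(0, 3, size=n_trials)       # label: emotion of the preceding face

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"decoding accuracy = {scores.mean():.2f} (chance = 0.33)")
```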


2018 ◽  
Vol 11 (2) ◽  
pp. 16-33 ◽  
Author(s):  
A.V. Zhegallo

The study investigates the recognition of emotional facial expressions presented in peripheral vision, with exposure times shorter than the latency of a saccade toward the exposed image. The study showed that recognition under peripheral perception reproduces the same patterns of incorrect-response choices. Mutual misrecognition is common among the facial expressions of fear, anger, and surprise. When recognition conditions worsen, expressions of calm and grief join this complex of mutually confused expressions. The recognition of happiness deserves special attention: it can be misidentified as a different facial expression, but other expressions are never recognized as happiness. Individual recognition accuracy varies from 0.29 to 0.80. A sufficient condition for high recognition accuracy was recognizing the facial expressions in peripheral vision without making a saccade toward the exposed face image.
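A minimal sketch of a confusion-matrix summary consistent with the pattern described: fear, anger, and surprise confused with one another, and happiness sometimes misread but never chosen for other expressions. The matrix itself is hypothetical.

```python
# Per-expression accuracy from a confusion matrix; the counts are hypothetical.
# Rows are presented expressions, columns are responses.
import numpy as np

labels = ["fear", "anger", "surprise", "happiness"]
confusion = np.array([
    [12,  5,  6,  0],   # fear often confused with anger and surprise
    [ 6, 11,  5,  0],
    [ 5,  6, 12,  0],
    [ 2,  1,  2, 18],   # happiness misread, but nothing is read as happiness
])

accuracy = confusion.diagonal() / confusion.sum(axis=1)
for label, acc in zip(labels, accuracy):
    print(f"{label}: {acc:.2f}")
```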

