Face Value and Cheap Talk: How Smiles Can Increase or Decrease the Credibility of Our Words

2018 ◽  
Vol 16 (4) ◽  
pp. 147470491881440 ◽  
Author(s):  
Lawrence Ian Reed ◽  
Rachel Stratton ◽  
Jessica D. Rambeas

How do our facial expressions affect the credibility of our words? We tested whether smiles, either uninhibited or inhibited, affect the credibility of a written statement. Participants viewed a confederate partner displaying a neutral expression, non-Duchenne smile, Duchenne smile, or controlled smile, paired with a written statement, and then made a behavioral decision based on how credible they perceived the confederate’s statement to be. In Experiment 1, compared to a neutral expression, participants were more likely to believe the statement when it was paired with a deliberate Duchenne smile and less likely to believe it when it was paired with a deliberate controlled smile. Experiment 2 replicated these findings with spontaneously emitted expressions. Together, these findings provide evidence that uninhibited facial expressions can increase the credibility of accompanying statements, while inhibited ones can decrease it.

Perception ◽  
2021 ◽  
pp. 030100662110270
Author(s):  
Kennon M. Sheldon ◽  
Ryan Goffredi ◽  
Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower on “a pleasant social smile” and much higher on “a merely neutral expression,” compared with unmasked SSs. Essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than the unmasked versions.


Author(s):  
Izabela Krejtz ◽  
Krzysztof Krejtz ◽  
Katarzyna Wisiecka ◽  
Marta Abramczyk ◽  
Michał Olszanowski ◽  
...  

Abstract The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
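The stimuli described above morph a neutral expression into a target emotion over a 10-s animation. One standard way to realize such a morph is linear interpolation between two sets of facial landmark coordinates. The sketch below is a minimal illustration under that assumption; the landmark values and function name are hypothetical, not the stimulus-generation code used in the study.

```python
import numpy as np

def morph_frames(neutral, target, duration_s=10.0, fps=25):
    """Linearly interpolate facial landmarks from a neutral
    expression to a full-intensity target expression."""
    n_frames = int(duration_s * fps)
    # t runs from 0 (fully neutral) to 1 (full-intensity emotion)
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * neutral + t * target for t in ts]

# Hypothetical 2-D landmark coordinates (e.g., the two mouth corners)
neutral = np.array([[30.0, 60.0], [70.0, 60.0]])
happy   = np.array([[27.0, 55.0], [73.0, 55.0]])  # corners raised and widened

frames = morph_frames(neutral, happy)
print(len(frames))   # 250 frames for a 10-s clip at 25 fps
```

Each frame moves the landmarks a constant step toward the target, so the perceived emotion intensity grows smoothly over the animation, which is what allows response time in the recognition task to index how much expressive information a participant needed.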


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Kris Evers ◽  
Inneke Kerkhof ◽  
Jean Steyaert ◽  
Ilse Noens ◽  
Johan Wagemans

Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eye region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six- to eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.


2013 ◽  
Vol 113 (1) ◽  
pp. 199-216 ◽  
Author(s):  
Marcella L. Woud ◽  
Eni S. Becker ◽  
Wolf-Gero Lange ◽  
Mike Rinck

A growing body of evidence shows that the prolonged execution of approach movements towards stimuli and avoidance movements away from them affects their evaluation. However, there has been no systematic investigation of such training effects. Therefore, the present study compared approach-avoidance training effects on variously valenced representations of neutral (Experiment 1, N = 85), angry (Experiment 2, N = 87), or smiling facial expressions (Experiment 3, N = 89). The face stimuli were shown on a computer screen, and by means of a joystick, participants pulled half of the faces closer (a positive approach movement) and pushed the other half away (a negative avoidance movement). Only implicit evaluations of neutral expressions were affected by the training procedure. The boundary conditions of such approach-avoidance training effects are discussed.


Author(s):  
Jenni Anttonen ◽  
Veikko Surakka ◽  
Mikko Koivuluoma

The aim of the present paper was to study heart rate changes during video stimulation depicting two actors (male and female) producing dynamic facial expressions of happiness, sadness, and a neutral expression. We measured ballistocardiographic emotion-related heart rate responses with an unobtrusive measurement device called the EMFi chair. Ratings of subjective responses to the video stimuli were also collected. The results showed that the video stimuli evoked significantly different ratings of emotional valence and arousal. Heart rate decelerated in response to all stimuli, and the deceleration was strongest during negative stimulation. Furthermore, stimuli from the male actor evoked significantly larger arousal ratings and heart rate responses than stimuli from the female actor. The results also showed differential responding between female and male participants. The present results support the hypothesis that heart rate decelerates in response to films depicting dynamic negative facial expressions. They also support the idea that the EMFi chair can be used to measure emotional responses from people while they are interacting with technology.


Perception ◽  
10.1068/p3319 ◽  
2003 ◽  
Vol 32 (7) ◽  
pp. 813-826 ◽  
Author(s):  
Frank E Pollick ◽  
Harold Hill ◽  
Andrew Calder ◽  
Helena Paterson

We examined how the recognition of facial emotion was influenced by manipulation of both spatial and temporal properties of 3-D point-light displays of facial motion. We started with the measurement of 3-D position of multiple locations on the face during posed expressions of anger, happiness, sadness, and surprise, and then manipulated the spatial and temporal properties of the measurements to obtain new versions of the movements. In two experiments, we examined recognition of these original and modified facial expressions: in experiment 1, we manipulated the spatial properties of the facial movement, and in experiment 2 we manipulated the temporal properties. The results of experiment 1 showed that exaggeration of facial expressions relative to a fixed neutral expression resulted in enhanced ratings of the intensity of that emotion. The results of experiment 2 showed that changing the duration of an expression had a small effect on ratings of emotional intensity, with a trend for expressions with shorter durations to have lower ratings of intensity. The results are discussed within the context of theories of encoding as related to caricature and emotion.
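The spatial manipulation in experiment 1 exaggerates an expression relative to a fixed neutral expression, i.e., a motion caricature: landmark displacements from the neutral face are scaled by a gain factor. The sketch below illustrates that operation under that assumption; the landmark values and function name are hypothetical, not the authors' actual stimulus code.

```python
import numpy as np

def exaggerate(expression, neutral, gain):
    """Exaggerate an expression relative to a fixed neutral face by
    scaling landmark displacements away from neutral.
    gain = 1.0 reproduces the original expression; gain > 1.0
    produces a caricature, gain < 1.0 an attenuated version."""
    return neutral + gain * (expression - neutral)

# Hypothetical landmark positions (arbitrary units)
neutral = np.array([[30.0, 60.0], [70.0, 60.0]])
happy   = np.array([[27.0, 55.0], [73.0, 55.0]])

original   = exaggerate(happy, neutral, 1.0)  # unchanged
caricature = exaggerate(happy, neutral, 1.5)  # displacements scaled by 50%
```

Applying the same gain to every tracked location preserves the spatial pattern of the movement while amplifying its extent, which is why the abstract reports enhanced intensity ratings rather than a change in which emotion is perceived.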


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0246001
Author(s):  
Patricia Fernández-Sotos ◽  
Arturo S. García ◽  
Miguel A. Vicente-Querol ◽  
Guillermo Lahera ◽  
Roberto Rodriguez-Jimenez ◽  
...  

The ability to recognise facial emotions is essential for successful social interaction. The most common stimuli used when evaluating this ability are photographs. Although these stimuli have proved to be valid, they do not offer the level of realism that virtual humans have achieved. The objective of the present paper is the validation of a new set of dynamic virtual faces (DVFs) that mimic the six basic emotions plus the neutral expression. The faces are prepared to be observed with low and high dynamism, and from front and side views. For this purpose, 204 healthy participants, stratified by gender, age and education level, were recruited to assess their facial affect recognition with the set of DVFs. Response accuracy was compared with the already validated Penn Emotion Recognition Test (ER-40). The overall accuracy in the identification of emotions was higher for the DVFs (88.25%) than for the ER-40 faces (82.60%). The hit rate for each DVF emotion was high, especially for the neutral expression and happiness. No statistically significant differences were found regarding gender, nor between younger adults and adults over 60 years. Moreover, hit rates increased for avatar faces showing greater dynamism, and for front views of the DVFs compared with their profile presentations. Overall, DVFs are as valid as standardised natural faces for accurately recreating human-like facial expressions of emotions.


2018 ◽  
Vol 32 (4) ◽  
pp. 160-171 ◽  
Author(s):  
Léonor Philip ◽  
Jean-Claude Martin ◽  
Céline Clavel

Abstract. People react with Rapid Facial Reactions (RFRs) when presented with human facial emotional expressions. Recent studies show that RFRs are not always congruent with emotional cues, and the processes underlying them are still being debated. In the study described herein, we manipulated the context of perception and examined its influence on RFRs, using a subliminal affective priming task with emotional labels. Facial electromyography (EMG) (frontalis, corrugator, zygomaticus, and depressor) was recorded while participants observed static facial expressions (joy, fear, anger, sadness, and a neutral expression) preceded or not preceded by a subliminal word (JOY, FEAR, ANGER, SADNESS, or NEUTRAL). For the negative facial expressions, when the priming word was congruent with the facial expression, participants displayed congruent RFRs (mimicry); when it was incongruent, mimicry was suppressed. RFRs to happiness were not affected by the priming word. RFRs thus appear to be modulated by the context and by the type of emotion presented via facial expressions.


2015 ◽  
Vol 11 (2) ◽  
pp. 183-196 ◽  
Author(s):  
Maria Guarnera ◽  
Zira Hichy ◽  
Maura I. Cascio ◽  
Stefano Carrubba

This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust, and neutral expressions from facial information. By investigating children’s performance in detecting these emotions from specific face regions, we wanted to know whether children would show differences in recognizing these expressions from the upper or lower face, and whether any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6- to 7-year-old children was selected. Participants were asked to recognize emotions in a labeling task with three stimulus types (region of the eyes, region of the mouth, and full face). The findings indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for the neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children were also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there was no female advantage in emotion recognition; the results indicate a significant ‘gender × face region’ interaction only for anger and neutral expressions.


2019 ◽  
Vol 1 (1) ◽  
pp. 28
Author(s):  
Ferri Susanto

This research, titled “An Educational Perspective: An Analysis of Facial Expressions in Internet Jokes on Social Media,” analyzes the facial expressions of common internet meme faces: “Derp Face”, “Derpina Face”, “Troll Face”, “Fuuuu Face”, “Forever Alone”, “LOL Face”, “Me Gusta Face”, “Okay Face”, and “Poker Face”. The data were limited to February 2017. The design of this research was descriptive: the facial expressions were described by analyzing, interpreting, and drawing conclusions from the data. The analysis concluded that: 1) “Derp Face” indicates a neutral expression; 2) “Derpina Face” indicates a neutral expression; 3) “Troll Face” indicates gladness; 4) “Fuuuu Face” indicates anger; 5) “Forever Alone” indicates sadness and loneliness; 6) “LOL Face” indicates gladness; 7) “Me Gusta” indicates liking; 8) “Okay Face” indicates sadness; 9) “Poker Face” indicates no specific emotion. Finally, the researcher suggests that the study of semiotics can provide understanding and knowledge about signs. The results of this research could serve as a reference for how facial expressions can be analyzed through the eyebrows, forehead, eyes, nose, cheeks, and skin, and as a reference for future researchers.

