Suppression of Facial Mimicry of Negative Facial Expressions in an Incongruent Context

2018 ◽  
Vol 32 (4) ◽  
pp. 160-171 ◽  
Author(s):  
Léonor Philip ◽  
Jean-Claude Martin ◽  
Céline Clavel

Abstract. People react with Rapid Facial Reactions (RFRs) when presented with human facial emotional expressions. Recent studies show that RFRs are not always congruent with emotional cues, and the processes underlying them are still debated. In the study described here, we manipulated the context of perception and examined its influence on RFRs, using a subliminal affective priming task with emotional labels. Facial electromyography (EMG) activity (frontalis, corrugator, zygomaticus, and depressor) was recorded while participants observed static facial expressions (joy, fear, anger, sadness, and a neutral expression), each preceded or not preceded by a subliminal word (JOY, FEAR, ANGER, SADNESS, or NEUTRAL). For the negative facial expressions, when the priming word was congruent with the facial expression, participants displayed congruent RFRs (mimicry); when it was incongruent, mimicry was suppressed. Reactions to happy faces were not affected by the priming word. RFRs thus appear to be modulated by the context and by the type of emotion presented via facial expressions.

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 28-28
Author(s):  
A J Calder ◽  
A W Young ◽  
D Rowland ◽  
D R Gibbenson ◽  
B M Hayes ◽  
...  

G Rhodes, S E Brennan, and S Carey (1987 Cognitive Psychology 19 473 – 497) and P J Benson and D I Perrett (1991 European Journal of Cognitive Psychology 3 105 – 135) have shown that computer-enhanced (caricatured) representations of familiar faces are named faster and rated as better likenesses than veridical (undistorted) representations. Here we have applied Benson and Perrett's graphic technique to examine subjects' perception of enhanced representations of photographic-quality facial expressions of basic emotions. To enhance a facial expression, the target face is compared to a norm or prototype face and the differences between the two are exaggerated, producing a caricatured image; reducing the differences results in an anticaricatured image. In experiment 1 we examined the effect of degree of caricature and type of norm on subjects' ratings of ‘intensity of expression’. Three facial expressions (fear, anger, and sadness) were caricatured at seven levels (−50%, −30%, −15%, 0%, +15%, +30%, and +50%) relative to three different norms: (1) an average norm prepared by blending pictures of six different emotional expressions; (2) a neutral-expression norm; and (3) a different-expression norm (eg anger caricatured relative to a happy expression). Irrespective of norm, the caricatured expressions were rated as significantly more intense than the veridical images. Furthermore, for the average and neutral norm sets, the anticaricatures were rated as significantly less intense. We also examined subjects' reaction times to recognise caricatured (−50%, 0%, and +50%) representations of six emotional facial expressions. The caricatured images were identified fastest, followed by the veridical and then the anticaricatured images. Hence the perception of facial expression and identity is facilitated by caricaturing; this has important implications for the mental representation of facial expressions.
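The caricaturing transform the abstract describes is a linear extrapolation of a target face away from a norm face. A minimal sketch on landmark coordinates (illustrative data and function name, not Benson and Perrett's actual software):

```python
import numpy as np

def caricature(target, norm, level):
    """Exaggerate (level > 0) or attenuate (level < 0) the differences
    between a target face and a norm face.

    target, norm: (n_landmarks, 2) arrays of facial landmark coordinates.
    level: e.g. +0.5 for a +50% caricature, -0.5 for a -50% anticaricature;
           0.0 returns the veridical shape unchanged.
    """
    return norm + (1.0 + level) * (target - norm)

# Toy example: a single landmark sitting 2 units to the right of the norm.
norm = np.array([[0.0, 0.0]])
target = np.array([[2.0, 0.0]])

print(caricature(target, norm, 0.0))   # veridical: [[2. 0.]]
print(caricature(target, norm, 0.5))   # +50% caricature: [[3. 0.]]
print(caricature(target, norm, -0.5))  # -50% anticaricature: [[1. 0.]]
```

In a full implementation the same extrapolation would be applied to every landmark and the image texture warped accordingly; the shape arithmetic itself is just this one line.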


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic, the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al., 2009) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2008 ◽  
Vol 103 (1) ◽  
pp. 35-47 ◽  
Author(s):  
Ingo Wegener ◽  
Astrid Wawrzyniak ◽  
Katrin Imbierowicz ◽  
Rupert Conrad ◽  
Jochen Musch ◽  
...  

Attenuated affective processing is hypothesized to play a role in the development and maintenance of obesity. Using an affective priming task measuring automatic affective processing of verbal stimuli, a group of 30 obese participants in a weight-loss program at the Psychosomatic University Clinic Bonn (M age = 48.3, SD = 10.7) was compared with a group of 25 participants of normal weight (M age = 43.6, SD = 12.5). A smaller affective priming effect was observed for participants with obesity, indicating less pronounced reactions to valenced adjectives. The generally reduced affective processing in obese participants was discussed as a possible factor in the etiology of obesity. Individuals who generally show less pronounced affective reactions to a given stimulus may also react with less negative affect when confronted with weight gain or less positive affect when weight is lost. Consequently, they could be expected to be less motivated to stop overeating or to engage in dieting and will have a higher risk of becoming or staying obese.
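In an affective priming task of this kind, the priming effect is typically computed as the reaction-time cost of incongruent relative to congruent prime–target pairs. A minimal sketch with illustrative numbers (not the study's data):

```python
def priming_effect(congruent_rts, incongruent_rts):
    """Affective priming effect: mean reaction time on incongruent trials
    minus mean reaction time on congruent trials (in ms). A smaller value
    indicates less pronounced automatic affective processing."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

# Illustrative trial RTs (ms): congruent pairs are answered faster.
congruent = [540, 560, 550]
incongruent = [590, 600, 610]
print(priming_effect(congruent, incongruent))  # 50.0
```

On this measure, the obese group's "smaller affective priming effect" simply means a smaller congruent/incongruent RT difference than in the normal-weight group.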


Author(s):  
Izabela Krejtz ◽  
Krzysztof Krejtz ◽  
Katarzyna Wisiecka ◽  
Marta Abramczyk ◽  
Michał Olszanowski ◽  
...  

Abstract The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
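Ambient–focal gaze dynamics of the kind analyzed here are commonly quantified with coefficient K (Krejtz et al., 2016): the standardized duration of each fixation minus the standardized amplitude of the saccade that follows it, with positive values indicating focal and negative values ambient viewing. A minimal sketch, assuming that measure (the abstract does not name the exact metric used):

```python
from statistics import mean, stdev

def coefficient_k(fix_durations, sacc_amplitudes):
    """Coefficient K per fixation: z-scored fixation duration minus
    z-scored amplitude of the following saccade.
    K > 0 suggests focal viewing (long fixations, short saccades);
    K < 0 suggests ambient viewing (short fixations, long saccades)."""
    mu_d, sd_d = mean(fix_durations), stdev(fix_durations)
    mu_a, sd_a = mean(sacc_amplitudes), stdev(sacc_amplitudes)
    return [(d - mu_d) / sd_d - (a - mu_a) / sd_a
            for d, a in zip(fix_durations, sacc_amplitudes)]

# Illustrative data: durations in ms, saccade amplitudes in degrees.
ks = coefficient_k([200, 400, 300], [5.0, 1.0, 3.0])
print(ks)  # [-2.0, 2.0, 0.0]: ambient, focal, neutral
```

Averaging K over a time window, as the sketch's per-fixation values suggest, gives the kind of "switching between ambient and focal attention" profile the abstract analyzes.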


2021 ◽  
Vol 39 (5) ◽  
pp. 591-607
Author(s):  
Adrian Jusepeitis ◽  
Klaus Rothermund

Krause et al. (2012) demonstrated that evaluative responses elicited by self-related primes in an affective priming task have incremental validity over explicit self-esteem in predicting self-serving biases in performance estimations and expectations in an anagram task. We conducted a conceptual replication of their experiment in which we added a behavioral and an affective outcome and presented names instead of faces as self-related primes. A heterogeneous sample (N = 96) was recruited for an online data collection. Name primes produced significantly positive and reliable priming effects, which correlated with explicit self-esteem. However, neither these priming effects nor explicit self-esteem predicted the cognitive, affective, or behavioral outcomes. Despite the lack of predictive validity of the implicit measure for affective and behavioral outcomes, the positive and reliable priming effects produced by name primes warrant further investigation of the validity of the affective priming paradigm as a measure of implicit self-esteem.


2015 ◽  
Vol 18 ◽  
Author(s):  
María Verónica Romero-Ferreiro ◽  
Luis Aguado ◽  
Javier Rodriguez-Torresano ◽  
Tomás Palomo ◽  
Roberto Rodriguez-Jimenez

Abstract Deficits in facial affect recognition have been repeatedly reported in schizophrenia patients. The hypothesis that this deficit is caused by a poorly differentiated cognitive representation of facial expressions was tested in this study. To this end, performance of patients with schizophrenia and controls was compared in a new emotion-rating task. This novel approach allowed the participants to rate each facial expression at different times in terms of different emotion labels. Results revealed that patients tended to give higher ratings to emotion labels that did not correspond to the portrayed emotion, especially in the case of negative facial expressions (p < .001, η2 = .131). Although patients and controls gave similar ratings when the emotion label matched the facial expression, patients gave higher ratings on trials with "incorrect" emotion labels (ps < .05). Comparison of patients and controls on a summary index of expressive ambiguity showed that patients perceived angry, fearful, and happy faces as more emotionally ambiguous than did the controls (p < .001, η2 = .135). These results are consistent with the idea that the cognitive representation of emotional expressions in schizophrenia is characterized by less clear boundaries and a less close correspondence between facial configurations and emotional states.


2013 ◽  
Vol 16 ◽  
Author(s):  
Luis Aguado ◽  
Francisco J. Román ◽  
Sonia Rodríguez ◽  
Teresa Diéguez-Risco ◽  
Verónica Romero-Ferreiro ◽  
...  

Abstract The possibility that facial expressions of emotion change the affective valence of faces through associative learning was explored using facial electromyography (EMG). In Experiment 1, EMG activity was recorded while the participants (N = 57) viewed sequences of neutral faces (Stimulus 1, or S1) changing to either a happy or an angry expression (Stimulus 2, or S2). As a consequence of learning, participants who showed patterning of facial responses in the presence of angry and happy faces, that is, higher Corrugator Supercilii (CS) activity in the presence of angry faces and higher Zygomaticus Major (ZM) activity in the presence of happy faces, also showed a similar pattern when viewing the corresponding S1 faces. Explicit evaluations made by an independent sample of participants (Experiment 2) showed that evaluation of S1 faces changed according to the emotional expression with which they had been associated. These results are consistent with an interpretation of rapid facial reactions to faces as affective responses that reflect the valence of the stimulus and are sensitive to learned changes in the affective meaning of faces.


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243929
Author(s):  
Siyu Jiang ◽  
Ming Peng ◽  
Xiaohui Wang

It has been widely accepted that moral violations that involve impurity (such as spitting in public) induce the emotion of disgust, but it has been debated whether moral violations that do not involve impurity (such as swearing in public) induce the same emotion. The answer to this question may have implications for understanding where morality comes from and how people make moral judgments. This study aimed to compare the neural mechanisms underlying the two kinds of moral violation, using an affective priming task to test the effect of sentences depicting moral violations with and without physical impurity on the subsequent detection of disgusted faces in a visual search task. After reading each sentence, participants completed the face search task. Behavioral and electrophysiological (event-related potential, or ERP) indices of affective priming (P2, N400, LPP) and attention allocation (N2pc) were analyzed. Behavioral and ERP data showed that moral violations both with and without impurity promoted the detection of disgusted faces (RT, N2pc), whereas moral violations without impurity impeded the detection of neutral faces (N400). No priming effect was found on P2 or LPP. The results suggest that both types of moral violation influenced the processing of disgusted and neutral faces, but with different temporal dynamics in the underlying neural activity.


2020 ◽  
Vol 48 (1) ◽  
Author(s):  
Barbara Müller ◽  
Cis Thijssen

Does the NIX18 campaign influence implicit and explicit cognitions in adults? Research has shown that the effectiveness of anti-alcohol mass media campaigns is often not experimentally tested, meaning that it is unclear whether such campaigns succeed in altering alcohol-related cognitions. Therefore, in the present study, we investigated whether the Dutch NIX18 campaign is successful in influencing implicit associations (measured with an affective priming task) and explicit cognitions (i.e., alcohol outcome expectancies) concerning alcohol. Additionally, a possible relationship with negative evaluations of the campaign and psychological reactance was investigated. Participants' implicit and explicit cognitions were measured before they were presented with either three NIX18 campaign movies or no movies (control condition). Subsequently, their implicit and explicit cognitions were measured again. Results show that watching the movies had no influence on implicit associations but did increase alcohol outcome expectancies. No effect on campaign evaluation or reactance was found. Possible theoretical and practical explanations are discussed.


2021 ◽  
Author(s):  
Jalil Rasgado-Toledo ◽  
Elizabeth Valles-Capetillo ◽  
Averi Giudicessi ◽  
Magda Giordano

Speakers use a variety of contextual information, such as emotional facial expressions, for the successful transmission of their message. Listeners must decipher the meaning by understanding the intention behind it (Recanati, 1986). A traditional approach to the study of communicative intention has been through speech acts (Escandell, 2006). The objective of the present study was to further the understanding of the influence of facial expression on the recognition of communicative intention. The study sought to: verify the reliability of facial expression recognition; determine whether there is an association between a facial expression and a category of speech acts; test whether words carry an intentional load independent of the facial expression presented; and test whether facial expressions can modify an utterance's communicative intention, examining the associated neural correlates with univariate and multivariate approaches. We found that prior observation of facial expressions associated with emotions can modify the interpretation of the assertive utterance that followed. The hemodynamic brain response to an assertive utterance was modulated by the preceding facial expression, and the emotion conveyed by that expression could be decoded from fluctuations in the hemodynamic response during presentation of the utterance. Neuroimaging data showed activation of regions involved in language, intentionality, and face recognition during reading of the utterance. Our results indicate that facial expression is a relevant contextual cue in decoding the intention of an utterance, and that this decoding engages different brain regions depending on the emotion expressed.

