Learning of Facial Responses to Faces Associated with Positive or Negative Emotional Expressions

2013 · Vol 16
Author(s): Luis Aguado, Francisco J. Román, Sonia Rodríguez, Teresa Diéguez-Risco, Verónica Romero-Ferreiro, ...

Abstract. The possibility that facial expressions of emotion change the affective valence of faces through associative learning was explored using facial electromyography (EMG). In Experiment 1, EMG activity was recorded while the participants (N = 57) viewed sequences of neutral faces (Stimulus 1, or S1) changing to either a happy or an angry expression (Stimulus 2, or S2). As a consequence of learning, participants who showed patterning of facial responses in the presence of angry and happy faces, that is, higher corrugator supercilii (CS) activity in the presence of angry faces and higher zygomaticus major (ZM) activity in the presence of happy faces, also showed a similar pattern when viewing the corresponding S1 faces. Explicit evaluations made by an independent sample of participants (Experiment 2) showed that the evaluation of S1 faces changed according to the emotional expression with which they had been associated. These results are consistent with an interpretation of rapid facial reactions to faces as affective responses that reflect the valence of the stimulus and are sensitive to learned changes in the affective meaning of faces.
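
As an illustration of the patterning criterion described above (and not the authors' actual analysis), the sketch below classifies a participant as showing expression-congruent patterning when corrugator activity is higher for angry than happy faces and zygomaticus activity shows the reverse. All numbers are simulated.

```python
# A minimal sketch, not the published analysis: hypothetical per-participant
# mean EMG amplitudes (arbitrary units) for angry vs. happy S2 faces.
import numpy as np

rng = np.random.default_rng(0)
n = 57  # sample size reported for Experiment 1
cs_angry, cs_happy = rng.normal(1.2, 0.3, n), rng.normal(1.0, 0.3, n)  # corrugator
zm_angry, zm_happy = rng.normal(0.9, 0.3, n), rng.normal(1.1, 0.3, n)  # zygomaticus

# Expression-congruent patterning: CS higher for angry AND ZM higher for happy.
patterning = (cs_angry > cs_happy) & (zm_happy > zm_angry)
print(f"{patterning.sum()} of {n} simulated participants show patterning")
```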

2018
Author(s): Louisa Kulke, Dennis Feyerabend, Annekathrin Schacht

Human faces express emotions, informing others about the expresser's affective state. To measure expressions of emotion, facial electromyography (EMG) has been widely used, requiring electrodes and technical equipment. More recently, emotion recognition software has been developed that detects emotions from video recordings of human faces. However, its validity and comparability to EMG measures are unclear. The aim of the current study was to compare the Affectiva Affdex emotion recognition software by iMotions with EMG measurements of the zygomaticus major and corrugator supercilii muscles, concerning its ability to identify happy, angry, and neutral faces. Twenty participants imitated these facial expressions while videos and EMG were recorded. Happy and angry expressions were detected above chance by both the software and EMG, whereas neutral expressions were more often falsely identified as negative by EMG than by the software. Overall, EMG and software values correlated highly. In conclusion, the Affectiva Affdex software can identify emotions, and its results are comparable to EMG findings.
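
The abstract reports that EMG and software values correlated highly. A minimal sketch of that kind of comparison is shown below; it is not the authors' pipeline, and the trial values and variable names are made up for illustration.

```python
# Illustrative only: correlating per-trial zygomaticus EMG amplitude with a
# software-derived joy score. All numbers below are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_trials = 60
joy_score = rng.uniform(0, 100, n_trials)                       # e.g. 0-100 "joy" evidence
zygomaticus = 0.02 * joy_score + rng.normal(0, 0.3, n_trials)   # rectified EMG, arbitrary units

r, p = pearsonr(zygomaticus, joy_score)
print(f"zygomaticus vs. software joy score: r = {r:.2f}, p = {p:.3f}")
```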


i-Perception · 2018 · Vol 9 (4) · pp. 204166951878652
Author(s): Leonor Philip, Jean-Claude Martin, Céline Clavel

Facial expressions of emotion provide relevant cues for understanding social interactions and the affective processes involved in emotion perception. Virtual human faces are useful for conducting controlled experiments. However, little is known regarding possible differences between the physiological responses elicited by virtual versus real human facial expressions. The aim of the current study was to determine whether virtual and real emotional faces elicit the same rapid facial reactions during the perception of facial expressions of joy, anger, and sadness. Facial electromyography (corrugator supercilii, zygomaticus major, and depressor anguli) was recorded in 30 participants during the presentation of dynamic or static, virtual or real faces. For the perception of dynamic facial expressions of joy and anger, analyses of the electromyography data revealed that rapid facial reactions were stronger when participants were presented with real faces than with virtual faces. These results suggest that the processes underlying the perception of virtual versus real emotional faces might differ.


2020
Author(s): Joshua W. Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic: the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al., 2009) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.
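
The abstract does not spell out how the BCE is computed. In dual-task (PRP) studies it is commonly quantified as the Task 1 response-time advantage on trials where the Task 2 stimulus corresponds to the Task 1 response, relative to non-corresponding trials; the sketch below illustrates that computation on hypothetical trial data, under that assumption.

```python
# Illustrative only, with simulated reaction times: a positive BCE means Task 1
# responses were faster on corresponding than non-corresponding trials.
import numpy as np

rng = np.random.default_rng(2)
rt1_corresponding = rng.normal(720, 80, 100)      # ms, made-up values
rt1_noncorresponding = rng.normal(750, 80, 100)   # ms, made-up values

bce = rt1_noncorresponding.mean() - rt1_corresponding.mean()
print(f"Backward correspondence effect: {bce:.0f} ms")
```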


2020 · Vol 10 (1)
Author(s): Chun-Ting Hsu, Wataru Sato, Sakiko Yoshikawa

Abstract. Facial expression is an integral aspect of the non-verbal communication of affective information. Earlier psychological studies have reported that the presentation of prerecorded photographs or videos of emotional facial expressions automatically elicits divergent responses, such as emotions and facial mimicry. However, such highly controlled experimental procedures may lack the vividness of real-life social interactions. This study incorporated a live image relay system that delivered models' real-time performance of positive (smiling) and negative (frowning) dynamic facial expressions, or prerecorded videos of them, to participants. We measured subjective ratings of valence and arousal as well as facial electromyography (EMG) activity in the zygomaticus major and corrugator supercilii muscles. The subjective ratings showed that, in the positive emotion conditions, live facial expressions were rated as eliciting higher valence and arousal than the corresponding videos. The facial EMG data showed that, compared with the videos, live facial expressions more effectively elicited facial muscular activity congruent with the models' positive facial expressions. These findings indicate that emotional facial expressions in live social interactions are more evocative of emotional reactions and facial mimicry than earlier experimental data have suggested.
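
One simple way to contrast the live and prerecorded conditions within participants, as a rough sketch rather than the study's actual statistical model, is a paired comparison of mean zygomaticus activity; the sample size and values below are arbitrary.

```python
# A minimal sketch with simulated data (not the study's dataset): paired
# comparison of zygomaticus activity for live vs. prerecorded smiles.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n = 40  # arbitrary sample size, for illustration only
zm_live = rng.normal(1.3, 0.4, n)    # mean EMG per participant, arbitrary units
zm_video = rng.normal(1.1, 0.4, n)

t, p = ttest_rel(zm_live, zm_video)
print(f"live vs. video zygomaticus: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```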


2002 · Vol 14 (2) · pp. 210-227
Author(s): S. Campanella, P. Quinet, R. Bruyer, M. Crommelinck, J.-M. Guerit

Behavioral studies have shown that two different morphed faces perceived as reflecting the same emotional expression are harder to discriminate than two faces perceived as reflecting different expressions. This advantage of between-category differences over within-category differences is classically referred to as the categorical perception effect. The temporal course of this effect for fearful and happy facial expressions was explored through event-related potentials (ERPs). Three kinds of pairs were presented in a delayed same-different matching task: (1) two different morphed faces perceived as the same emotional expression (within-category differences), (2) two morphed faces reflecting two different emotions (between-category differences), and (3) two identical morphed faces (same faces, included for methodological purposes). Following the onset of the second face in the pair, the amplitude of the bilateral occipito-temporal negativities (N170) and of the vertex positive potential (P150, or VPP) was reduced for within and same pairs relative to between pairs, suggesting a repetition priming effect. We also observed a modulation of the P3b wave, as the amplitude of the responses for between pairs was higher than for within and same pairs. These results indicate that the categorical perception of human facial emotional expressions has a perceptual origin in the bilateral occipito-temporal regions, while typical prior studies found emotion-modulated ERP components considerably later.


2014 · pp. 7-23
Author(s): Michela Balconi, Giovanni Lecci, Verdiana Trapletti

The present paper explored the relationship between emotional facial responses and electromyographic modulation in children observing facial expressions of emotion. Facial responsiveness (evaluated through arousal and valence ratings) and its psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotion (happiness, anger, fear, sadness, surprise, and disgust). For the EMG measure, corrugator and zygomaticus muscle activity was monitored in response to the different emotion types. ANOVAs showed differences in both EMG and facial responses across subjects as a function of the emotion displayed. Specifically, some emotions (such as happiness, anger, and fear) were well expressed by all subjects in terms of high arousal, whereas others (such as sadness) elicited lower arousal. Zygomaticus activity increased mainly for happiness, whereas corrugator activity increased mainly for anger, fear, and surprise. More generally, EMG and facial behavior were highly correlated with each other, showing a "mirror" effect with respect to the observed faces.
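
The abstract mentions ANOVAs on EMG across emotion categories. As a rough, hedged sketch (not the paper's analysis), a repeated-measures ANOVA on simulated per-child zygomaticus amplitudes could look like the following; the sample size, data, and column names are invented for illustration.

```python
# Illustrative only: repeated-measures ANOVA on simulated zygomaticus
# amplitudes across six emotion categories, one value per child per emotion.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
emotions = ["happiness", "anger", "fear", "sadness", "surprise", "disgust"]
n = 20  # arbitrary number of children, for illustration only
rows = [
    {"participant": p, "emotion": e,
     "zygomaticus": rng.normal(1.3 if e == "happiness" else 1.0, 0.3)}
    for p in range(n) for e in emotions
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="zygomaticus", subject="participant",
              within=["emotion"]).fit()
print(res)
```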


2003 · Vol 14 (4) · pp. 373-376
Author(s): Abigail A. Marsh, Hillary Anger Elfenbein, Nalini Ambady

We report evidence for nonverbal “accents,” subtle differences in the appearance of facial expressions of emotion across cultures. Participants viewed photographs of Japanese nationals and Japanese Americans in which posers' muscle movements were standardized to eliminate differences in expressions, cultural or otherwise. Participants guessed the nationality of posers displaying emotional expressions at above-chance levels, and with greater accuracy than they judged the nationality of the same posers displaying neutral expressions. These findings indicate that facial expressions of emotion can contain nonverbal accents that identify the expresser's nationality or culture. Cultural differences are intensified during the act of expressing emotion, rather than residing only in facial features or other static elements of appearance. This evidence suggests that extreme positions regarding the universality of emotional expressions are incomplete.


2021
Author(s): Evrim Gulbetekin

Abstract. This investigation used three experiments to test the effects of mask use and the other-race effect (ORE) on face perception in three contexts: (a) face recognition, (b) recognition of facial expressions, and (c) social distance. The first experiment, which involved a matching-to-sample paradigm, tested Caucasian participants with masked or unmasked faces using Caucasian and Asian face samples. Participants performed best when recognizing unmasked faces and worst when asked to recognize a masked face that they had previously seen without a mask. Accuracy was also poorer for Asian faces than for Caucasian faces. The second experiment presented Asian or Caucasian faces displaying different emotional expressions, with and without masks. The results for this task, which required identifying the emotional expression shown on the presented face, indicated that emotion recognition performance decreased for faces portrayed with masks. The emotional expressions ranged from the most accurately to the least accurately recognized as follows: happy, neutral, disgusted, and fearful. Emotion recognition performance was also poorer for Asian than for Caucasian stimuli. Experiment 3 used the same participants and stimuli and asked participants to indicate the social distance they would prefer to keep from each pictured person. Participants preferred a wider social distance for unmasked than for masked faces. Social distance also varied with the portrayed emotion, ranging from farthest to closest as follows: disgusted, fearful, neutral, and happy. Race was also a factor: participants preferred a wider social distance for Asian than for Caucasian faces. Altogether, our findings indicate that, during the COVID-19 pandemic, face perception and social distance were affected by both mask use and the ORE.


Author(s): Diana Kayser, Hauke Egermann, Nick E. Barraclough

Abstract. An abundance of studies on emotional experiences in response to music has been published over the past decades; however, most were carried out in controlled laboratory settings and relied on subjective reports. Facial expressions have occasionally been assessed, but typically with intrusive methods such as facial electromyography (fEMG). The present study investigated the emotional experiences of fifty participants at a live concert. Our aims were to explore whether automated face analysis could detect facial expressions of emotion in a group of people in an ecologically valid listening context, to determine whether the emotions expressed by the music predicted specific facial expressions, and to examine whether facial expressions of emotion could be used to predict subjective ratings of pleasantness and activation. During the concert, participants were filmed, and their facial expressions were subsequently analyzed with automated face analysis software. Self-reports of participants' subjective experience of pleasantness and activation were collected after the concert for all pieces (two happy, two sad). Our results show that the pieces that expressed sadness resulted in more facial expressions of sadness (compared with happiness), whereas the pieces that expressed happiness resulted in more facial expressions of happiness (compared with sadness). Differences for the other facial expression categories (anger, fear, surprise, disgust, and neutral) were not found. Independent of the musical piece or the emotion expressed in the music, facial expressions of happiness predicted ratings of subjectively felt pleasantness, whilst facial expressions of sadness and disgust predicted low and high ratings of subjectively felt activation, respectively. Together, our results show that non-invasive measurements of audience facial expressions in a naturalistic concert setting are indicative of the emotions expressed by the music and of the subjective experiences of the audience members themselves.
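
The prediction of felt pleasantness from detected happiness described above can be illustrated with a simple regression sketch; this is not the authors' model, and the fifty simulated observations and variable names below are invented for illustration.

```python
# Illustrative only: regressing self-reported pleasantness on the happiness
# probability returned by automated face analysis, using simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 50  # the study filmed fifty participants; the values here are made up
happiness_expr = rng.uniform(0, 1, n)                          # face-analysis output
pleasantness = 2 + 3 * happiness_expr + rng.normal(0, 0.8, n)  # self-report scale

model = sm.OLS(pleasantness, sm.add_constant(happiness_expr)).fit()
print(model.summary())
```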


2018 · Vol 32 (4) · pp. 160-171
Author(s): Léonor Philip, Jean-Claude Martin, Céline Clavel

Abstract. People react with rapid facial reactions (RFRs) when presented with human emotional facial expressions. Recent studies show that RFRs are not always congruent with the emotional cues. The processes underlying RFRs are still being debated. In the study described herein, we manipulated the context of perception and examined its influence on RFRs, using a subliminal affective priming task with emotional labels. Facial electromyography (EMG) (frontalis, corrugator, zygomaticus, and depressor) was recorded while participants observed static facial expressions (joy, fear, anger, sadness, and a neutral expression) preceded or not preceded by a subliminal word (JOY, FEAR, ANGER, SADNESS, or NEUTRAL). For the negative facial expressions, when the priming word was congruent with the facial expression, participants displayed congruent RFRs (mimicry); when the priming word was incongruent, mimicry was suppressed. Happiness was not affected by the priming word. RFRs thus appear to be modulated by the context and by the type of emotion presented via facial expressions.
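
A rough way to express the mimicry-suppression pattern reported above, purely as a hedged sketch and not the study's analysis, is a within-participant contrast of corrugator responses to angry faces after congruent versus incongruent primes; the sample size and values are arbitrary.

```python
# Illustrative only, with made-up numbers: corrugator change scores to angry
# faces preceded by a congruent vs. an incongruent subliminal prime.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(6)
n = 30  # arbitrary sample size, for illustration only
corrugator_congruent = rng.normal(0.25, 0.10, n)    # prime "ANGER" -> angry face
corrugator_incongruent = rng.normal(0.05, 0.10, n)  # e.g. prime "JOY" -> angry face

t, p = ttest_rel(corrugator_congruent, corrugator_incongruent)
print(f"congruent vs. incongruent primes: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```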

