The lasting smell of emotions: The effects of reutilizing fear sweat samples

2020 ◽  
Vol 52 (6) ◽  
pp. 2438-2451 ◽  
Author(s):  
Nuno Gomes ◽  
Fábio Silva ◽  
Gün R. Semin

Abstract: A growing body of research has shown that human apocrine sweat carries information about the emotional state of its donor. Exposure to sweat produced in a fear-inducing context triggers in its receivers a simulacrum of this emotional state, as evidenced by increased medial frontalis and corrugator supercilii (facial electromyography; fEMG) activity – two facial muscles involved in the display of fear facial expressions. However, despite the increased interest in the effects of emotional sweat, little is known about the properties of these chemical sweat samples. The goal of this study was to examine whether a second application of the same sweat sample would yield reliable results. Specifically, we assessed whether sweat samples collected from Portuguese males (N = 8) in fear (vs. neutral)-inducing contexts would produce similar fEMG activations (i.e., in the medial frontalis and corrugator supercilii) in female receivers (N = 60) across two independent applications (the first with Dutch and the second with Portuguese receivers). Our findings showed that exposure to fear sweat resulted in higher activation of both muscles than exposure to neutral sweat, revealing a similar data pattern across the two applications and underlining the feasibility of reusing emotional sweat samples. The implications of these findings for the properties of these sweat volatiles are discussed.
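
The core analysis implied above is a within-subject contrast: the same receivers' fEMG amplitudes under fear versus neutral sweat. The sketch below illustrates that comparison as a paired t-test on simulated values; the numbers, units, and data layout are hypothetical, not the study's data.

```python
# A minimal sketch, assuming per-receiver mean rectified fEMG amplitudes
# have already been extracted per odor condition (values are simulated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fear = rng.normal(loc=3.2, scale=0.8, size=60)      # N = 60 receivers, a.u.
neutral = rng.normal(loc=2.8, scale=0.8, size=60)

t, p = stats.ttest_rel(fear, neutral)               # paired comparison
print(f"fear vs. neutral corrugator activity: t = {t:.2f}, p = {p:.4f}")
```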

2018 ◽  
Author(s):  
Louisa Kulke ◽  
Dennis Feyerabend ◽  
Annekathrin Schacht

Human faces express emotions, informing others about their affective states. To measure expressions of emotion, facial electromyography (EMG) has been widely used, requiring electrodes and technical equipment. More recently, emotion recognition software has been developed that detects emotions from video recordings of human faces. However, its validity and comparability to EMG measures are unclear. The aim of the current study was to compare the Affectiva Affdex emotion recognition software by iMotions with EMG measurements of the zygomaticus major and corrugator supercilii muscles, concerning its ability to identify happy, angry and neutral faces. Twenty participants imitated these facial expressions while videos and EMG were recorded. Happy and angry expressions were detected above chance by both the software and EMG, while neutral expressions were more often falsely identified as negative by EMG than by the software. Overall, EMG and software values correlated highly. In conclusion, Affectiva Affdex software can identify emotions, and its results are comparable to EMG findings.
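
The comparability claim above rests on correlating the two measurement channels. Below is a minimal sketch of that step with invented per-trial values; the arrays are placeholders, not iMotions Affdex output.

```python
# Hypothetical per-trial measures for happy expressions: a software joy
# score (0-1) and the matching zygomaticus major EMG amplitude (a.u.).
import numpy as np
from scipy import stats

affdex_joy = np.array([0.91, 0.85, 0.40, 0.77, 0.95, 0.62])
zygomaticus = np.array([4.1, 3.8, 1.9, 3.2, 4.5, 2.7])

r, p = stats.pearsonr(affdex_joy, zygomaticus)
print(f"software-EMG correlation: r = {r:.2f}, p = {p:.3f}")
```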


2021 ◽  
Author(s):  
Yael Hanein

Facial expressions play a major role in human communication and provide a window onto an individual's emotional state. While facial expressions can be consciously manipulated to conceal true emotions, very brief leaked expressions may occur, exposing one's true internal state. Leaked expressions are therefore considered an important hallmark in deception detection, a field with enormous social and economic impact. Challengingly, capturing these subtle and brief expressions has so far been limited to visual examination (manual or machine-based), with almost no electromyographic evidence. In this investigation we set out to explore whether electromyography of leaked expressions can be faithfully recorded with specially designed wearable electrodes. Indeed, using facial electromyography based on a soft multi-electrode array, we were able to record localized and brief signals in individuals instructed to suppress smiles. The electromyographic evidence was validated with high-speed video recordings. The recording approach reported here provides a new and sensitive tool for investigating leaked expressions and a basis for improved automated systems.
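
Detecting leaked expressions in EMG comes down to finding brief supra-threshold bursts in an amplitude envelope. The sketch below shows one generic rectify-smooth-threshold approach; the sampling rate, threshold factor, and duration cutoff are illustrative assumptions, not the authors' parameters.

```python
# A minimal burst detector: rectify, smooth to an envelope, threshold,
# and keep only brief supra-threshold runs (candidate leaked expressions).
import numpy as np

def detect_bursts(emg, fs=1000, k=3.0, max_ms=500):
    """Return (start, end) sample indices of brief supra-threshold bursts."""
    rectified = np.abs(emg - emg.mean())
    win = max(int(0.05 * fs), 1)                    # 50-ms moving average
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    threshold = envelope.mean() + k * envelope.std()
    above = np.concatenate(([0], (envelope > threshold).astype(int), [0]))
    starts = np.flatnonzero(np.diff(above) == 1)
    ends = np.flatnonzero(np.diff(above) == -1)
    return [(s, e) for s, e in zip(starts, ends)
            if (e - s) * 1000 / fs <= max_ms]       # keep only brief runs

print(detect_bursts(np.random.randn(5000)))         # noise: usually few/none
```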


2013 ◽  
Vol 16 ◽  
Author(s):  
Luis Aguado ◽  
Francisco J. Román ◽  
Sonia Rodríguez ◽  
Teresa Diéguez-Risco ◽  
Verónica Romero-Ferreiro ◽  
...  

Abstract: The possibility that facial expressions of emotion change the affective valence of faces through associative learning was explored using facial electromyography (EMG). In Experiment 1, EMG activity was registered while the participants (N = 57) viewed sequences of neutral faces (Stimulus 1, or S1) changing to either a happy or an angry expression (Stimulus 2, or S2). As a consequence of learning, participants who showed patterning of facial responses in the presence of angry and happy faces (that is, higher corrugator supercilii, CS, activity in the presence of angry faces and higher zygomaticus major, ZM, activity in the presence of happy faces) also showed a similar pattern when viewing the corresponding S1 faces. Explicit evaluations made by an independent sample of participants (Experiment 2) showed that the evaluation of S1 faces changed according to the emotional expression with which they had been associated. These results are consistent with an interpretation of rapid facial reactions to faces as affective responses that reflect the valence of the stimulus and are sensitive to learned changes in the affective meaning of faces.


i-Perception ◽  
2018 ◽  
Vol 9 (4) ◽  
pp. 204166951878652 ◽  
Author(s):  
Leonor Philip ◽  
Jean-Claude Martin ◽  
Céline Clavel

Facial expressions of emotion provide relevant cues for understanding social interactions and the affective processes involved in emotion perception. Virtual human faces are useful for conducting controlled experiments. However, little is known regarding possible differences between the physiological responses elicited by virtual versus real human facial expressions. The aim of the current study was to determine whether virtual and real emotional faces elicit the same rapid facial reactions during the perception of facial expressions of joy, anger, and sadness. Facial electromyography (corrugator supercilii, zygomaticus major, and depressor anguli oris) was recorded in 30 participants during the presentation of dynamic or static and virtual or real faces. For the perception of dynamic facial expressions of joy and anger, analyses of the electromyography data revealed that rapid facial reactions were stronger when participants were presented with real faces than with virtual faces. These results suggest that the processes underlying the perception of virtual versus real emotional faces might differ.


Psych ◽  
2021 ◽  
Vol 3 (2) ◽  
pp. 48-60 ◽
Author(s):  
Peter Walla ◽  
Aimee Mavratzakis

Recognising our own and others’ emotions is vital for healthy social development. The aim of the current study was to determine how emotions related to the self or to another influence behavioural expressions of emotion. Facial electromyography (EMG) was used to record spontaneous facial muscle activity in nineteen participants while they passively viewed negative, positive and neutral emotional pictures during three blocks of referential instructions. Each participant imagined themselves, another person or no one experiencing the emotional scenario, with the priming words “You”, “Him” or “None” presented before each picture for the respective block of instructions. Emotion awareness (EA) was also assessed using the TAS-20 alexithymia questionnaire. Corrugator supercilii (cs) muscle activity increased significantly between 500 and 1000 ms post stimulus onset during negative and neutral picture presentations, regardless of ownership. Independent of emotion, cs activity was greatest during the “no one” task and lowest during the “self” task from under 250 ms to 1000 ms. Interestingly, the degree of cs activation during the referential tasks was further modulated by EA. Low EA corresponded to significantly stronger cs activity overall compared with high EA, and this effect was even more pronounced during the “no one” task. The findings suggest that cognitive processes related to the perception of emotion ownership can influence spontaneous facial muscle activity, but that a greater degree of integration between higher cognitive and lower affective levels of information may interrupt or suppress these behavioural expressions of emotion.
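
Window-based effects like the 500-1000 ms corrugator increase reported above are typically quantified by averaging rectified activity in a post-stimulus window against a pre-stimulus baseline. The sketch below illustrates this on simulated epochs; the sampling rate, epoch shape, and values are assumptions.

```python
# A minimal sketch of baseline-corrected, time-windowed fEMG averaging
# over simulated rectified corrugator (cs) epochs.
import numpy as np

fs = 1000                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
epochs = np.abs(rng.normal(size=(40, 2000)))  # 40 trials, -1000..+1000 ms
onset = 1000                                  # sample index of picture onset

baseline = epochs[:, :onset].mean(axis=1)
window = epochs[:, onset + 500 : onset + 1000].mean(axis=1)  # 500-1000 ms
change = window - baseline                    # baseline-corrected activity
print(f"mean cs change, 500-1000 ms: {change.mean():.3f} (a.u.)")
```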


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruently) or responding opposite (incongruently) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
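
One standard form of the reinforcement learning models mentioned above is a Rescorla-Wagner-style prediction-error update. The sketch below shows that update applied to the avoidance task described; it is an illustrative stand-in, not the authors' fitted model, and the parameters and outcome coding are assumptions.

```python
# A minimal Rescorla-Wagner value update: the expected value moves toward
# the observed outcome in proportion to the prediction error.
def rw_update(value: float, reward: float, alpha: float = 0.3) -> float:
    return value + alpha * (reward - value)

# Hypothetical trials: value of reciprocating a smiling face, with
# reward = 1 when aversive stimulation is avoided and 0 otherwise.
v = 0.5
for outcome in [1, 1, 0, 1, 1, 1]:
    v = rw_update(v, outcome)
print(f"learned value of reciprocating: {v:.2f}")
```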


2019 ◽  
Author(s):  
Andy Arnold

As social beings, humans harbor an evolved capacity for loneliness—perceived social isolation. Feelings of loneliness are associated with aberrant affective and social processing, as well as deleterious physiological dysregulation. We investigated how loneliness affects spontaneous facial mimicry (SFM), an interpersonal resonance mechanism involved in social connection and emotional contagion. We used facial electromyography (fEMG) to measure activity of the zygomaticus major (“smiling muscle”) and corrugator supercilii (“frowning muscle”) while participants viewed emotional stimuli, such as video clips of actors expressing anger, fear, sadness, or joy, and emotional IAPS images. We also measured self-reported loneliness, depression, and extraversion levels. Evidence for SFM was found in greater fEMG activity of the zygomaticus and corrugator to positive and negative emotions, respectively. However, individuals reporting higher levels of loneliness lacked SFM for expressions of joy. Loneliness did not affect deliberate mimicry activity to the same expressions, or spontaneous reactions to positive, negative, or neutral IAPS images. Depression and extraversion did not predict any differences in fEMG responses. We argue that impaired automaticity of “smiling back” at another—a faulty interpersonal resonance response—represents a pervasive behavioral mechanism that likely contributes to negative social and emotional consequences of loneliness and may facilitate loneliness contagion.


Author(s):  
Kamal Naina Soni

Abstract: Human facial expressions play an important role in extracting an individual's emotional state. They help determine a person's current state and mood from various features of the face, such as the eyes, cheeks, forehead, or even the curve of the smile. A survey confirmed that people use music as a form of expression and often relate a particular piece of music to their emotions. Considering how music engages the human brain and body, our project extracts the user's facial expressions and features to determine their current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help lift or calm the user's mood, and it retrieves suitable songs more quickly, saving the time otherwise spent browsing for them; the resulting software can be used anywhere, playing music according to the detected emotion.
Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
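
The final stage described above, routing a detected emotion to music, reduces to a mapping from emotion labels to playlists. The sketch below shows that stage only; the labels, playlist names, and the upstream classifier are hypothetical, and the detection itself would come from a separate computer-vision model.

```python
# A minimal emotion-to-playlist lookup; the emotion label is assumed to
# come from an upstream facial-expression classifier (not shown).
from typing import List

PLAYLISTS = {
    "happy": ["upbeat_pop.m3u"],
    "sad": ["soft_acoustic.m3u"],
    "angry": ["calming_ambient.m3u"],
    "neutral": ["daily_mix.m3u"],
}

def recommend(emotion: str) -> List[str]:
    """Return playlists for the detected emotion, defaulting to neutral."""
    return PLAYLISTS.get(emotion, PLAYLISTS["neutral"])

print(recommend("happy"))                    # ['upbeat_pop.m3u']
```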


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Chun-Ting Hsu ◽  
Wataru Sato ◽  
Sakiko Yoshikawa

Abstract: Facial expression is an integral aspect of non-verbal communication of affective information. Earlier psychological studies have reported that the presentation of prerecorded photographs or videos of emotional facial expressions automatically elicits divergent responses, such as emotions and facial mimicry. However, such highly controlled experimental procedures may lack the vividness of real-life social interactions. This study incorporated a live image relay system that delivered models’ real-time performance of positive (smiling) and negative (frowning) dynamic facial expressions, or their prerecorded videos, to participants. We measured subjective ratings of valence and arousal and facial electromyography (EMG) activity in the zygomaticus major and corrugator supercilii muscles. Subjective ratings showed that live facial expressions elicited higher valence and arousal than the corresponding videos in the positive emotion conditions. Facial EMG data showed that, compared with the videos, live facial expressions more effectively elicited facial muscular activity congruent with the models’ positive facial expressions. The findings indicate that emotional facial expressions in live social interactions are more evocative of emotional reactions and facial mimicry than earlier experimental data have suggested.

