Emotion-specific effects of facial expressions and postures on emotional experience.

1989 ◽  
Vol 57 (1) ◽  
pp. 100-108 ◽  
Author(s):  
Sandra E. Duclos ◽  
James D. Laird ◽  
Eric Schneider ◽  
Melissa Sexter ◽  
et al.

2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify participants’ facial expressions as happy, sad, angry, surprised, scared, disgusted, or neutral. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” Conversely, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city,” and higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high relative to happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
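
As an illustration of the aggregation such an analysis implies, the sketch below turns per-frame expression labels from facial-coding software into mean expression proportions per cue condition. The file name and column layout are assumptions for illustration, not the study's materials.

```python
# Minimal sketch: aggregate hypothetical per-frame emotion labels into
# per-condition expression proportions. Assumed CSV columns: participant,
# cue ("happy"/"sad"/"city"), frame, expression ("happy", ..., "neutral").
import pandas as pd

frames = pd.read_csv("facial_coding_output.csv")  # hypothetical file

# Proportion of frames showing each expression, per participant and cue,
# then averaged over participants within each cue condition.
per_participant = (frames.groupby(["cue", "participant"])["expression"]
                          .value_counts(normalize=True)
                          .rename("proportion")
                          .reset_index())
condition_means = (per_participant
                   .groupby(["cue", "expression"])["proportion"]
                   .mean()
                   .unstack(fill_value=0))
print(condition_means.round(2))
```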


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Bo Yu ◽  
Lin Ma ◽  
Haifeng Li ◽  
Lun Zhao ◽  
Hongjian Bo ◽  
...  

Estimation of human emotions from electroencephalogram (EEG) signals plays a vital role in affective brain-computer interfaces (BCIs). The present study investigated the event-related synchronization (ERS) and event-related desynchronization (ERD) of typical brain oscillations during the processing of facial expressions under nonattentional conditions. The results show that the lower-frequency bands are mainly involved in updating facial expression representations and distinguishing deviant stimuli from standard ones, whereas the higher-frequency bands are relevant to the automatic processing of different facial expressions. Accordingly, we related each brain oscillation to the processing of unattended facial expressions via ERD and ERS measures. This research is the first to reveal the contribution of each frequency band to the comprehension of facial expressions in the preattentive stage. It also provides evidence that participants have emotional experiences under nonattentional conditions. Therefore, a user’s emotional state under nonattentional conditions can be recognized in real time from the ERD/ERS indexes computed over the different frequency bands of brain oscillations, which affective BCIs can exploit to provide more natural and friendly interaction.
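
For context, ERD/ERS indexes of this kind are conventionally computed as the percentage change of band power in a post-stimulus window relative to a pre-stimulus reference window (negative values indicate desynchronization, positive values synchronization). A minimal single-channel sketch of that computation follows; the sampling rate, frequency band, and window boundaries are illustrative assumptions, not the study's values.

```python
# Minimal ERD/ERS sketch: percentage band-power change relative to a
# pre-stimulus baseline. All parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (assumed)

def erd_ers_percent(epoch, band, baseline, window, fs=FS):
    """ERD/ERS% = (A - R) / R * 100, with R the mean band power in the
    reference (baseline) slice and A the mean band power in the
    post-stimulus analysis slice. Negative = ERD, positive = ERS."""
    low, high = band
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    power = filtfilt(b, a, epoch) ** 2  # instantaneous band power
    r = power[baseline].mean()
    return (power[window].mean() - r) / r * 100

# Example: theta-band (4-8 Hz) change for one simulated 2-second epoch,
# with stimulus onset at sample 250 (i.e., 1 s into the epoch).
epoch = np.random.randn(500)  # placeholder for one EEG epoch
theta = erd_ers_percent(epoch, band=(4, 8),
                        baseline=slice(0, 250),   # 1 s pre-stimulus
                        window=slice(250, 500))   # 1 s post-stimulus
print(f"theta ERD/ERS: {theta:+.1f}%")
```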


2010 ◽  
Vol 33 (6) ◽  
pp. 437-438 ◽  
Author(s):  
Pablo Briñol ◽  
Kenneth G. DeMarree ◽  
K. Rachelle Smith

The embodied simulation of smiles involves motor activity that often changes perceivers’ own emotional experience (e.g., smiling can make us feel happy). Although Niedenthal et al. mention this possibility, the psychological processes by which embodiment changes emotions, and the consequences of those processes for processing other emotions, are not discussed in the target article’s review. We argue that understanding the processes initiated by embodiment is important for a complete understanding of the effects of embodiment on emotion perception.


This project presents a Linux-based system that automatically detects emotional dichotomy and mixed emotional experience. Facial expressions, head movements, and facial gestures are captured from pictorial input to derive attributes such as the distance, coordinates, and movement of tracked points. A web camera is used to extract spectral attributes. Features are computed using the Fisherface algorithm, emotion is detected by a cascade classifier, and feature-level fusion is used to create a combined feature vector. Live user actions are recorded to capture emotions, and based on the computed result the system plays songs and displays a list of books.
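
As a sketch of the kind of pipeline described above (webcam capture, cascade-classifier face detection, Fisherface-based classification), the following uses OpenCV with its contrib module. The model file, label set, and display logic are illustrative assumptions, not details of this project.

```python
# Minimal sketch: webcam -> Haar cascade face detection -> Fisherface
# classification. Requires opencv-contrib-python; the trained model file
# and emotion labels below are hypothetical.
import cv2

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.read("fisherface_emotions.yml")  # hypothetical pre-trained model

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Fisherfaces requires a fixed input size; crop and resize the face.
        face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, confidence = recognizer.predict(face)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, EMOTIONS[label], (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```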


2021 ◽  
Author(s):  
Arianne Constance Herrera-Bennett ◽  
Shermain Puah ◽  
Lisa Hasenbein ◽  
Dirk Wildgruber

The current study investigated whether automatic integration of crossmodal stimuli (i.e., facial emotions and emotional prosody) facilitated or impaired the intake and retention of unattended verbal content. The study borrowed from previous bimodal integration designs and included a two-alternative forced-choice (2AFC) task, in which subjects were instructed to identify the emotion of a face (as either ‘angry’ or ‘happy’) while ignoring a concurrently presented sentence (spoken in an angry, happy, or neutral prosody), after which a surprise recall test was administered to investigate effects on semantic content retention. While bimodal integration effects were replicated (i.e., faster and more accurate emotion identification under congruent conditions), congruency effects were not found for semantic recall. Overall, semantic recall was better for trials with emotional (vs. neutral) faces, and worse in trials with happy (vs. angry or neutral) prosody. Taken together, our findings suggest that when individuals focus their attention on evaluating facial expressions, they implicitly integrate nonverbal emotional vocal cues (i.e., the hedonic valence or emotional tone of accompanying sentences) and devote less attention to their semantic content. While the impairing effect of happy prosody on recall may indicate an emotional interference effect, more research is required to uncover potential prosody-specific effects. All supplemental online materials can be found on OSF (https://osf.io/am9p2/).
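
For illustration, a congruency effect of this kind can be summarized from trial-level 2AFC data as below; the CSV file and column names are hypothetical.

```python
# Minimal sketch: mean accuracy and reaction time by face-prosody
# congruency. Assumed columns: face_emotion, prosody, correct, rt_ms.
import pandas as pd

trials = pd.read_csv("trials.csv")  # one row per 2AFC trial (assumed layout)

# Label each trial: prosody matching the face emotion is "congruent",
# neutral prosody is its own baseline, anything else is "incongruent".
def congruency(row):
    if row["prosody"] == "neutral":
        return "neutral"
    return "congruent" if row["prosody"] == row["face_emotion"] else "incongruent"

trials["congruency"] = trials.apply(congruency, axis=1)

# Bimodal integration predicts faster, more accurate responses when the
# ignored prosody matches the attended facial emotion.
summary = (trials.groupby("congruency")[["correct", "rt_ms"]]
                 .mean()
                 .rename(columns={"correct": "accuracy", "rt_ms": "mean RT (ms)"}))
print(summary)
```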


2021 ◽  
Author(s):  
Jennifer Quinde Zlibut ◽  
Anabil Munshi ◽  
Gautam Biswas ◽  
Carissa Cascio

Background: It is unclear whether atypical patterns of facial expression production metrics in autism reflect the dynamic and nuanced nature of facial expressions or a true diagnostic difference. Further, the heterogeneity observed across autism symptomatology suggests a need for more adaptive and personalized social skills programs. For example, it would be useful to have a better understanding of the different expressiveness profiles within the autistic population and how they differ from neurotypicals, to help develop systems that train facial expression production and reception.

Methods: We used automated facial coding and an unsupervised clustering approach to limit inter-individual variability in facial expression production that may have otherwise obscured group differences in previous studies, allowing an "apples-to-apples" comparison between autistic and neurotypical adults. Specifically, we applied k-means clustering to identify subtypes of facial expressiveness in an autism group (N=27) and a neurotypical control group (N=57) separately. The two most stable clusters from these analyses were then further characterized and compared on the basis of their expressiveness and emotive congruence to emotionally charged stimuli.

Results: Our main finding was that autistic adults show heightened spontaneous facial expressions in response to negative emotional images. The group effect did not extend to positive emotional images, and we did not find evidence for greater incongruous (i.e., inappropriate) facial expressions in autism.

Conclusion: These findings build on previous work suggesting valence-specific effects of autism on emotional empathy and suggest the need for intervention programs to focus on social skills in the context of both negative and positive emotions.
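
A minimal sketch of the clustering step follows, assuming each row of a feature matrix summarizes one participant's automated facial-coding output; the silhouette-based choice of k is an illustrative stand-in for the paper's cluster-stability analysis.

```python
# Minimal sketch: fit k-means separately per diagnostic group and pick the
# best-scoring solution by silhouette. Feature data here is random
# placeholder input; real features would be facial-coding summaries.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def cluster_group(features, k_candidates=(2, 3, 4, 5), seed=0):
    """Standardize features, fit k-means for each candidate k, and return
    the (silhouette, k, labels) of the best-scoring solution."""
    X = StandardScaler().fit_transform(features)
    best = None
    for k in k_candidates:
        km = KMeans(n_clusters=k, n_init=50, random_state=seed).fit(X)
        score = silhouette_score(X, km.labels_)
        if best is None or score > best[0]:
            best = (score, k, km.labels_)
    return best

# Example with placeholder data matching the reported group sizes.
rng = np.random.default_rng(0)
asd_features = rng.normal(size=(27, 6))  # autism group (N=27)
nt_features = rng.normal(size=(57, 6))   # neurotypical group (N=57)
for name, feats in [("ASD", asd_features), ("NT", nt_features)]:
    score, k, labels = cluster_group(feats)
    print(f"{name}: k={k}, silhouette={score:.2f}")
```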


Author(s):  
Linda A. Camras ◽  
Vanessa L. Castro ◽  
Amy G. Halberstadt ◽  
Michael M. Shuster

This chapter explores the question of whether infants and children produce prototypic emotional facial expressions in emotion-eliciting situations. Investigations of both infants and children are described. These include a natural observation study of a single infant during routine caregiving activities, a systematic experiment in which infants were presented with elicitors of fear and anger, a seminaturalistic experiment during which mothers and children discuss a topic of disagreement, and a study of children’s responses to a fear stimulus presented in the context of an Internet prank. Together these studies show that prototypic expressions are sometimes produced when it is unlikely that the corresponding emotion is experienced and often are not produced when the corresponding emotional experience seems likely. Overall findings suggest that the relationship between emotion and facial expression is more complex than portrayed within contemporary discrete emotion theories.


2002 ◽  
Vol 8 (1) ◽  
pp. 130-135 ◽  
Author(s):  
Jocelyn M. Keillor ◽  
Anna M. Barrett ◽  
Gregory P. Crucian ◽  
Sarah Kortenkamp ◽  
Kenneth M. Heilman

The facial feedback hypothesis suggests that facial expressions are either necessary or sufficient to produce emotional experience. Researchers have noted that the ideal test of the necessity aspect of this hypothesis would be an evaluation of emotional experience in a patient suffering from a bilateral facial paralysis; however, this condition is rare and no such report has been documented. We examined the role of facial expressions in the determination of emotion by studying a patient (F.P.) suffering from a bilateral facial paralysis. Despite her inability to convey emotions through facial expressions, F.P. reported normal emotional experience. When F.P. viewed emotionally evocative slides, her reactions were not dampened relative to the normative sample. F.P. retained her ability to detect, discriminate, and image emotional expressions. These findings are not consistent with theories stating that feedback from an active face is necessary to experience emotion or to process emotional facial expressions. (JINS, 2002, 8, 130–135.)


2021 ◽  
Vol 9 ◽  
Author(s):  
Annika Krause ◽  
Christian Nawroth

Emotions are an essential part of how we experience our world. Humans can express emotions by telling others how we feel—but what about animals? How can we tell whether they experience emotions and, if they do, which ones? When we think about the animals under human care, it is not only scientifically interesting but also ethically important to understand how these animals experience their worlds. Over the last 20 years, researchers have made considerable progress by identifying ways to assess emotions in animals. For example, researchers can look at the facial expressions of animals, record their vocalisations, or measure body processes such as changes in the heartbeat or hormone concentrations in the blood. This information can tell us more about how animals feel, why and how emotions have evolved, and what we, as humans, share with animals in our emotional experience of the world around us.


2020 ◽  
Author(s):  
Nicholas Alvaro Coles ◽  
Lowell Gaertner ◽  
Brooke Frohlich ◽  
Jeff T. Larsen ◽  
Dana Basnight-Brown

The facial feedback hypothesis suggests that an individual’s facial expressions can influence their emotional experience (e.g., that smiling can make one feel happier). However, a recurring concern is that demand characteristics drive this effect. Across three experiments (n = 250, 192, 131), university students in the United States and Kenya posed happy, angry, and neutral expressions and self-reported their emotions following a demand characteristics manipulation. To manipulate demand characteristics, we either (a) told participants we hypothesized their poses would influence their emotions, (b) told participants we hypothesized their poses would not influence their emotions, or (c) did not tell participants a hypothesis. Results indicated that demand characteristics moderated the effects of facial poses on self-reported emotion. However, facial poses still influenced self-reported emotion even when participants were told we hypothesized their poses would not influence emotion. These results indicate that facial feedback effects are not solely an artifact of demand characteristics.

