Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants

2016 ◽  
Vol 283 (1844) ◽  
pp. 20161948 ◽  
Author(s):  
Tomoko Isomura ◽  
Tamami Nakano

Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. We therefore examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries, and of the zygomaticus major in response to audiovisual laughter, was observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, neither the visual nor the auditory unimodal emotion stimuli activated the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is available.
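
A minimal sketch of this kind of window-based EMG analysis: mean rectified activity in the 500-1000 ms post-onset window compared against a pre-stimulus baseline across trials. The 1 kHz sampling rate, variable names and the one-sample t-test are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy import stats

FS = 1000  # Hz, assumed EMG sampling rate

def window_mean(emg, t0_ms, t1_ms, onset_idx):
    # Mean rectified EMG between t0_ms and t1_ms relative to stimulus onset.
    a = onset_idx + int(t0_ms * FS / 1000)
    b = onset_idx + int(t1_ms * FS / 1000)
    return np.abs(emg[a:b]).mean()

def mimicry_index(trials, onset_idx):
    # Per-trial change from a -500..0 ms baseline to the 500..1000 ms window.
    post = np.array([window_mean(t, 500, 1000, onset_idx) for t in trials])
    base = np.array([window_mean(t, -500, 0, onset_idx) for t in trials])
    return post - base

# Simulated corrugator trials: 3 s of data per trial, onset at sample 1000.
rng = np.random.default_rng(0)
corrugator_trials = [rng.normal(0, 1, 3000) for _ in range(20)]
delta = mimicry_index(corrugator_trials, onset_idx=1000)
print(stats.ttest_1samp(delta, 0.0))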

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Alexandre C. Fernandes ◽  
Teresa Garcia-Marques

Time perception relies on the motor system and involves core brain regions of this system, including those associated with feelings generated from sensorimotor states. Perceptual timing is also distorted when movement occurs during timing tasks, possibly because movement interferes with sensorimotor afferent feedback. However, it is unknown whether the perception of time is an active process associated with specific patterns of muscle activity. We explored this idea based on the phenomenon of electromyographic gradients, the dynamic increase of muscle activity during cognitive tasks that require sustained attention, a critical function in perceptual timing. We aimed to determine whether dynamic facial muscle activity indexes the subjective representation of time. We asked participants to judge the durations of stimuli (varying in familiarity) while we monitored the time course of activity in the zygomaticus major and corrugator supercilii muscles, both associated with cognitive and affective feelings. The dynamic electromyographic activity in the corrugator supercilii over time reflected objective time, and this relationship predicted subjective judgments of duration. Furthermore, the zygomaticus major muscle signaled the bias that familiarity introduces into duration judgments. This suggests that subjective duration may be an embodied process grounded in motor information that changes over time and in its associated feelings.
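
A rough sketch of how an electromyographic gradient could be related to duration judgments, assuming per-trial EMG arrays and a simple linear-regression readout; names and numbers are illustrative, not the authors' pipeline.

import numpy as np
from scipy import stats

def emg_slope(trial_emg, fs=1000):
    # Linear slope of rectified EMG over the stimulus interval (a.u. per second).
    t = np.arange(trial_emg.size) / fs
    return stats.linregress(t, np.abs(trial_emg)).slope

rng = np.random.default_rng(1)
n_trials = 60
true_durations = rng.uniform(1.0, 4.0, n_trials)          # seconds
trials = [rng.normal(0, 1, int(d * 1000)) for d in true_durations]
slopes = np.array([emg_slope(tr) for tr in trials])
judged = true_durations + rng.normal(0, 0.3, n_trials)    # simulated duration reports

# Does the corrugator gradient carry information about judged duration?
print(stats.linregress(slopes, judged))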


2021 ◽  
Vol 12 ◽  
Author(s):  
Marc A. Nordmann ◽  
Ralf Schäfer ◽  
Tobias Müller ◽  
Matthias Franz

Facial mimicry is the automatic tendency to imitate facial expressions of emotions. Alexithymia is associated with reduced facial mimicry in response to affect expressions of adults. There is evidence that the baby schema may influence this process. In this study it was tested experimentally whether facial mimicry of the alexithymic group (AG) differs from that of the control group (CG) in response to dynamic facial affect expressions of children and adults. A multi-method approach (20-item Toronto Alexithymia Scale and Toronto Structured Interview for Alexithymia) was used to assess levels of alexithymia. From 3503 initial data sets, two groups of 38 high and low alexithymic individuals without relevant mental or physical diseases were matched regarding age, gender, and education. Facial mimicry was induced by presentation of naturalistic affect-expressive video sequences (fear, sadness, disgust, anger, and joy) taken from validated sets of faces of adults (Averaged Karolinska Directed Emotional Faces) and children (Picture-Set of Young Children’s Affective Facial Expressions). The videos started with a neutral face and reached maximum affect expression within 2 s. The responses of the groups were measured by facial electromyographic activity (fEMG) of the corrugator supercilii and zygomaticus major muscles. Differences in fEMG response (4000 ms) were tested in an analysis-of-variance model. There was one significant main effect for the factor emotion and four interaction effects, for group × age, muscle × age, muscle × emotion, and the triple interaction muscle × age × emotion. Participants in the AG showed decreased fEMG activity in response to the presented faces of adults compared with the CG, but not for the faces of children. The affect-expressive faces of children induced enhanced zygomatic and reduced corrugator muscle activity in both groups. Despite existing deficits in the facial mimicry of alexithymic persons, affect-expressive faces of children seem to trigger a stronger positive emotional involvement even in the AG.
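
A highly simplified sketch of a mixed-design analysis of this kind, with group as a between-subjects factor. The pingouin library is an assumption, and the design is collapsed to group × muscle purely for illustration because mixed_anova takes a single within-subjects factor.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(76), 2)                  # 38 AG + 38 CG participants
group = np.repeat(["AG"] * 38 + ["CG"] * 38, 2)
muscle = np.tile(["corrugator", "zygomaticus"], 76)
femg = rng.normal(0, 1, subjects.size)                  # simulated mean fEMG change

df = pd.DataFrame({"subject": subjects, "group": group,
                   "muscle": muscle, "femg": femg})
print(pg.mixed_anova(data=df, dv="femg", within="muscle",
                     subject="subject", between="group"))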


2012 ◽  
Vol 30 (4) ◽  
pp. 361-367 ◽  
Author(s):  
Lisa P. Chan ◽  
Steven R. Livingstone ◽  
Frank A. Russo

We examined facial responses to audio-visual presentations of emotional singing. Although many studies have now found evidence for facial responses to emotional stimuli, most have involved static facial expressions and none have involved singing. Singing represents a dynamic, ecologically valid emotional stimulus with unique demands on orofacial motion that are independent of emotion and related instead to pitch and linguistic production. Observers’ facial muscles were recorded with electromyography while they saw and heard recordings of a vocalist’s performances sung with different emotional intentions (happy, neutral, and sad). Audio-visual presentations successfully elicited facial mimicry in observers that was congruent with the performer’s intended emotions. Happy singing performances elicited increased activity in the zygomaticus major muscle region of observers, while sad performances evoked increased activity in the corrugator supercilii muscle region. These spontaneous facial muscle responses occurred within the first three seconds following onset of the video presentation, indicating that the emotional nuances of singing performances can elicit dynamic facial responses from observers.
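
A minimal sketch of a within-observer comparison over the first 3 s window described above; the sample size and EMG values are invented for illustration and this is not the authors' analysis.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_observers = 30
zyg_happy = rng.normal(1.2, 0.4, n_observers)  # mean rectified EMG, 0-3 s window
zyg_sad = rng.normal(1.0, 0.4, n_observers)
print(stats.ttest_rel(zyg_happy, zyg_sad))     # paired test across observers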


2021 ◽  
Author(s):  
Natalia Albuquerque ◽  
Daniel S. Mills ◽  
Kun Guo ◽  
Anna Wilkinson ◽  
Briseida Resende

The ability to infer emotional states and their wider consequences requires the establishment of relationships between the emotional display and subsequent actions. These abilities, together with the use of emotional information from others in social decision making, are cognitively demanding and require inferential skills that extend beyond the immediate perception of the current behaviour of another individual. They may include predictions of the significance of the emotional states being expressed. These abilities were previously believed to be exclusive to primates. In this study, we presented adult domestic dogs with a social interaction between two unfamiliar people, which could be positive, negative or neutral. After passively witnessing the actors engaging silently with each other and with the environment, dogs were given the opportunity to approach a food resource that varied in accessibility. We found that the available emotional information was more relevant than the motivation of the actors (i.e. giving something or receiving something) in predicting the dogs’ responses. Thus, dogs were able to access implicit information from the actors’ emotional states and appropriately use the affective information to make context-dependent decisions. The findings demonstrate that a non-human animal can actively acquire information from emotional expressions, infer some form of emotional state and use this functionally to make decisions.


2016 ◽  
Vol 12 (1) ◽  
pp. 20150883 ◽  
Author(s):  
Natalia Albuquerque ◽  
Kun Guo ◽  
Anna Wilkinson ◽  
Carine Savalli ◽  
Emma Otta ◽  
...  

The perception of emotional expressions allows animals to evaluate each other's social intentions and motivations. This usually takes place within species; however, in the case of domestic dogs, it might be advantageous to recognize the emotions of humans as well as other dogs. In this sense, combining visual and auditory cues to categorize others' emotions facilitates information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive) paired with a single vocalization from the same individual with either a positive or negative valence or Brownian noise. Dogs looked significantly longer at the face whose expression was congruent with the valence of the vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information, and discriminate between positive and negative emotions from both humans and dogs.
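
A small sketch of the congruence measure implied by a cross-modal preferential looking paradigm: the proportion of looking directed at the valence-congruent face, tested against chance (0.5). All values and the one-sample test are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
look_congruent = rng.uniform(2.0, 5.0, 30)     # seconds of looking per dog
look_incongruent = rng.uniform(1.0, 4.0, 30)

congruence_index = look_congruent / (look_congruent + look_incongruent)
print(congruence_index.mean(), stats.ttest_1samp(congruence_index, 0.5))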


2021 ◽  
Vol 12 ◽  
Author(s):  
Xiaoxiao Li

In the natural environment, facial and bodily expressions influence each other. Previous research has shown that bodily expressions significantly influence the perception of facial expressions. However, little is known about the cognitive processing of facial and bodily emotional expressions and its temporal characteristics. Therefore, this study presented facial and bodily expressions, both separately and together, to examine the electrophysiological mechanisms of emotion recognition using event-related potentials (ERPs). Participants assessed the emotions of facial and bodily expressions that varied by valence (positive/negative) and consistency (matching/non-matching emotions). The results showed that bodily expressions induced a more positive P1 component with a shortened latency, whereas facial expressions triggered a more negative N170 with a prolonged latency. Of the later components, N2 was more sensitive to inconsistent emotional information, whereas P3 was more sensitive to consistent emotional information. The cognitive processing of facial and bodily expressions showed distinctive integration features, with the interaction occurring at an early stage (N170). These results highlight the importance of facial and bodily expressions in the cognitive processing of emotion recognition.
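
A minimal sketch of how component amplitude and latency might be read out of an averaged ERP waveform; the time windows, sampling rate and polarities are common conventions, not values taken from this study.

import numpy as np

FS = 500                                  # Hz, assumed sampling rate
times = np.arange(-0.2, 0.8, 1 / FS)      # epoch from -200 to 800 ms

def component_peak(erp, tmin, tmax, polarity):
    # Peak amplitude and latency (s) within the [tmin, tmax] window.
    mask = (times >= tmin) & (times <= tmax)
    seg = erp[mask]
    idx = np.argmax(seg) if polarity == "pos" else np.argmin(seg)
    return seg[idx], times[mask][idx]

rng = np.random.default_rng(4)
erp = rng.normal(0, 1, times.size)        # placeholder averaged waveform
print("P1  :", component_peak(erp, 0.08, 0.13, "pos"))
print("N170:", component_peak(erp, 0.13, 0.20, "neg"))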


2007 ◽  
Vol 2 (3-4) ◽  
pp. 167-178 ◽  
Author(s):  
Lindsay M. Oberman ◽  
Piotr Winkielman ◽  
Vilayanur S. Ramachandran

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Chun-Ting Hsu ◽  
Wataru Sato ◽  
Sakiko Yoshikawa

Facial expression is an integral aspect of non-verbal communication of affective information. Earlier psychological studies have reported that the presentation of prerecorded photographs or videos of emotional facial expressions automatically elicits divergent responses, such as emotions and facial mimicry. However, such highly controlled experimental procedures may lack the vividness of real-life social interactions. This study incorporated a live image relay system that delivered models’ real-time performance of positive (smiling) and negative (frowning) dynamic facial expressions, or their prerecorded videos, to participants. We measured subjective ratings of valence and arousal and facial electromyography (EMG) activity in the zygomaticus major and corrugator supercilii muscles. Subjective ratings showed that live facial expressions were experienced as higher in valence and more arousing than the corresponding videos in the positive emotion conditions. Facial EMG data showed that, compared with the videos, live facial expressions more effectively elicited facial muscular activity congruent with the models’ positive facial expressions. The findings indicate that emotional facial expressions in live social interactions are more evocative of emotional reactions and facial mimicry than earlier experimental data have suggested.
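
An illustrative sketch of a paired comparison of valence ratings between live and prerecorded presentations; the rating scale, sample size and Wilcoxon test are assumptions, not the study's reported analysis.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
valence_live = rng.normal(6.0, 1.0, 40)    # e.g. ratings on a 1-9 scale
valence_video = rng.normal(5.4, 1.0, 40)
print(stats.wilcoxon(valence_live, valence_video))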


2012 ◽  
Vol 24 (7) ◽  
pp. 1806-1821
Author(s):  
Bernard M. C. Stienen ◽  
Konrad Schindler ◽  
Beatrice de Gelder

Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contributions of feedforward and feedback processing to the recognition of visual stimuli, in this case emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but categorization of the emotional expressions was still above baseline. The data suggest that, although feedback arising from inferotemporal cortex is theoretically likely to be blocked at an SOA of 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.
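
A toy sketch of the model-human comparison implied by the parametric masking design, showing how the gap between human categorization accuracy and a fixed feedforward-model prediction shrinks as SOA grows; all numbers are invented for illustration.

import numpy as np

soas_ms = np.array([16, 33, 50, 66, 83, 100])
human_acc = np.array([0.55, 0.62, 0.70, 0.78, 0.84, 0.88])  # invented accuracies
model_acc = 0.88                                            # feedforward-only model

# The gap to the model shrinks with SOA, with the minimum near 100 ms.
gap = np.abs(human_acc - model_acc)
print(dict(zip(soas_ms.tolist(), np.round(gap, 2).tolist())))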

