The face never lies: facial expressions and mimicry modulate playful interactions in wild geladas

2022
Vol 76 (1)
Author(s):
Alessandro Gallo
Anna Zanoli
Marta Caselli
Ivan Norscia
Elisabetta Palagi

Play fighting, the most common form of social play in mammals, is a fertile field for investigating the use of visual signals in animals' communication systems. Visual signals can be emitted exclusively during play (e.g. play faces, PF; context-dependent signals), or they can be released across several behavioural domains (e.g. lip-smacking, LS; context-independent signals). Rapid facial mimicry (RFM) is the involuntary, rapid, congruent facial response produced after perceiving others' facial expressions. RFM leads to behavioural and emotional synchronisation that often translates into the most balanced and longest playful interactions. Here, we investigate the role of playful communicative signals in geladas (Theropithecus gelada) by analysing the PF and LS produced by wild immature geladas during play fighting. We found that PF, but not LS, were particularly frequent during the riskiest interactions, such as those involving individuals from different groups. Furthermore, RFM (PF→PF) was highest when playful offensive patterns were not biased towards one of the players and when the session was punctuated by LS. From this perspective, the presence of context-independent signals such as LS may be useful in creating an affiliative mood that enhances communication and facilitates the most cooperative interactions. Indeed, sessions punctuated by the highest frequency of RFM and LS were also the longest ones. Whether the complementary use of PF and LS is strategically guided by the audience or is the result of the emotional arousal experienced by the players remains to be investigated.

Significance Statement
Facial expressions and their rapid replication by an observer are fundamental communicative tools during social contacts in human and non-human animals. Play fighting is one of the most complex forms of social interaction and can easily lead to misunderstanding if it is not modulated through an accurate use of social signals. Wild immature geladas are able to manage their play sessions in ways that limit the risk of aggressive escalation. While playing with unfamiliar subjects belonging to other groups, they make use of a high number of play faces. Moreover, geladas frequently replicate others' play faces and emit facial expressions of positive intent (i.e. lip-smacking) when engaging in well-balanced, long play sessions. From this perspective, this "playful facial chattering" creates an affiliative mood that enhances communication and facilitates the most cooperative interactions.

Cortical neurons that are selectively sensitive to faces, parts of faces, and particular facial expressions are concentrated in the banks and floor of the superior temporal sulcus in macaque monkeys. Their existence has prompted suggestions that it is damage to the corresponding region of the human brain that leads to prosopagnosia: the inability to recognize or discriminate between faces. This was tested by removing the face-cell area in a group of monkeys. The animals learned to discriminate between pictures of faces or inanimate objects, to select the odd face from a group, to inspect a face and then select the matching face from a pair after a variable delay, to discriminate between novel and familiar faces, and to identify specific faces. Removing the face-cell area produced little or no impairment, and the impairment that did appear on the last task was not specific to faces. In contrast, several prosopagnosic patients were impaired at several of these tasks. The animals were, however, less able than before to discern the angle of regard in pictures of faces, suggesting that this area of the brain may be concerned with the perception of facial expression and bearing, which are important social signals in primates.


Perception
2016
Vol 46 (5)
pp. 624-631
Author(s):
Andreas M. Baranowski
H. Hecht

Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which shots of different objects were intercut with a given shot of a neutral face. The audience is said to have interpreted the unchanged facial expression as a function of the added object (e.g., an added bowl of soup made the face appear hungry). This interaction effect has been dubbed the "Kuleshov effect." In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, accompanied by either happy music, sad music, or no music at all. This was crossed with happy, sad, or neutral facial expressions. We found that the music significantly influenced participants' emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought: they alter the evaluation of film scenes and can give meaning to ambiguous situations.


2009
Vol 364 (1535)
pp. 3497-3504
Author(s):
Ursula Hess
Reginald B. Adams
Robert E. Kleck

Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.


2020
Vol 8
Author(s):
Nour Mheidly
Mohamad Y. Fares
Hussein Zalzale
Jawad Fares

Interpersonal communication has been severely affected during the COVID-19 pandemic. Protective measures, such as social distancing and face masks, are essential to efforts to mitigate the spread of the virus, but they pose challenges for daily face-to-face communication. Face masks, in particular, muffle sounds and cover facial expressions that ease comprehension during live communication. Here, we explore the role of facial expressions in communication and highlight how the face mask can hinder interpersonal connection. In addition, we offer coping strategies and skills that can ease communication with face masks as we navigate the current and any future pandemic.


2021
Author(s):
Christian Kliesch
Eugenio Parise
Vincent M. Reid
Stefanie Hoehl

Learning about actions requires children to identify the boundaries of an action and its units. Some action units are easily identified; for others, parents can support children's action learning by adjusting their presentation and using social signals. However, little is currently understood about how children use these signals to learn actions. In the current study we investigate the possibility that communicative signals are a particularly suitable cue for segmenting events. We tested this hypothesis by presenting 18-month-old children (N=60) with short action sequences consisting of toy animals either hopping or sliding across a board into a house, interrupting this two-step sequence (a) with an ostensive signal as a segmentation cue, (b) with a non-ostensive segmentation cue, or (c) without additional segmentation information between the actions. Marking the boundary with communicative signals increased children's imitation of the less salient sliding action, whereas imitation of the hopping action remained unaffected. Crucially, marking the boundary between the actions with a non-communicative cue did not increase imitation of either action. Communicative signals might be particularly suitable for segmenting non-salient actions that would otherwise be perceived as part of another action or as non-intentional. These results provide evidence of the importance of ostensive signals at event boundaries in scaffolding children's learning.


Author(s):  
Kayley Birch-Hurst
Magdalena Rychlowska
Michael B. Lewis
Ross E. Vanderwert

People tend to automatically imitate others' facial expressions of emotion. That reaction, termed "facial mimicry," has been linked to sensorimotor simulation, a process in which the observer's brain recreates and mirrors the emotional experience of the other person, potentially enabling empathy and deep, motivated processing of social signals. However, the neural mechanisms that underlie sensorimotor simulation remain unclear. This study tests how interfering with facial mimicry, by asking participants to hold a pen in their mouth, influences the activity of the human mirror neuron system, indexed by desynchronization of the EEG mu rhythm. This response arises from sensorimotor brain areas during observed and executed movements and has been linked with empathy. We recorded EEG during passive viewing of dynamic facial expressions of anger, fear, and happiness, as well as of nonbiological moving objects. We examine mu desynchronization under conditions of free versus altered facial mimicry and show that desynchronization is present when adult participants can move their faces freely but not when their facial movements are inhibited. Our findings highlight the importance of motor activity and facial expression in emotion communication. They also have important implications for behaviors that involve occupying or hiding the lower part of the face.


2018
Author(s):
Mariana R. Pereira
Tiago O. Paiva
Fernando Barbosa
Pedro R. Almeida
Eva C. Martins
...  

Typicality, or averageness, is one of the key features that influence face evaluation, but the role of this property in the perception of facial expressions of emotion is still not fully understood. Typical faces are usually considered more pleasant and trustworthy, and neuroimaging results suggest that typicality modulates amygdala and fusiform activation, influencing face perception. At the same time, there is evidence that arousal is a key affective feature that modulates neural reactivity to emotional expressions. It therefore remains unclear whether the neural effects of typicality depend on altered perceptions of affect from facial expressions or whether the effects of typicality and affect modulate face processing independently. The goal of this work was to dissociate the effects of typicality and affective properties, namely valence and arousal, in electrophysiological responses and self-reported ratings across several facial expressions of emotion. Two ERP components relevant for face processing were measured, the N170 and the Vertex Positive Potential (VPP), complemented by subjective ratings of typicality, valence, and arousal, in a sample of 30 healthy young adults (21 female). The results point to a modulation of the electrophysiological responses by arousal, regardless of the typicality or valence properties of the face. These findings suggest that previously reported neural responses to typicality may be better explained by accounting for the subjective perception of arousal in facial expressions.


2013
Vol 37 (2)
pp. 154-159
Author(s):
Silvia Rigato
Enrica Menon
Valentina Di Gangi
Nathalie George
Teresa Farroni

Faces convey many signals (e.g., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions, and in this paper we specifically address whether such emotional signals modulate behavior in a gaze-following context. A classic spatial cueing paradigm was used to assess whether different facial expressions may cause differential orienting response times and modulate the visual response to a peripheral target in adults and in 4-month-old infants. Results showed that both adults and infants oriented towards a peripheral target when a central face gazed in the direction of the target location. However, in adults this effect occurred regardless of the facial expression displayed by the face. In contrast, in infants the emotional facial expressions used in the current study did not facilitate the attention shift but rather tended to hold infants' attention.

