communicative signals
Recently Published Documents

TOTAL DOCUMENTS: 75 (five years: 18)
H-INDEX: 17 (five years: 2)

2022, Vol 76 (1)
Author(s): Alessandro Gallo, Anna Zanoli, Marta Caselli, Ivan Norscia, Elisabetta Palagi

Abstract
Play fighting, the most common form of social play in mammals, is a fertile field for investigating the use of visual signals in animal communication systems. Visual signals can be emitted exclusively during play (e.g. play faces, PF; context-dependent signals), or they can be released across several behavioural domains (e.g. lip-smacking, LS; context-independent signals). Rapid facial mimicry (RFM) is the involuntary, rapid, congruent facial response produced after perceiving others' facial expressions. RFM leads to behavioural and emotional synchronisation that often translates into the most balanced and longest playful interactions. Here, we investigate the role of playful communicative signals in geladas (Theropithecus gelada). We analysed the role of PF and LS produced by wild immature geladas during play fighting. We found that PFs, but not LS, were particularly frequent during the riskiest interactions, such as those including individuals from different groups. Furthermore, we found that RFM (PF→PF) was highest when playful offensive patterns were not biased towards one of the players and when the session was punctuated by LS. From this perspective, the presence of context-independent signals such as LS may be useful in creating an affiliative mood that enhances communication and facilitates the most cooperative interactions. Indeed, we found that the sessions punctuated by the highest frequency of RFM and LS were also the longest ones. Whether the complementary use of PF and LS is strategically guided by the audience or is the result of the emotional arousal experienced by players remains to be investigated.

Significance Statement
Facial expressions and their rapid replication by an observer are fundamental communicative tools during social contacts in human and non-human animals. Play fighting is one of the most complex forms of social interaction and can easily lead to misunderstanding if not modulated through an accurate use of social signals. Wild immature geladas are able to manage their play sessions, thus limiting the risk of aggressive escalation. While playing with unfamiliar subjects belonging to other groups, they make use of a high number of play faces. Moreover, geladas frequently replicate others' play faces and emit facial expressions of positive intent (i.e. lip-smacking) when engaging in well-balanced, long play sessions. In this perspective, this "playful facial chattering" creates an affiliative mood that enhances communication and facilitates the most cooperative interactions.


2021, Vol 5 (2)
Author(s): Nina Lytovchenko, Yaroslava Andrieieva

The article analyses the problem of personality in the modern informational world. The theoretical analysis of this problem is based on the specifics of the current stage of psychological studies of mass communication. The author assumes that social constructivism, as a postmodernist approach and the theoretical basis of modern studies of mass communication, considers how mass communication participants construct their idea of the world and its peculiarities. The concept of the "mosaic-resonance effect" is interpreted as the main feature of present-day mass media messages. The author analyses discourse as an instrument for examining how one's idea of the world is constructed during mass communication. The main characteristics of two models of mass communication, the discourse model by J. Fiske and the constructivist model by W. Gamson, are reviewed in detail. The outcomes of the empirical study point to promising further research on the psychological influence of the mass media on how a person constructs an image of the world and interprets communicative signals. Our empirical study demonstrated that subjects classified as TV-dependent (those who tend to spend a lot of time watching TV content) are characterised by unstable emotions, are less thorough in linking and analysing the details of the given information, and perceive it uncritically. TV-dependent respondents mostly perceive TV data inadequately, paying attention to emotionally meaningful pieces of information and tending to reconstruct their image of the event so that it becomes an illustration or general background of the TV content piece, or information of the lower levels of semantic structure in terms of A. A. Bodalyov.


2021
Author(s): Xianyang Gan, Xinqi Zhou, Jialin Li, Guojuan Jiao, Xi Jiang, ...

Abstract
Disgust represents a multifaceted defensive-avoidance response. On the behavioral level, the response includes withdrawal and a disgust-specific facial expression. While both serve the avoidance of pathogens, the latter additionally transmits social-communicative information. Given that the common and distinct brain representations of the primary defensive-avoidance response (core disgust) and the encoding of the social-communicative signal (social disgust) remain debated, we employed neuroimaging meta-analyses to (1) determine brain systems generally engaged in disgust processing and (2) segregate common and distinct brain systems for core and social disgust. Disgust processing in general engaged a bilateral network encompassing the insula, amygdala, and occipital and prefrontal regions. Core disgust evoked stronger reactivity in a left-lateralized threat-detection and defensive-response network including amygdala, occipital and frontal regions, while social disgust engaged a right-lateralized superior temporal-frontal network involved in social cognition. Anterior insula, inferior frontal and fusiform regions were commonly engaged during core and social disgust, suggesting a common neural basis. We demonstrate a common and separable neural basis of primary disgust responses and the encoding of associated social-communicative signals.


PLoS ONE, 2021, Vol 16 (7), e0255241
Author(s): Kirsty E. Graham, Joanna C. Buryn-Weitzel, Nicole J. Lahiff, Claudia Wilke, Katie E. Slocombe

Joint attention, or sharing attention with another individual about an object or event, is a critical behaviour that emerges in pre-linguistic infants and predicts later language abilities. Given its importance, it is perhaps surprising that there is no consensus on how to measure joint attention in prelinguistic infants. A rigorous definition proposed by Siposova & Carpenter (2019) requires the infant and partner to alternate gaze between an object and each other (coordination of attention) and to exchange communicative signals (explicit acknowledgement of jointly sharing attention). However, Hobson and Hobson (2007) proposed that the quality of gaze between individuals is, in itself, a sufficient communicative signal that demonstrates sharing of attention. They proposed that observers can reliably distinguish "sharing", "checking", and "orienting" looks, but the empirical basis for this claim is limited, as their study focussed on two raters examining looks from 11-year-old children. Here, we analysed categorisations made by 32 naïve raters of 60 looks that infants gave to their mothers, to examine whether the looks could be reliably distinguished according to Hobson and Hobson's definitions. Raters had overall low agreement, and in only 3 out of 26 cases did a significant majority of the raters agree with the judgement of the mother who had received the look. For the looks that raters did agree on at above-chance levels, look duration and the overall communication rate of the mother were identified as cues that raters may have relied upon. In our experiment, naïve third-party observers could not reliably determine the type of look infants gave to their mothers, which indicates that subjective judgements of look type should not be used to identify mutual awareness of sharing attention in infants. Instead, we advocate the use of objective behaviour measurement to infer that interactants know they are 'jointly' attending to an object or event, and we believe this will be a crucial step in understanding the ontogenetic and evolutionary origins of joint attention.


2021
Author(s): Andrea Marotta, Belén Aranda-Martín, Marco De Cono, María Ángeles Ballesteros Duperón, Maria Casagrande, ...

We investigated whether individuals with high levels of autistic traits integrate relevant communicative signals, such as facial expression, when decoding eye-gaze direction. Students with high vs. low scores on the Autism Spectrum Quotient (AQ) performed a task in which they responded to the eye-gaze direction of faces, presented on the left or the right side of the screen, portraying different emotional expressions. In both groups, the identification of gaze direction was faster when the eyes were directed towards the center of the scene. However, only in the low AQ group was this effect larger for happy faces than for neutral faces or faces showing other emotional expressions; high AQ participants were not affected by emotional expressions. These results suggest that individuals with more autistic traits may not integrate multiple communicative signals based on their emotional value.


2021
Author(s): Mathilda Froesel, Maëva Gacoin, Simon Clavagnier, Marc Hauser, Quentin Goudard, ...

Abstract
Social interactions rely on the interpretation of semantic and emotional information, often from multiple sensory modalities. In primates, both audition and vision serve the interpretation of communicative signals. Autistic individuals present deficits in both social communication and audio-visual integration. At present, the neural mechanisms subserving the interpretation of complex audio-visual social events are unknown. Based on heart rate estimates and functional neuroimaging, we show that macaque monkeys associate affiliative facial expressions or social scenes with corresponding affiliative vocalizations, aggressive expressions or scenes with corresponding aggressive vocalizations, and escape visual scenes with scream vocalizations, while suppressing vocalizations that are incongruent with the visual context. This process is subserved by two distinct functional networks, homologous to the human emotional and attentional networks activated during the processing of visual social information. These networks are thus critical for the construction of social meaning representations and provide grounds for the audio-visual deficits observed in autism.

One-sentence summary: Macaques extract social meaning from visual and auditory input, recruiting face and voice patches and a broader emotional and attentional network.


Author(s): Hedwig A. van der Meer, Irina Sheftel-Simanova, Cornelis C. Kan, James P. Trujillo

Abstract
The Actions and Feelings Questionnaire (AFQ) provides a short, self-report measure of how well someone uses and understands visual communicative signals such as gestures. The objective of this study was to translate and cross-culturally adapt the AFQ into Dutch (AFQ-NL) and validate this new version in neurotypical and autistic populations. Translation and adaptation of the AFQ consisted of forward translation, synthesis, back translation, and expert review. To validate the AFQ-NL, we assessed convergent and divergent validity. We additionally assessed internal consistency using Cronbach's alpha. Validation and reliability outcomes were all satisfactory. The AFQ-NL is a valid adaptation that can be used for both autistic and neurotypical populations in the Netherlands.
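The internal-consistency statistic mentioned above, Cronbach's alpha, is computed from the per-item variances and the variance of respondents' total scores. A minimal sketch follows; the data here are hypothetical Likert-scale responses invented for illustration, not the AFQ-NL study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(scores), 3))
```

Values of alpha above roughly 0.7 are conventionally read as satisfactory internal consistency, which is the kind of threshold a validation study like this one reports against.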


2021
Author(s): Mathilda Froesel, Maeva Gacoin, Simon Clavagnier, Marc Hauser, Quentin Goudard, ...

Social interactions rely on the ability to interpret semantic and emotional information, often from multiple sensory modalities. In human and nonhuman primates, both the auditory and visual modalities are used to generate and interpret communicative signals. Individuals with autism show deficits not only in social communication but also in the integration of audio-visual information. At present, we know little about the neural mechanisms that subserve the interpretation of complex social events, including the audio-visual integration that is often required with accompanying communicative signals. Based on heart rate estimates and fMRI in two macaque monkeys (Macaca mulatta), we show that individuals systematically associate affiliative facial expressions or social scenes with corresponding affiliative vocalizations, aggressive facial expressions or social scenes with corresponding aggressive vocalizations, and escape visual scenes with scream vocalizations. In contrast, vocalizations that are incompatible with the visual information are fully suppressed, suggesting top-down regulation over the processing of sensory input. The process of binding audio-visual semantic and contextual information relies on a core functional network involving the superior temporal sulcus (STS) and lateral sulcus (LS). Peak activations in both sulci co-localize with previously described face or voice patches. While all of these regions of interest (ROIs) respond to both auditory and visual information, LS ROIs prefer auditory and audio-visual congruent stimuli, while STS ROIs respond equally to auditory, visual and audio-visual congruent stimuli. To further specify the cortical network involved in the control of this semantic association, we performed a whole-brain gPPI functional connectivity analysis on the cumulated LS and STS ROIs. This gPPI analysis highlights a functional network connected to the LS and STS, involving the anterior cingulate cortex (ACC), area 46 in the dorsolateral prefrontal cortex (DLPFC), the orbitofrontal cortex (OFC), the intraparietal sulcus (IPS), the insular cortex and, subcortically, the amygdala and the hippocampus. Comparing human and macaque results, we propose that the integration of audio-visual information for congruent, meaningful social events involves homologous neural circuitry: specifically, an emotional network composed of the STS, LS, ACC, OFC and limbic areas, including the amygdala, and an attentional network including the STS, LS, IPS and DLPFC. As such, these networks are critical to the amodal representation of social meaning, thereby providing an explanation for some of the deficits observed in autism.


2021, Vol 288 (1947)
Author(s): Cristina Romero-Diaz, Jake A. Pruett, Stephanie M. Campos, Alison G. Ossip-Drahos, J. Jaime Zúñiga-Vega, ...

Behavioural responses to communicative signals combine input from multiple sensory modalities, and signal compensation theory predicts that evolutionary shifts in one sensory modality could impact the response to signals in other sensory modalities. Here, we conducted two types of field experiments with 11 species spread across the lizard genus Sceloporus to test the hypothesis that the loss of visual signal elements affects behavioural responses to a chemical signal (conspecific scents) or to a predominantly visual signal (a conspecific lizard), both of which are used in intraspecific communication. We found that three species that have independently lost a visual signal trait, a colourful belly patch, responded to conspecific scents with increased chemosensory behaviour compared to a chemical control, while species with the belly patch did not. However, most species, with and without the belly patch, responded to live conspecifics with increased visual displays of similar magnitude. While aggressive responses to visual stimuli are taxonomically widespread in Sceloporus, our results suggest that increased chemosensory response behaviour is linked to colour patch loss. Thus, interactions across sensory modalities could constrain the evolution of complex signalling phenotypes, thereby influencing signal diversity.


2021, Vol 11
Author(s): Mircea Zloteanu, Eva G. Krumhuber

People dedicate significant attention to others' facial expressions and to deciphering their meaning. Hence, knowing whether such expressions are genuine or deliberate is important. Early research proposed that authenticity could be discerned from reliable facial muscle activations unique to genuine emotional experiences that are impossible to produce voluntarily. With an increasing body of research, such claims may no longer hold up to empirical scrutiny. In this article, expression authenticity is considered within the context of senders' ability to produce convincing facial displays that resemble genuine affect and human decoders' judgments of expression authenticity. This includes a discussion of spontaneous vs. posed expressions, as well as appearance- vs. elicitation-based approaches for defining emotion recognition accuracy. We further expand on the functional role of facial displays as neurophysiological states and communicative signals, thereby drawing upon the encoding-decoding and affect-induction perspectives of emotion expressions. Theoretical and methodological issues are addressed with the aim of instigating greater conceptual and operational clarity in future investigations of expression authenticity.

