nonverbal vocalizations
Recently Published Documents

Total documents: 30 (last five years: 18)
H-index: 6 (last five years: 2)

2022 · Vol 9 (1)
Author(s): Andrey Anikin, Katarzyna Pisanski, David Reby

When producing intimidating aggressive vocalizations, humans and other animals often extend their vocal tracts to lower their voice resonance frequencies (formants) and thus sound big. Is acoustic size exaggeration more effective when the vocal tract is extended before, or during, the vocalization, and how do listeners interpret within-call changes in apparent vocal tract length? We compared perceptual effects of static and dynamic formant scaling in aggressive human speech and nonverbal vocalizations. Acoustic manipulations corresponded to elongating or shortening the vocal tract either around (Experiment 1) or from (Experiment 2) its resting position. Gradual formant scaling that preserved average frequencies conveyed the impression of smaller size and greater aggression, regardless of the direction of change. Vocal tract shortening from the original length conveyed smaller size and less aggression, whereas vocal tract elongation conveyed larger size and more aggression, and these effects were stronger for static than for dynamic scaling. Listeners familiarized with the speaker's natural voice were less often ‘fooled’ by formant manipulations when judging speaker size, but paid more attention to formants when judging aggressive intent. Thus, within-call vocal tract scaling conveys emotion, but a better way to sound large and intimidating is to keep the vocal tract consistently extended.
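
The size-exaggeration logic rests on the physics of vocal tract resonances. In the classic uniform-tube approximation, where the tract is a tube closed at the glottis and open at the lips, the formants fall at F_n = (2n - 1) * c / (4L), so lengthening the tract lowers every formant by the same proportion. A minimal Python sketch of this relationship; the 17.5 cm resting length and the +/-15% scaling factors are illustrative assumptions, not parameters from the study, which manipulated formants in resynthesized recordings:

```python
# Uniform-tube model of the vocal tract: a tube closed at one end
# (glottis) and open at the other (lips) resonates at
# F_n = (2n - 1) * c / (4 * L).
C = 35000.0  # approximate speed of sound in warm, humid air (cm/s)

def formants(tract_length_cm, n_formants=4):
    """First n resonance frequencies (Hz) of a uniform tube."""
    return [(2 * n - 1) * C / (4 * tract_length_cm)
            for n in range(1, n_formants + 1)]

resting = 17.5  # assumed resting vocal tract length (cm), illustrative
for scale, label in [(0.85, "shortened"), (1.0, "resting"), (1.15, "elongated")]:
    length = resting * scale
    freqs = ", ".join(f"F{i + 1}={f:.0f} Hz" for i, f in enumerate(formants(length)))
    print(f"{label:>9} ({length:.1f} cm): {freqs}")
```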


Author(s): Roza G. Kamiloğlu, Disa A. Sauter

The voice is a prime channel of communication in humans and other animals. Voices convey many kinds of information, including physical characteristics like body size and sex, as well as providing cues to the vocalizing individual’s identity and emotional state. Vocalizations are produced by dynamic modifications of the physiological vocal production system. The source-filter theory explains how vocalizations are produced in two stages: (a) the production of a sound source in the larynx, and (b) the filtering of that sound by the vocal tract. This two-stage process largely applies to all primate vocalizations. However, there are some differences between the vocal production apparatus of humans as compared to nonhuman primates, such as the lower position of the larynx and lack of air sacs in humans. Thanks to our flexible vocal apparatus, humans can produce a range of different types of vocalizations, including spoken language, nonverbal vocalizations, whispering, and singing. A comprehensive understanding of vocal communication takes both production and perception of vocalizations into account. Internal processes are expressed in the form of specific acoustic patterns in the producer’s voice. In order to communicate information in vocalizations, those acoustic patterns must be acoustically registered by listeners via auditory perception mechanisms. Both production and perception of vocalizations are affected by psychobiological mechanisms as well as sociocultural factors. Furthermore, vocal production and perception can be impaired by a range of different disorders. Vocal production and hearing disorders, as well as mental disorders including autism spectrum disorder, depression, and schizophrenia, affect vocal communication.
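
The two stages of the source-filter account can be made concrete with a toy synthesizer: generate a periodic glottal source first, then pass it through a cascade of resonators standing in for the vocal tract formants. A minimal sketch, assuming textbook-style formant frequencies and bandwidths for a neutral vowel (all parameter values are illustrative, not drawn from the chapter):

```python
import numpy as np
from scipy.signal import lfilter

FS = 16000   # sample rate (Hz)
F0 = 120     # fundamental frequency of the source (Hz)

# Stage 1: the source -- an impulse train at the glottal pulse rate.
n = int(0.5 * FS)                 # 0.5 s of signal
source = np.zeros(n)
source[::FS // F0] = 1.0

# Stage 2: the filter -- one second-order resonator per formant.
def resonator(x, freq, bw, fs=FS):
    """Two-pole resonator; b = [1 - r] is only a rough gain normalization."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    return lfilter([1.0 - r], [1.0, -2 * r * np.cos(theta), r * r], x)

signal = source
for freq, bw in [(500, 80), (1500, 100), (2500, 120)]:  # illustrative formants
    signal = resonator(signal, freq, bw)

signal /= np.max(np.abs(signal))  # normalize; write to a WAV file to listen
```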


2021 · Vol 17 (9)
Author(s): Katarzyna Pisanski, Agata Groyecka-Bernard, Piotr Sorokowski

Fundamental frequency (fo), perceived as voice pitch, is the most sexually dimorphic, perceptually salient and intensively studied voice parameter in human nonverbal communication. Thousands of studies have linked human fo to biological and social speaker traits and life outcomes, from reproductive to economic. Critically, researchers have used myriad speech stimuli to measure fo and infer its functional relevance, from individual vowels to longer bouts of spontaneous speech. Here, we acoustically analysed fo in nearly 1000 affectively neutral speech utterances (vowels, words, counting, greetings, read paragraphs and free spontaneous speech) produced by the same 154 men and women, aged 18–67, with two aims: first, to test the methodological validity of comparing fo measures from diverse speech stimuli, and second, to test the prediction that the vast inter-individual differences in habitual fo found between same-sex adults are preserved across speech types. Indeed, despite differences in linguistic content, duration, scripted or spontaneous production and within-individual variability, we show that 42–81% of inter-individual differences in fo can be explained between any two speech types. Beyond methodological implications, together with recent evidence that inter-individual differences in fo are remarkably stable across the lifespan and generalize to emotional speech and nonverbal vocalizations, our results further substantiate voice pitch as a robust and reliable biomarker in human communication.
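
In practice, fo is estimated from the quasi-periodicity of the voiced waveform; a common baseline method is autocorrelation peak-picking within a plausible pitch range. A minimal sketch on synthetic frames (the 75-300 Hz search range is a conventional adult-speech assumption, not a value from the paper):

```python
import numpy as np

FS = 16000  # sample rate (Hz)

def estimate_f0(frame, fs=FS, fmin=75, fmax=300):
    """Autocorrelation-based fo estimate (Hz) for one voiced frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # plausible pitch-period lags
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

# Synthetic 40 ms "voiced" frames at known pitches, standing in for
# vowels, counting, read speech, etc.
t = np.arange(int(0.04 * FS)) / FS
for true_f0 in (110, 165, 220):
    frame = np.sin(2 * np.pi * true_f0 * t) + 0.5 * np.sin(4 * np.pi * true_f0 * t)
    print(f"true {true_f0} Hz -> estimated {estimate_f0(frame):.1f} Hz")
```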


Author(s): Roza G. Kamiloğlu, George Boateng, Alisa Balabanova, Chuting Cao, Disa A. Sauter

The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners’ (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalizations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations compared to speech prosody.
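
The acoustic classification step follows a standard recipe: describe each vocalization by a vector of acoustic features, train a classifier to predict the emotion category, and treat cross-validated accuracy as an index of how distinctive the acoustic patterns are. A minimal sketch on synthetic feature vectors (the feature count, model and data are placeholders; the paper's actual pipeline may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 880 stimuli x 4 acoustic features (e.g. mean pitch,
# pitch variability, duration, spectral centroid), 22 emotion categories.
n_stimuli, n_features, n_emotions = 880, 4, 22
labels = rng.integers(0, n_emotions, n_stimuli)
centers = rng.normal(0, 1, (n_emotions, n_features))   # per-emotion signature
features = centers[labels] + rng.normal(0, 0.5, (n_stimuli, n_features))

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, features, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = {1 / n_emotions:.2f})")
```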


2021 · Vol 288 (1954) · pp. 20210872
Author(s): Andrey Anikin, Katarzyna Pisanski, Mathilde Massenet, David Reby

A lion's roar, a dog's bark, an angry yell in a pub brawl: what do these vocalizations have in common? They all sound harsh due to nonlinear vocal phenomena (NLP)—deviations from regular voice production, hypothesized to lower perceived voice pitch and thereby exaggerate the apparent body size of the vocalizer. To test this yet uncorroborated hypothesis, we synthesized human nonverbal vocalizations, such as roars, groans and screams, with and without NLP (amplitude modulation, subharmonics and chaos). We then measured their effects on nearly 700 listeners' perceptions of three psychoacoustic (pitch, timbre, roughness) and three ecological (body size, formidability, aggression) characteristics. In an explicit rating task, all NLP lowered perceived voice pitch, increased voice darkness and roughness, and caused vocalizers to sound larger, more formidable and more aggressive. Key results were replicated in an implicit associations test, suggesting that the ‘harsh is large’ bias will arise in ecologically relevant confrontational contexts that involve a rapid, and largely implicit, evaluation of the opponent's size. In sum, nonlinearities in human vocalizations can flexibly communicate both formidability and intention to attack, suggesting they are not a mere byproduct of loud vocalizing, but rather an informative acoustic signal well suited for intimidating potential opponents.
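
Two of the manipulated nonlinear phenomena are straightforward to illustrate on a plain tone: amplitude modulation imposes a slow envelope on the waveform, and subharmonics add energy at integer fractions of the fundamental, lowering the perceived pitch. A minimal sketch (rates and depths are illustrative; the study used parametric voice synthesis, and deterministic chaos is omitted here because it does not reduce to a few lines):

```python
import numpy as np

FS = 16000
t = np.arange(int(0.5 * FS)) / FS          # 0.5 s time axis
f0 = 150                                   # base fundamental (Hz)
tone = np.sin(2 * np.pi * f0 * t)          # regular, "clean" voicing stand-in

# Amplitude modulation: a pulsed, rough quality at a sub-f0 rate (40 Hz).
am = tone * (1 + 0.6 * np.sin(2 * np.pi * 40 * t))

# Subharmonics: an added component at f0/2, heard as a pitch drop/roughness.
sub = tone + 0.5 * np.sin(2 * np.pi * (f0 / 2) * t)

# Normalize each variant; write to WAV files to compare by ear.
variants = {name: s / np.max(np.abs(s)) for name, s in
            [("plain", tone), ("amplitude-modulated", am), ("subharmonic", sub)]}
```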


Author(s): Gonçalo Cosme, Vânia Tavares, Guilherme Nobre, César Lima, Rui Sá, ...

Cross-cultural studies of emotion recognition in nonverbal vocalizations support not only the universality hypothesis for its innate features, but also an in-group advantage for culture-dependent features. Nevertheless, such studies have not always accounted for differences in socio-economic-educational status, idiomatic translation of emotional concepts remains a limitation, and the underlying psychophysiological mechanisms are still unresearched. We set out to investigate whether native residents of Guinea-Bissau (West African culture) and Portugal (Western European culture)—matched for socio-economic-educational status, sex and language—varied in behavioural and autonomic system responses during emotion recognition of nonverbal vocalizations produced by Portuguese individuals. Overall, Guinea-Bissauans (the out-group) responded significantly less accurately (corrected p < .05) and more slowly, and showed a trend toward higher concomitant skin conductance, compared to the Portuguese (the in-group)—findings which may indicate higher cognitive effort stemming from greater difficulty in discerning emotions from another culture. Accuracy differences were found particularly for pleasure, amusement and anger, rather than for sadness, relief or fear. Nevertheless, both cultures recognized all emotions above chance level. Perceived authenticity, measured for the first time in cross-cultural nonverbal research on the same vocalizations, showed no between-culture difference in accuracy, but still a slower response from the out-group. Lastly, we provide—to our knowledge—a first account of how skin conductance responses vary between nonverbally vocalized emotions, with significant differences (p < .05). In sum, we provide behavioural and psychophysiological data, demographically and language-matched, that support cultural and emotion effects on vocal emotion recognition and perceived authenticity, as well as the universality hypothesis.


Author(s): Helena S. Moreira, Ana Sofia Costa, Álvaro Machado, São Luís Castro, Selene G. Vicente, ...

Objective: The ability to recognize others’ emotions is a central aspect of socioemotional functioning. Emotion recognition impairments are well documented in Alzheimer’s disease and other dementias, but it is less understood whether they are also present in mild cognitive impairment (MCI). Results on facial emotion recognition are mixed, and crucially, it remains unclear whether the potential impairments are specific to faces or extend across sensory modalities. Method: In the current study, 32 MCI patients and 33 cognitively intact controls completed a comprehensive neuropsychological assessment and two forced-choice emotion recognition tasks, including visual and auditory stimuli. The emotion recognition tasks required participants to categorize emotions in facial expressions and in nonverbal vocalizations (e.g., laughter, crying) expressing neutrality, anger, disgust, fear, happiness, pleasure, surprise, or sadness. Results: MCI patients performed worse than controls for both facial expressions and vocalizations. The effect was large, similar across tasks and individual emotions, and it was not explained by sensory losses or affective symptomatology. Emotion recognition impairments were more pronounced among patients with lower global cognitive performance, but they did not correlate with the ability to perform activities of daily living. Conclusions: These findings indicate that MCI is associated with emotion recognition difficulties and that such difficulties extend beyond vision, plausibly reflecting a failure at supramodal levels of emotional processing. This highlights the importance of considering emotion recognition abilities as part of standard neuropsychological testing in MCI, and as a target of interventions aimed at improving social cognition in these patients.


2021 · Vol 11 (1)
Author(s): Gonçalo Cosme, Pedro J. Rosa, César F. Lima, Vânia Tavares, Sophie Scott, ...

The ability to infer the authenticity of others’ emotional expressions is a social cognitive process taking place in all human interactions. Although the neurocognitive correlates of authenticity recognition have been probed, its potential recruitment of the peripheral autonomic nervous system is not known. In this work, we asked participants to rate the authenticity of authentic and acted laughs and cries, while simultaneously recording their pupil size, taken as a proxy of cognitive effort and arousal. We report, for the first time, that acted laughs elicited higher pupil dilation than authentic ones and, conversely, that authentic cries elicited higher pupil dilation than acted ones. We tentatively suggest that the lack of authenticity in others’ laughs elicits increased pupil dilation by demanding higher cognitive effort, and that, conversely, authenticity in cries increases pupil dilation by eliciting higher emotional arousal. We also show authentic vocalizations and laughs (i.e. main effects of authenticity and emotion) to be perceived as more authentic, arousing and contagious than acted vocalizations and cries, respectively. In conclusion, we show new evidence that the recognition of emotional authenticity can be manifested at the level of the autonomic nervous system in humans. Nevertheless, given the novelty of these findings, further independent research is warranted to ascertain their psychological meaning.


2021
Author(s): Roza Gizem Kamiloglu, Disa Sauter

When we hear another person laugh or scream, can we tell the kind of situation they are in – whether they are playing or fighting? If nonverbal expressions vary systematically across behavioral contexts, perceivers might be sensitive to these mappings and consequently be able to tell the contexts from others’ vocalizations. Here, we test the prediction that listeners can infer production contexts from vocalizations by examining listeners’ ability to match spontaneous nonverbal vocalizations to the behavioral contexts in which they were produced. In a preregistered experiment, listeners (N = 3120) matched 200 nonverbal vocalizations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalizations to nine of the behavioral contexts. We also found that listeners’ performance was more accurate for vocalizations produced in negative as compared to positive contexts. These results indicate that perceivers can accurately infer contextual information from nonverbal vocalizations, demonstrating that listeners are sensitive to systematic associations between vocalizations and behavioral contexts.
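
The signal detection analysis reduces, for each behavioral context, to a sensitivity index computed from hit and false-alarm rates over the yes/no responses. A minimal sketch of d' with the common log-linear correction for extreme rates (the trial counts are invented for illustration):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' with a log-linear correction for 0 or 1 rates."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Invented counts for one context: "yes" responses to vocalizations that
# match the context (hits) vs. to vocalizations that do not (false alarms).
print(f"d' = {d_prime(140, 60, 300, 1500):.2f}")  # > 0 means above-chance matching
```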


2020
Author(s): Cesar Lima, Patricia Arriaga, Andrey Anikin, Ana Rita Pires, Sofia Frade, ...

The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear, however, whether and how similar mechanisms extend to audition. To address this issue, we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect a genuine or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Genuine laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to performance in a subsequent authenticity detection task. Stronger responses in the orbicularis predicted improved recognition of genuine laughs. Stronger responses in the corrugator, a muscle associated with negative affect, predicted improved recognition of posed laughs. Moreover, genuine laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, though. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
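
The analysis logic, a physiological response predicted by stimulus category and authenticity with by-participant grouping, can be sketched with a mixed model. Note that the study fitted Bayesian mixed models; this sketch uses a frequentist analogue in statsmodels on synthetic data (all variable names and effect sizes are placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Placeholder trial-level data: one EMG response per trial for
# 100 participants hearing laughs vs. cries, genuine vs. posed.
n_part, n_trials = 100, 20
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_part), n_trials),
    "emotion": rng.choice(["laugh", "cry"], n_part * n_trials),
    "authenticity": rng.choice(["genuine", "posed"], n_part * n_trials),
})
df["emg"] = (rng.normal(0, 1, len(df))
             + 0.5 * (df["emotion"] == "laugh")             # laughs > cries
             + 0.3 * ((df["emotion"] == "laugh")
                      & (df["authenticity"] == "genuine"))) # genuine laughs highest

model = smf.mixedlm("emg ~ emotion * authenticity", df,
                    groups=df["participant"]).fit()
print(model.summary())
```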

