Emotional authenticity modulates affective and social trait inferences from voices

2021 ◽
Author(s):  
Ana P. Pinheiro ◽  
Andrey Anikin ◽  
Tatiana Conde ◽  
João Sarzedas ◽  
Sinead Chen ◽  
...  

The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
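
The modelling approach described in this abstract could be sketched, very roughly, as follows. This is an illustrative reconstruction only (using the bambi/PyMC stack), and the column names (pitch_mean, spectral_variability, voiced_fraction, listener, vocalization) are placeholders, not the authors' actual variables or code.

```python
# Hypothetical sketch: a Bayesian mixed model predicting perceived authenticity
# from acoustic features, with random intercepts for listeners and vocalizations.
import pandas as pd
import bambi as bmb
import arviz as az

ratings = pd.read_csv("laughter_ratings.csv")   # assumed long-format trial data

model = bmb.Model(
    "authenticity ~ pitch_mean + spectral_variability + voiced_fraction "
    "+ (1|listener) + (1|vocalization)",
    ratings,
)
idata = model.fit(draws=2000, chains=4)          # posterior sampling via PyMC

# Inspect posterior summaries for the acoustic predictors
print(az.summary(idata, var_names=["pitch_mean", "spectral_variability",
                                   "voiced_fraction"]))
```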


Author(s):  
Susan M. Hughes ◽  
David A. Puts

The human voice is dynamic, and people modulate their voices across different social interactions. This article presents a review of the literature examining natural vocal modulation in social contexts relevant to human mating and intrasexual competition. Altering acoustic parameters during speech, particularly pitch, in response to mating and competitive contexts can influence social perception and indicate certain qualities of the speaker. For instance, a lowered voice pitch is often used to exert dominance, display status and compete with rivals. Changes in voice can also serve as a salient medium for signalling a person's attraction to another, and there is evidence to support the notion that attraction and/or romantic interest can be distinguished through vocal tones alone. Individuals can purposely change their vocal behaviour in an attempt to sound more attractive and to facilitate courtship success. Several findings also point to the effectiveness of vocal change as a mechanism for communicating relationship status. As future studies continue to explore vocal modulation in the arena of human mating, we will gain a better understanding of how and why vocal modulation varies across social contexts and its impact on receiver psychology. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.


Author(s):  
D. Bedoya ◽  
P. Arias ◽  
L. Rachman ◽  
M. Liuni ◽  
C. Canonne ◽  
...  

A wealth of theoretical and empirical arguments has suggested that music triggers emotional responses by resembling the inflections of expressive vocalizations, but these arguments have relied on low-level acoustic parameters (pitch, loudness, speed) that may not, in fact, be processed by the listener in reference to the human voice. Here, we take advantage of recently available computational models that simulate three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed for speech and scream sounds, and identical across musician and non-musician listeners. Strikingly, this holds not only for singing voice with and without musical background, but also for purely instrumental material. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
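
As a concrete illustration of what such an acoustic manipulation can look like, the toy sketch below adds a vocal-tremor-like amplitude modulation to an audio file. It is a simplified stand-in, not the transformation software the authors used, and the file name and parameter values are assumptions.

```python
# Toy approximation of a "vocal tremor" manipulation: a slow sinusoidal
# amplitude modulation applied to an audio signal.
import numpy as np
import soundfile as sf

y, sr = sf.read("melody.wav")        # assumed mono input file
t = np.arange(len(y)) / sr

tremor_rate = 6.0    # Hz, roughly the rate of physiological vocal tremor
tremor_depth = 0.3   # modulation depth (0 = no tremor)

# Amplitude modulation: the signal waxes and wanes at the tremor rate
y_tremor = y * (1.0 + tremor_depth * np.sin(2 * np.pi * tremor_rate * t))

# Normalize and write out the manipulated version
sf.write("melody_tremor.wav", y_tremor / np.max(np.abs(y_tremor)), sr)
```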


Author(s):  
Sophie K. Scott

The networks of cortical and subcortical fields that contribute to speech production have benefitted from many years of detailed study, and have been used as a framework for human volitional vocal production more generally. In this article, I will argue that we need to consider speech production as an expression of the human voice in a more general sense. I will also argue that the neural control of the voice can and should be considered to be a flexible system, into which more right hemispheric networks are differentially recruited, based on the factors that are modulating vocal production. I will explore how this flexible network is recruited to express aspects of non-verbal information in the voice, such as identity and social traits. Finally, I will argue that we need to widen out the kinds of vocal behaviours that we explore, if we want to understand the neural underpinnings of the true range of sound-making capabilities of the human voice. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.


2018 ◽  
Vol 285 (1893) ◽  
pp. 20181634 ◽  
Author(s):  
Katarzyna Pisanski ◽  
Anna Oleszkiewicz ◽  
Justyna Plachetka ◽  
Marzena Gmiterek ◽  
David Reby

Inter-individual differences in human fundamental frequency (F0, perceived as voice pitch) predict mate quality and reproductive success, and affect listeners' social attributions. Although humans can readily and volitionally manipulate their vocal apparatus and resultant voice pitch, for instance, in the production of speech sounds and singing, little is known about whether humans exploit this capacity to adjust the non-verbal dimensions of their voices during social (including sexual) interactions. Here, we recorded full-length conversations of 30 adult men and women taking part in real speed-dating events and tested whether their voice pitch (mean, range and variability) changed with their personal mate choice preferences and the overall desirability of each dating partner. Within-individual analyses indicated that men lowered the minimum pitch of their voices when interacting with women who were overall highly desired by other men. Men also lowered their mean voice pitch on dates with women they selected as potential mates, particularly those who indicated a mutual preference (matches). Interestingly, although women spoke with a higher and more variable voice pitch towards men they selected as potential mates, women lowered both voice pitch parameters towards men who were most desired by other women and whom they also personally preferred. Between-individual analyses indicated that men in turn preferred women with lower-pitched voices, wherein women's minimum voice pitch explained up to 55% of the variance in men's mate preferences. These results, derived in an ecologically valid setting, show that individual- and group-level mate preferences can interact to affect vocal behaviour, and support the hypothesis that human voice modulation functions in non-verbal communication to elicit favourable judgements and behaviours from others, including potential mates.
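
The pitch statistics analysed here (mean, minimum, range and variability of F0) can in principle be computed from a recording along the lines of the sketch below; the pitch tracker, file name and frequency limits are illustrative assumptions, not the study's actual processing pipeline.

```python
# Rough sketch: per-recording F0 statistics from a speech file
import numpy as np
import librosa

y, sr = librosa.load("speed_date_speaker.wav", sr=None)

# Probabilistic YIN F0 tracking over a plausible adult speech range
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
f0_voiced = f0[~np.isnan(f0)]   # keep voiced frames only

stats = {
    "mean_f0": float(np.mean(f0_voiced)),
    "min_f0": float(np.min(f0_voiced)),
    "f0_range": float(np.max(f0_voiced) - np.min(f0_voiced)),
    "f0_sd": float(np.std(f0_voiced)),   # one simple index of variability
}
print(stats)
```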


2014 ◽  
Vol 281 (1785) ◽  
pp. 20133201 ◽  
Author(s):  
Federico Rossano ◽  
Marie Nitzschner ◽  
Michael Tomasello

Domestic dogs are particularly skilled at using human visual signals to locate hidden food. This is, to our knowledge, the first series of studies that investigates the ability of dogs to use only auditory communicative acts to locate hidden food. In a first study, from behind a barrier, a human expressed excitement towards a baited box on either the right or left side, while sitting closer to the unbaited box. Dogs were successful in following the human's voice direction and locating the food. In the two following control studies, we excluded the possibility that dogs could locate the box containing food just by relying on smell, and we showed that they would interpret a human's voice direction in a referential manner only when they could locate a possible referent (i.e. one of the boxes) in the environment. Finally, in a fourth study, we tested 8–14-week-old puppies in the main experimental test and found that those with a reasonable amount of human experience performed overall even better than the adult dogs. These results suggest that domestic dogs’ skills in comprehending human communication are not based on visual cues alone, but are instead multi-modal and highly flexible. Moreover, the similarity between young and adult dogs’ performances has important implications for the domestication hypothesis.


2022 ◽  
Vol 15 ◽  
Author(s):  
Sandra Racionero-Plaza ◽  
Lídia Puigvert ◽  
Marta Soler-Gallart ◽  
Ramon Flecha

Neuroscience has provided ample evidence that the environment and, more specifically, social experience shape and transform the architecture and functioning of the brain, and even its genes. To understand how that happens, however, and which types of social interaction lead to different outcomes in brain and behavior, neuroscience requires the social sciences. The social sciences have already made important contributions to neuroscience, among which behaviorist explanations of human learning are prominent and acknowledged by the most well-known neuroscientists today. Yet neuroscience requires more input from the social sciences to make meaning of new findings about the brain that touch on some of the most profound human questions. When we look at scientific and theoretical production throughout the history of the social sciences, however, we see great fragmentation, with little interdisciplinarity and little connection between what authors in the different disciplines contribute. This can be seen clearly in the field of communicative interaction. This fragmentation has nonetheless been overcome via the theory of communicative acts, which integrates knowledge from language and interaction theories but goes one step further in incorporating other aspects of human communication and the role of context. The theory of communicative acts is highly informative for neuroscience and a central contribution to socioneuroscience, making it possible to deepen our understanding of pressing social problems, such as free and coerced sexual-affective desire, and to achieve social and political impact toward their solution. This manuscript shows that socioneuroscience is an interdisciplinary frontier in which dialogue between the social sciences and the natural sciences opens up an opportunity to integrate different levels of analysis across several sciences and ultimately achieve social impact on the most urgent human problems.


2019 ◽  
Vol 8 (4) ◽  
pp. 7447-7450

The human voice is produced by a complex biological mechanism capable of changing pitch and volume. Internal or external factors frequently damage the vocal cords, reducing voice quality or altering voice modulation. These effects are reflected in the expression of speech and in how well listeners understand what the person says, so it is important to detect voice problems at an early stage and address them. Machine learning (ML) plays a major role in identifying whether a voice is pathological or normal. Voice features are extracted with the Mel-frequency cepstral coefficients (MFCC) method and examined with a convolutional neural network (CNN) to identify the category of voice.
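
A minimal sketch of the pipeline this abstract describes, assuming librosa for MFCC extraction and a small PyTorch CNN; the file path, input size and architecture are placeholders, not the authors' implementation.

```python
# Hypothetical MFCC + CNN pipeline for normal vs. pathological voice
import numpy as np
import librosa
import torch
import torch.nn as nn

def mfcc_features(path, n_mfcc=20, max_frames=200):
    """Load a voice recording and return a fixed-size MFCC matrix."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    # Pad or truncate along time so every sample has the same width
    if mfcc.shape[1] < max_frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, max_frames - mfcc.shape[1])))
    return mfcc[:, :max_frames].astype(np.float32)

class VoiceCNN(nn.Module):
    """Small 2-D CNN over the MFCC 'image'; two output classes."""
    def __init__(self, n_mfcc=20, max_frames=200):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * (n_mfcc // 4) * (max_frames // 4), 2)

    def forward(self, x):                 # x: (batch, 1, n_mfcc, max_frames)
        h = self.conv(x)
        return self.fc(h.flatten(1))      # logits: normal vs. pathological

# Example forward pass on one (hypothetical, untrained) recording
x = torch.from_numpy(mfcc_features("sample_voice.wav")).unsqueeze(0).unsqueeze(0)
logits = VoiceCNN()(x)
print(logits.softmax(dim=-1))             # class probabilities
```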


Author(s):  
Maximilian Schmitt ◽  
Björn W. Schuller

Machines are able to obtain rich information from the human voice with a certain reliability. This can include information about the speaker's affective or mental state, as well as speaker traits. This chapter introduces the technical steps needed in such intelligent voice analysis. Typically, the first step is the extraction of meaningful acoustic features, which are then transformed into a suitable representation. The acoustic information can be augmented by linguistic features derived from a speech-to-text transcription. The features are finally decoded on different levels using machine-learning methods. Recently, ‘deep learning’ has received growing interest, where deep artificial neural networks are used to decode the information. From this, end-to-end learning has evolved, in which even the feature extraction step is learned seamlessly, through to the decoding step, mimicking the recognition process in the human brain. Following the description of these and other frequently encountered methods, the chapter concludes with a future perspective.
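
The classical pipeline outlined here (frame-level acoustic descriptors summarised into fixed-length functionals per utterance, then decoded by a standard classifier) might look roughly like the sketch below; feature choices, file names and labels are illustrative assumptions rather than any specific toolkit's recipe.

```python
# Hedged sketch: utterance-level "functionals" of frame-level descriptors,
# decoded with a standard SVM classifier.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path):
    """Summarise frame-level descriptors (F0, energy, MFCCs) with simple statistics."""
    y, sr = librosa.load(path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
    rms = librosa.feature.rms(y=y)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.array([
        np.nanmean(f0), np.nanstd(f0),           # pitch level and variability
        rms.mean(), rms.std(),                   # loudness level and variability
        *mfcc.mean(axis=1), *mfcc.std(axis=1),   # spectral shape summaries
    ])

# Hypothetical training data: file paths with speaker-state labels
paths, labels = ["calm_01.wav", "stressed_01.wav"], [0, 1]
X = np.vstack([utterance_features(p) for p in paths])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X))
```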


2020 ◽  
Vol 30 (11) ◽  
pp. 6004-6020
Author(s):  
Stella Guldner ◽  
Frauke Nees ◽  
Carolyn McGettigan

Voice modulation is important when navigating social interactions—tone of voice in a business negotiation is very different from that used to comfort an upset child. While voluntary vocal behavior relies on a cortical vocomotor network, social voice modulation may require additional social cognitive processing. Using functional magnetic resonance imaging, we investigated the neural basis for social vocal control and whether it involves an interplay of vocal control and social processing networks. Twenty-four healthy adult participants modulated their voice to express social traits along the dimensions of the social trait space (affiliation and competence) or to express body size (control for vocal flexibility). Naïve listener ratings showed that vocal modulations were effective in evoking social trait ratings along the two primary dimensions of the social trait space. Whereas basic vocal modulation engaged the vocomotor network, social voice modulation specifically engaged social processing regions including the medial prefrontal cortex, superior temporal sulcus, and precuneus. Moreover, these regions showed task-relevant modulations in functional connectivity to the left inferior frontal gyrus, a core vocomotor control network area. These findings highlight the impact of the integration of vocal motor control and social information processing for socially meaningful voice modulation.

