Vocal modulation in human mating and competition

Author(s): Susan M. Hughes, David A. Puts

The human voice is dynamic, and people modulate their voices across different social interactions. This article presents a review of the literature examining natural vocal modulation in social contexts relevant to human mating and intrasexual competition. Altering acoustic parameters during speech, particularly pitch, in response to mating and competitive contexts can influence social perception and indicate certain qualities of the speaker. For instance, a lowered voice pitch is often used to exert dominance, display status and compete with rivals. Changes in voice can also serve as a salient medium for signalling a person's attraction to another, and there is evidence to support the notion that attraction and/or romantic interest can be distinguished through vocal tones alone. Individuals can purposely change their vocal behaviour in an attempt to sound more attractive and to facilitate courtship success. Several findings also point to the effectiveness of vocal change as a mechanism for communicating relationship status. As future studies continue to explore vocal modulation in the arena of human mating, we will gain a better understanding of how and why vocal modulation varies across social contexts and its impact on receiver psychology. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.

2018, Vol 285 (1893), pp. 20181634
Author(s): Katarzyna Pisanski, Anna Oleszkiewicz, Justyna Plachetka, Marzena Gmiterek, David Reby

Inter-individual differences in human fundamental frequency (F0, perceived as voice pitch) predict mate quality and reproductive success, and affect listeners' social attributions. Although humans can readily and volitionally manipulate their vocal apparatus and resultant voice pitch, for instance, in the production of speech sounds and singing, little is known about whether humans exploit this capacity to adjust the non-verbal dimensions of their voices during social (including sexual) interactions. Here, we recorded full-length conversations of 30 adult men and women taking part in real speed-dating events and tested whether their voice pitch (mean, range and variability) changed with their personal mate choice preferences and the overall desirability of each dating partner. Within-individual analyses indicated that men lowered the minimum pitch of their voices when interacting with women who were overall highly desired by other men. Men also lowered their mean voice pitch on dates with women they selected as potential mates, particularly those who indicated a mutual preference (matches). Interestingly, although women spoke with a higher and more variable voice pitch towards men they selected as potential mates, women lowered both voice pitch parameters towards men who were most desired by other women and whom they also personally preferred. Between-individual analyses indicated that men in turn preferred women with lower-pitched voices, wherein women's minimum voice pitch explained up to 55% of the variance in men's mate preferences. These results, derived in an ecologically valid setting, show that individual- and group-level mate preferences can interact to affect vocal behaviour, and support the hypothesis that human voice modulation functions in non-verbal communication to elicit favourable judgements and behaviours from others, including potential mates.
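
The pitch parameters analysed here (mean, minimum, range and variability of F0) are simple summaries of a frame-by-frame F0 contour. Below is a minimal sketch of how such summaries can be computed, assuming an F0 track has already been extracted by a pitch tracker and unvoiced frames are marked as NaN; the function and the example values are illustrative, not the authors' analysis pipeline.

```python
import numpy as np

def pitch_statistics(f0_hz):
    """Summarise an F0 contour (Hz); unvoiced frames are NaN."""
    voiced = f0_hz[~np.isnan(f0_hz)]              # keep voiced frames only
    return {
        "mean_f0": voiced.mean(),                 # mean voice pitch
        "min_f0": voiced.min(),                   # minimum pitch
        "range_f0": voiced.max() - voiced.min(),  # pitch range
        "sd_f0": voiced.std(ddof=1),              # pitch variability
    }

# Hypothetical F0 track sampled every 10 ms during one speed-date turn
f0 = np.array([110.0, 112.5, np.nan, 108.2, 115.0, 119.4, np.nan, 105.7])
print(pitch_statistics(f0))
```

Within-individual pitch modulation can then be examined by comparing these summaries across the same speaker's different dating partners.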


Author(s): D. Bedoya, P. Arias, L. Rachman, M. Liuni, C. Canonne, ...

A wealth of theoretical and empirical arguments has suggested that music triggers emotional responses by resembling the inflections of expressive vocalizations, but has done so using low-level acoustic parameters (pitch, loudness, speed) that, in fact, may not be processed by the listener in reference to the human voice. Here, we take advantage of recently available computational models that allow the simulation of three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed for speech and scream sounds, and identical across musician and non-musician listeners. Strikingly, this applied not only to the singing voice with and without musical background, but also to purely instrumental material. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
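
The study relied on dedicated voice-transformation software; purely as a toy illustration of what a "vocal tremor" manipulation does to a signal, the sketch below applies slow frequency modulation to a synthetic tone. The modulation rate and depth are assumed values chosen for illustration, not parameters from the paper.

```python
import numpy as np

SR = 22050            # sample rate (Hz)
DUR = 1.0             # duration (s)
F0 = 220.0            # carrier "voice" frequency (Hz)
TREMOR_RATE = 6.0     # modulation rate (Hz), roughly in the vocal-tremor range (assumed)
TREMOR_DEPTH = 0.03   # +/-3% frequency excursion (assumed)

t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)
# Instantaneous frequency slowly oscillates around F0 (frequency modulation)
inst_freq = F0 * (1.0 + TREMOR_DEPTH * np.sin(2.0 * np.pi * TREMOR_RATE * t))
phase = 2.0 * np.pi * np.cumsum(inst_freq) / SR
tremor_tone = 0.5 * np.sin(phase)   # a steady tone with a tremor-like wobble
print(tremor_tone.shape)            # could be written to a WAV file for listening
```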


Author(s): Sophie K. Scott

The networks of cortical and subcortical fields that contribute to speech production have benefitted from many years of detailed study, and have been used as a framework for human volitional vocal production more generally. In this article, I will argue that we need to consider speech production as an expression of the human voice in a more general sense. I will also argue that the neural control of the voice can and should be considered to be a flexible system, into which more right hemispheric networks are differentially recruited, based on the factors that are modulating vocal production. I will explore how this flexible network is recruited to express aspects of non-verbal information in the voice, such as identity and social traits. Finally, I will argue that we need to widen out the kinds of vocal behaviours that we explore, if we want to understand the neural underpinnings of the true range of sound-making capabilities of the human voice. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.


Author(s): Ana P. Pinheiro, Andrey Anikin, Tatiana Conde, João Sarzedas, Sinead Chen, ...

The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
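
As a schematic of the reported feature-to-rating relationship for laughter, the sketch below fits an ordinary least-squares model predicting authenticity ratings from pitch, spectral variability and voicing. The data are simulated and the model is far simpler than the Bayesian mixed models used in the study; it only illustrates the direction of the described associations (higher pitch and spectral variability, less voicing).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical laughter clips

# Simulated, standardised acoustic features
pitch = rng.normal(size=n)
spec_var = rng.normal(size=n)
voicing = rng.normal(size=n)

# Simulated authenticity ratings: higher pitch and spectral variability,
# and less voicing, are assumed to predict higher perceived authenticity
rating = 0.6 * pitch + 0.4 * spec_var - 0.3 * voicing + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), pitch, spec_var, voicing])
coefs, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(dict(zip(["intercept", "pitch", "spec_var", "voicing"], coefs.round(2))))
```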


2020
Author(s): Jonathan Gordils, Jeremy Jamieson

Background and Objectives: Social interactions involving personal disclosures are ubiquitous in social life and have important relational implications. A large body of research has documented positive outcomes from fruitful social interactions with amicable individuals, but less is known about how self-disclosing interactions with inimical interaction partners impact individuals. Design and Methods: Participants engaged in an immersive social interaction task with a confederate (thought to be another participant) trained to behave amicably (Fast Friends) or inimically (Fast Foes). Cardiovascular responses were measured during the interaction and behavioral displays coded. Participants also reported on their subjective experiences of the interaction. Results: Participants assigned to interact in the Fast Foes condition reported more negative affect and threat appraisals, displayed more negative behaviors (i.e., agitation and anxiety), and exhibited physiological threat responses (and lower cardiac output in particular) compared to participants assigned to the Fast Friends condition. Conclusions: The novel paradigm demonstrates differential stress and affective outcomes between positive and negative self-disclosure situations across multiple channels, providing a more nuanced understanding of the processes associated with disclosing information about the self in social contexts.


2017, Vol 114 (23), pp. 5982-5987
Author(s): Mark A. Thornton, Diana I. Tamir

Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
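
The "actual rates of emotional transitions" can be summarised as a first-order transition matrix estimated from experience-sampling sequences, and perceivers' accuracy as the correspondence between that matrix and their rated likelihoods. The sketch below uses made-up emotion labels, a made-up sequence and random ratings purely to illustrate the computation; it is not the authors' dataset or analysis.

```python
import numpy as np

emotions = ["happy", "calm", "anxious", "sad"]
idx = {e: i for i, e in enumerate(emotions)}

# Hypothetical experience-sampling sequence for one person over time
sequence = ["happy", "calm", "calm", "anxious", "sad", "anxious", "calm", "happy"]

# Count observed transitions and convert each row to probabilities
counts = np.zeros((len(emotions), len(emotions)))
for a, b in zip(sequence, sequence[1:]):
    counts[idx[a], idx[b]] += 1
transition_probs = counts / counts.sum(axis=1, keepdims=True)

# Hypothetical perceiver ratings of the same transitions (0-1 likelihoods)
rated = np.random.default_rng(1).uniform(size=counts.shape)

# Accuracy: correlation between experienced and rated transition likelihoods
r = np.corrcoef(transition_probs.ravel(), rated.ravel())[0, 1]
print(np.round(transition_probs, 2), f"r = {r:.2f}", sep="\n")
```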


Apidologie, 2021
Author(s): Sylwia Łopuch, Adam Tofilski

Vibro-acoustic communication is used by honey bees in many different social contexts. Our previous research showed that workers interact with their queen outside of the swarming period by means of wing-beating behaviour. Therefore, the aim of this study was to verify the hypothesis that the wing-beating behaviour of workers attending the queen stimulates her to lay eggs. The behaviour of workers and the queen was recorded using a high-speed camera, first in the presence of uncapped brood in the nest and then in its absence. None of the queens performed wing-beating behaviour. The workers attending the queen, on the other hand, demonstrated this behaviour two times per minute, on average, even when uncapped brood was present in the nest. After the combs with uncapped brood were removed, the incidence of wing-beating behaviour increased significantly to an average of four times per minute. The characteristics of wing-beating behaviour did not differ significantly whether uncapped brood was present or absent in the nest. During the 3 days after the combs with uncapped brood were removed, there was no significant increase in the queen's rate of egg laying. Therefore, the results presented here do not convincingly confirm that the wing-beating behaviour of workers affects the queen's rate of egg laying. This negative result may be related to colony disturbance and the longer time required by the queen to increase egg production.


2020, Vol 287 (1941), pp. 20202531
Author(s): Julia Fischer, Franziska Wegdell, Franziska Trede, Federica Dal Pesco, Kurt Hammerschmidt

The extent to which nonhuman primate vocalizations are amenable to modification through experience is relevant for understanding the substrate from which human speech evolved. We examined the vocal behaviour of Guinea baboons, Papio papio, ranging in the Niokolo Koba National Park in Senegal. Guinea baboons live in a multi-level society, with units nested within parties nested within gangs. We investigated whether the acoustic structure of grunts of 27 male baboons of two gangs varied with party/gang membership and genetic relatedness. Males in this species are philopatric, resulting in increased male relatedness within gangs and parties. Grunts of males that were members of the same social levels were more similar than those of males in different social levels (N = 351 dyads for comparison within and between gangs, and N = 169 dyads within and between parties), but the effect sizes were small. Yet, acoustic similarity did not correlate with genetic relatedness, suggesting that higher amounts of social interactions rather than genetic relatedness promote the observed vocal convergence. We consider this convergence a result of sensory–motor integration and suggest this to be an implicit form of vocal learning shared with humans, in contrast to the goal-directed and intentional explicit form of vocal learning unique to human speech acquisition.
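
The dyadic analysis described here compares pairwise acoustic similarity of grunts with pairwise genetic relatedness. A minimal sketch of that comparison under assumptions: each male is reduced to a small acoustic feature vector, similarity is taken as negative Euclidean distance, and the relatedness matrix is random; the study's actual acoustic parameters and permutation statistics differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n_males = 6
features = rng.normal(size=(n_males, 4))          # hypothetical acoustic features per male
relatedness = rng.uniform(0, 0.5, size=(n_males, n_males))
relatedness = (relatedness + relatedness.T) / 2   # symmetric pairwise relatedness

sims, rels = [], []
for i in range(n_males):
    for j in range(i + 1, n_males):               # every dyad once
        sims.append(-np.linalg.norm(features[i] - features[j]))  # higher = more similar
        rels.append(relatedness[i, j])

# Correlation between dyadic acoustic similarity and genetic relatedness
print(f"r = {np.corrcoef(sims, rels)[0, 1]:.2f}")
```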


Author(s): Namrata Pawar, Sonali Chikhale

With the development of wireless communication, the popularity of Android phones and the growth of social networking services, mobile social networking has become an active research topic. Personal mobile devices are ubiquitous and an inseparable part of daily life, having evolved rapidly from simple SMS-capable phones to smartphones that we use to connect, interact and share information with our social circles. Smartphones support traditional two-way messaging such as voice, SMS, multimedia messages, instant messaging and email. Moreover, recent advances in mobile application development frameworks and application stores have encouraged third-party developers to create a huge number of applications that let users interact and share information in novel ways. In this paper, we describe a flexible, service-oriented system architecture that supports social interactions in campus-wide environments over Wi-Fi. On the client side, a mobile middleware collects social contexts such as messaging, group creation and email access. The server backend aggregates these contexts, analyses social connections among users and provides social services that facilitate social interaction. A prototype mobile social networking system is deployed on campus, and several applications built on the proposed architecture demonstrate its effectiveness.
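
As a sketch of the client-to-server flow described above (middleware capturing a social context, backend aggregating it), the snippet below builds a context event and shows how it might be posted as JSON to a hypothetical REST endpoint. The URL, field names and transport details are assumptions for illustration, not the paper's actual interface.

```python
import json
import urllib.request
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SocialContextEvent:
    """One social context captured by the mobile middleware (hypothetical schema)."""
    user_id: str
    kind: str          # e.g. "message", "group_created", "email_accessed"
    peer_id: str
    timestamp: str

def post_event(event: SocialContextEvent, url: str) -> None:
    """Send the event to the server backend as JSON (hypothetical endpoint)."""
    payload = json.dumps(asdict(event)).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            print("server replied:", resp.status)
    except OSError as exc:          # no real server behind this sketch
        print("would POST", payload.decode(), "->", url, f"({exc})")

event = SocialContextEvent(
    user_id="student-42", kind="message", peer_id="student-17",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
post_event(event, "http://campus.example/api/contexts")  # hypothetical URL
```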


2021, Vol 44 (1), pp. 475-493
Author(s): Catherine J. Stoodley, Peter T. Tsai

Social interactions involve processes ranging from face recognition to understanding others’ intentions. To guide appropriate behavior in a given context, social interactions rely on accurately predicting the outcomes of one's actions and the thoughts of others. Because social interactions are inherently dynamic, these predictions must be continuously adapted. The neural correlates of social processing have largely focused on emotion, mentalizing, and reward networks, without integration of systems involved in prediction. The cerebellum forms predictive models to calibrate movements and adapt them to changing situations, and cerebellar predictive modeling is thought to extend to nonmotor behaviors. Primary cerebellar dysfunction can produce social deficits, and atypical cerebellar structure and function are reported in autism, which is characterized by social communication challenges and atypical predictive processing. We examine the evidence that cerebellar-mediated predictions and adaptation play important roles in social processes and argue that disruptions in these processes contribute to autism.

