Frontal cortex activity underlying the production of diverse vocal signals during social communication in marmoset monkeys

2021 ◽  
Author(s):  
Lingyun Zhao ◽  
Xiaoqin Wang

Vocal communication is essential for social behaviors in humans and many non-human primates. While the frontal cortex has been shown to play a crucial role in human speech production, its role in vocal production in non-human primates has long been questioned. Recent studies have shown activation of single neurons in the monkey frontal cortex during vocal production in relatively isolated environments. However, little is known about how the frontal cortex is engaged in vocal production in an ethologically relevant social context, where different types of vocal signals are produced for various communication purposes. Here we studied single-neuron activity and local field potentials (LFP) in the frontal cortex of marmoset monkeys while the animals engaged in vocal exchanges with conspecifics in a social environment. Marmosets most frequently produced four types of vocalizations with distinct acoustic structures, three of which were typically not produced in isolation. We found that both single-neuron activity and LFP were modulated by the production of each of the four call types. Moreover, the neural modulations in the frontal cortex showed distinct patterns for different call types, suggesting a representation of vocal signal features. In addition, we found that theta-band LFP oscillations were phase-locked to the phrases of twitter calls, indicating coordination of the temporal structure of vocalizations. Our results suggest important functions of the marmoset frontal cortex in supporting the production of diverse vocalizations during vocal communication.
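For readers who want a concrete sense of the phase-locking analysis mentioned above, the sketch below shows one common way to quantify theta-band phase locking of an LFP to call-phrase onsets (band-pass filter, Hilbert phase, phase-locking value). It is an illustrative assumption in Python/SciPy, not the authors' analysis code; the sampling rate and band edges are placeholders.

```python
# Minimal sketch (not the authors' code) of a theta-band phase-locking analysis:
# band-pass filter the LFP, take the Hilbert phase, and compute the phase-locking
# value (PLV) at phrase-onset times. Sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase_locking(lfp, phrase_onsets, fs=1000.0, band=(4.0, 8.0)):
    """lfp: 1-D array of LFP samples; phrase_onsets: sample indices of phrase starts."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    theta = filtfilt(b, a, lfp)                        # theta-band component
    phase = np.angle(hilbert(theta))                   # instantaneous phase
    onset_phases = phase[np.asarray(phrase_onsets, dtype=int)]
    plv = np.abs(np.mean(np.exp(1j * onset_phases)))   # 1 = perfect locking, 0 = none
    return plv
```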

Author(s):  
Theresa Matzinger ◽  
W. Tecumseh Fitch

Voice modulatory cues such as variations in fundamental frequency, duration and pauses are key factors for structuring vocal signals in human speech and vocal communication in other tetrapods. Voice modulation physiology is highly similar in humans and other tetrapods due to shared ancestry and shared functional pressures for efficient communication. This has led to similarly structured vocalizations across humans and other tetrapods. Nonetheless, in their details, structural characteristics may vary across species and languages. Because data concerning voice modulation in non-human tetrapod vocal production and especially perception are relatively scarce compared to human vocal production and perception, this review focuses on voice modulatory cues used for speech segmentation across human languages, highlighting comparative data where available. Cues that are used similarly across many languages may help indicate which cues may result from physiological or basic cognitive constraints, and which cues may be employed more flexibly and are shaped by cultural evolution. This suggests promising candidates for future investigation of cues to structure in non-human tetrapod vocalizations. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.


2014 ◽  
Vol 112 (6) ◽  
pp. 1584-1598 ◽  
Author(s):  
Marino Pagan ◽  
Nicole C. Rust

The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address this challenge, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′).
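As an illustration of the projection step described above, the sketch below (an assumption in Python/NumPy, not the authors' code) computes per-component weights by projecting a neuron's condition-averaged responses onto a predefined orthonormal basis and sums squared weights into a raw, noise-biased modulation measure; the bias-correction step is omitted.

```python
# Minimal sketch of projecting a neuron's responses onto an orthonormal basis and
# summarizing signal modulation as a sum of squared weights (raw, noise-biased form).
import numpy as np

def signal_weights(responses, basis):
    """responses: (n_conditions,) mean firing rates; basis: (n_conditions, n_components)
    with orthonormal columns. Returns one weight per basis component."""
    return basis.T @ responses

def modulation(weights, component_idx):
    """Raw (noise-biased) modulation attributed to a subset of components."""
    return np.sum(np.square(weights[component_idx]))

# Toy usage: 4 task conditions, orthonormal basis from a QR decomposition (assumed)
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((4, 4)))
rates = rng.poisson(10, size=4).astype(float)
w = signal_weights(rates, basis)
print(modulation(w, [1, 2]))   # modulation carried by components 1 and 2
```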


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adam R. Fishbein ◽  
Nora H. Prior ◽  
Jane A. Brown ◽  
Gregory F. Ball ◽  
Robert J. Dooling

Studies of acoustic communication often focus on the categories and units of vocalizations, but subtle variation also occurs in how these signals are uttered. In human speech, it is not only phonemes and words that carry information but also the timbre, intonation, and stress of how speech sounds are delivered (often referred to as “paralinguistic content”). In non-human animals, variation across utterances of vocal signals also carries behaviorally relevant information across taxa. However, the discriminability of these cues has rarely been tested in a psychophysical paradigm. Here, we focus on acoustic communication in the zebra finch (Taeniopygia guttata), a songbird species in which the male produces a single stereotyped motif repeatedly in song bouts. These motif renditions, like the song repetitions of many birds, sound very similar to the casual human listener. In this study, we show that zebra finches can easily discriminate between the renditions, even at the level of single song syllables, much as humans can discriminate renditions of speech sounds. These results support the notion that sensitivity to fine acoustic details may be a primary channel of information in zebra finch song, as well as a shared, foundational property of vocal communication systems across species.


2005 ◽  
Vol 14 (3) ◽  
pp. 126-130 ◽  
Author(s):  
Klaus Zuberbühler

The anatomy of the nonhuman primate vocal tract is not fundamentally different from the human one. Nevertheless, nonhuman primates are remarkably unskillful at controlling vocal production and at combining basic call units into more complex strings. Instead, their vocal behavior is linked to specific psychological states, which are evoked by events in their social or physical environment. Humans are the only primates that have evolved the ability to produce elaborate and willfully controlled vocal signals, although this may have been a fairly recent invention. Despite their expressive limitations, nonhuman primates have demonstrated a surprising degree of cognitive complexity when responding to other individuals' vocalizations, suggesting that, as recipients, they command crucial linguistic abilities as part of primate cognition. Pivotal aspects of language comprehension, particularly the ability to process semantic content, may thus be part of our primate heritage. The strongest evidence currently comes from Old World monkeys, but recent work indicates that these capacities may also be present in our closest relatives, the chimpanzees.


2019 ◽  
Vol 94 (Suppl. 1-4) ◽  
pp. 51-60
Author(s):  
Julie E. Elie ◽  
Susanne Hoffmann ◽  
Jeffery L. Dunning ◽  
Melissa J. Coleman ◽  
Eric S. Fortune ◽  
...  

Acoustic communication signals are typically generated to influence the behavior of conspecific receivers. In songbirds, for instance, such cues are routinely used by males to influence the behavior of females and rival males. There is remarkable diversity in vocalizations across songbird species, and the mechanisms of vocal production have been studied extensively, yet there has been comparatively little emphasis on how the receiver perceives those signals and uses that information to direct subsequent actions. Here, we emphasize the receiver as an active participant in the communication process. The roles of sender and receiver can alternate between individuals, resulting in an emergent feedback loop that governs the behavior of both. We describe three lines of research that are beginning to reveal the neural mechanisms that underlie the reciprocal exchange of information in communication. These lines of research focus on the perception of the repertoire of songbird vocalizations, evaluation of vocalizations in mate choice, and the coordination of duet singing.


2015 ◽  
Vol 114 (2) ◽  
pp. 1158-1171 ◽  
Author(s):  
Cory T. Miller ◽  
A. Wren Thomas ◽  
Samuel U. Nummela ◽  
Lisa A. de la Mothe

The role of primate frontal cortex in vocal communication and its significance in language evolution have a controversial history. While evidence indicates that vocalization processing occurs in ventrolateral prefrontal cortex neurons, vocal-motor activity has been conjectured to be primarily subcortical, suggestive of a neural architecture distinctly different from that of humans. Direct evidence of neural activity during natural vocal communication is limited, as previous studies were performed in chair-restrained animals. Here we recorded the activity of single neurons across multiple regions of prefrontal and premotor cortex while freely moving marmosets engaged in a natural vocal behavior known as antiphonal calling. Our aim was to test whether neurons in marmoset frontal cortex exhibited responses during vocal-signal processing and/or vocal-motor production in the context of active, natural communication. We observed motor-related changes in single-neuron activity during vocal production, but relatively weak sensory responses for vocalization processing during this natural behavior. Vocal-motor responses occurred both prior to and during call production and were typically coupled to the timing of each vocalization pulse. Despite the relatively weak sensory responses, a population classifier was able to distinguish between neural activity that occurred during presentations of vocalization stimuli that elicited an antiphonal response and those that did not. These findings are suggestive of the role that nonhuman primate frontal cortex neurons play in natural communication and provide an important foundation for more explicit tests of the functional contributions of these neocortical areas during vocal behaviors.
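The sketch below gives a minimal, hypothetical example of the kind of population decoding described here: a cross-validated linear classifier asked to separate trials in which a vocalization stimulus elicited an antiphonal call from trials in which it did not, using trial-by-trial population firing-rate vectors. It uses scikit-learn on toy data and is not the authors' classifier.

```python
# Minimal sketch of cross-validated population decoding on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 40
X = rng.standard_normal((n_trials, n_neurons))     # trial x neuron firing rates (toy data)
y = rng.integers(0, 2, size=n_trials)              # 1 = antiphonal response elicited

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean() # chance is ~0.5 for balanced labels
print(f"decoding accuracy: {accuracy:.2f}")
```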


2010 ◽  
Vol 76 (5) ◽  
pp. 709-734
Author(s):  
I. S. DMITRIENKO

We describe the spatio-temporal evolution of a one-dimensional Alfvén resonance disturbance in the presence of various factors of resonance detuning: dispersion and absorption of the Alfvén disturbance, and nonstationarity of the large-scale wave generating the resonant disturbance. Using analytical solutions to the resonance equation, we determine conditions for the formation of qualitatively different spatial and temporal structures of resonant Alfvén disturbances. We also present analytical descriptions of the quasi-stationary and non-stationary spatial structures formed in the resonant layer, and of their evolution over time, for drivers of different types corresponding to large-scale waves localized in the direction of inhomogeneity and to nonlocalized large-scale waves.
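For orientation, a heavily simplified, textbook-style version of such a resonance equation (a sketch only; it omits the dispersion and absorption terms treated here and is not necessarily the authors' exact formulation) models each field line at position $x$ as an oscillator at its local Alfvén frequency $\omega_A(x)$, driven by the large-scale wave $D(x,t)$:

$$\frac{\partial^2 \xi}{\partial t^2} + \omega_A^2(x)\,\xi = D(x,t), \qquad \omega_A(x) = k_\parallel v_A(x).$$

Resonant growth occurs where the driver frequency matches $\omega_A(x)$; dispersion, absorption, and driver nonstationarity then determine which quasi-stationary or non-stationary structures form in the resonant layer.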


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7973
Author(s):  
Shengli Zhang ◽  
Jifei Pan ◽  
Zhenzhong Han ◽  
Linqing Guo

Signal features can be obscured in noisy environments, resulting in low accuracy of radar emitter signal recognition based on traditional methods. To improve the ability to learn features from noisy signals, a new radar emitter signal recognition method based on a one-dimensional (1D) deep residual shrinkage network (DRSN) is proposed, which offers the following advantages: (i) unimportant features are eliminated using the soft thresholding function, with the thresholds set automatically by an attention mechanism; (ii) without any professional knowledge of signal processing or dimension conversion of the data, the 1D DRSN can automatically learn the features characterizing the signal directly from the 1D data and achieve a high recognition rate for noisy signals. The effectiveness of the 1D DRSN was experimentally verified under different types of noise. In addition, comparison with other deep learning methods revealed the superior performance of the DRSN. Finally, the mechanism by which redundant features are eliminated using the soft thresholding function was analyzed.
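As a concrete illustration of the soft-thresholding idea, the sketch below implements the core shrinkage step of a 1D DRSN in PyTorch: a small attention branch sets a per-channel threshold, and features with magnitudes below that threshold are shrunk toward zero. The layer sizes and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the soft-thresholding step in a deep
# residual shrinkage network (DRSN), assuming PyTorch. An attention branch sets a
# per-channel threshold from the average feature magnitude.
import torch
import torch.nn as nn

class ShrinkageBlock1D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Attention branch: per-channel average of |x| -> two-layer MLP -> sigmoid scale
        self.fc = nn.Sequential(
            nn.Linear(channels, channels),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length)
        abs_mean = x.abs().mean(dim=2)          # per-channel average magnitude
        scale = self.fc(abs_mean)               # attention weights in (0, 1)
        tau = (abs_mean * scale).unsqueeze(2)   # per-channel threshold
        # Soft thresholding: shrink small-magnitude (noise-dominated) features to zero
        return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

# Example: apply the block to a batch of noisy 1D feature maps
block = ShrinkageBlock1D(channels=32)
features = torch.randn(8, 32, 1024)
denoised = block(features)
```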


Author(s):  
Roza G. Kamiloğlu ◽  
Disa A. Sauter

The voice is a prime channel of communication in humans and other animals. Voices convey many kinds of information, including physical characteristics like body size and sex, as well as providing cues to the vocalizing individual’s identity and emotional state. Vocalizations are produced by dynamic modifications of the physiological vocal production system. The source-filter theory explains how vocalizations are produced in two stages: (a) the production of a sound source in the larynx, and (b) the filtering of that sound by the vocal tract. This two-stage process largely applies to all primate vocalizations. However, there are some differences between the vocal production apparatus of humans and that of nonhuman primates, such as the lower position of the larynx and the lack of air sacs in humans. Thanks to our flexible vocal apparatus, humans can produce a range of different types of vocalizations, including spoken language, nonverbal vocalizations, whispering, and singing. A comprehensive understanding of vocal communication takes both the production and the perception of vocalizations into account. Internal processes are expressed in the form of specific acoustic patterns in the producer’s voice. For that information to be communicated, those acoustic patterns must be registered by listeners via auditory perception mechanisms. Both production and perception of vocalizations are affected by psychobiological mechanisms as well as sociocultural factors. Furthermore, vocal production and perception can be impaired by a range of different disorders. Vocal production and hearing disorders, as well as mental disorders including autism spectrum disorder, depression, and schizophrenia, affect vocal communication.
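To make the two stages concrete, the sketch below synthesizes a crude vowel-like sound following the source-filter idea: a periodic impulse-train “source” (stage a) is passed through a cascade of formant resonators standing in for the vocal-tract “filter” (stage b). It is a minimal illustration in Python/SciPy; the sample rate, fundamental frequency, and formant values are assumed, not drawn from the reviewed work.

```python
# Minimal source-filter synthesis sketch: glottal-like impulse train -> formant filters.
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sample rate (Hz), assumed
f0 = 120                        # fundamental frequency of the source (Hz), assumed
duration = 0.5                  # seconds

# Stage (a): sound source -- an impulse train approximating glottal pulses
n = int(fs * duration)
source = np.zeros(n)
source[::fs // f0] = 1.0

# Stage (b): vocal-tract filter -- cascade of second-order resonators at formants
waveform = source
for formant_hz, bandwidth_hz in [(700, 80), (1200, 90), (2600, 120)]:  # /a/-like, assumed
    r = np.exp(-np.pi * bandwidth_hz / fs)
    theta = 2 * np.pi * formant_hz / fs
    b = [1 - r]                                  # simple gain normalization
    a = [1, -2 * r * np.cos(theta), r ** 2]      # resonator poles
    waveform = lfilter(b, a, waveform)

# "waveform" now carries both source cues (pitch) and filter cues (formants)
```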


Neuron ◽  
2019 ◽  
Vol 101 (1) ◽  
pp. 165-177.e5 ◽  
Author(s):  
Zhongzheng Fu ◽  
Daw-An J. Wu ◽  
Ian Ross ◽  
Jeffrey M. Chung ◽  
Adam N. Mamelak ◽  
...  
