Spatial and Temporal Organization of Composite Receptive Fields in the Songbird Auditory Forebrain

2021 · Author(s): Nasim Winchester Vahidi

The mechanisms underlying how single auditory neurons and neuron populations encode natural and acoustically complex vocal signals, such as human speech or bird songs, are not well understood. Classical models focus on individual neurons, whose spike rates vary systematically as a function of change in a small number of simple acoustic dimensions. However, neurons in the caudal medial nidopallium (NCM), an auditory forebrain region in songbirds that is analogous to the secondary auditory cortex in mammals, have composite receptive fields (CRFs) that comprise multiple acoustic features tied to both increases and decreases in firing rates. Here, we investigated the anatomical organization and temporal activation patterns of auditory CRFs in European starlings exposed to natural vocal communication signals (songs). We recorded extracellular electrophysiological responses to various bird songs at auditory NCM sites, including both single and multiple neurons, and we then applied a quadratic model to extract large sets of CRF features that were tied to excitatory and suppressive responses at each measurement site. We found that the superset of CRF features yielded spatially and temporally distributed, generalizable representations of a conspecific song. Individual sites responded to acoustically diverse features, and there was no discernible organization of features across anatomically ordered sites. The CRF features at each site yielded broad, temporally distributed responses that spanned the entire duration of many starling songs, which can last for 50 s or more. Based on these results, we estimated that a nearly complete representation of any conspecific song, regardless of length, can be obtained by evaluating populations as small as 100 neurons. We conclude that natural acoustic communication signals drive a distributed yet highly redundant representation across the songbird auditory forebrain, in which adjacent neurons contribute to the encoding of multiple diverse and time-varying spectro-temporal features.
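One common way to parameterize a second-order model of this kind is with a logistic link on a quadratic form of the stimulus; the excitatory and suppressive CRF features can then be read out as eigenvectors of the quadratic kernel. The sketch below illustrates that readout with placeholder parameters. The specific link function, parameter values, and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_freq, n_time = 16, 8            # spectrogram patch: frequency bins x time bins
d = n_freq * n_time               # dimensionality of the stimulus vector s

# Stand-ins for parameters that would normally be fit to spiking data.
a = -1.0                          # bias
h = 0.1 * rng.normal(size=d)      # linear filter
J = rng.normal(size=(d, d))
J = 0.025 * (J + J.T)             # symmetric quadratic kernel

def spike_probability(s):
    """Second-order (quadratic) model: logistic link on a + h.s + s'Js."""
    return 1.0 / (1.0 + np.exp(-(a + h @ s + s @ J @ s)))

# The CRF features are the eigenvectors of J; with this sign convention,
# positive eigenvalues mark features tied to increases in firing (excitatory)
# and negative eigenvalues mark features tied to decreases (suppressive).
eigvals, eigvecs = np.linalg.eigh(J)
excitatory = eigvecs[:, eigvals > 0].T.reshape(-1, n_freq, n_time)
suppressive = eigvecs[:, eigvals < 0].T.reshape(-1, n_freq, n_time)

print(spike_probability(rng.normal(size=d)))
print(f"{len(excitatory)} excitatory, {len(suppressive)} suppressive CRF features")
```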

2009 · Vol 2 · pp. 117906950900200 · Author(s): Jin Kwon Jeong, Liisa A. Tremere, Michael J. Ryave, Victor C. Vuong, Raphael Pinaud

Recent studies on the anatomical and functional organization of GABAergic networks in central auditory circuits of the zebra finch have highlighted the strong impact of inhibitory mechanisms on both the central encoding and processing of acoustic information in a vocal learning species. Most of this work has focused on the caudomedial nidopallium (NCM), a forebrain area postulated to be the songbird analogue of the mammalian auditory association cortex. NCM contains neurons that respond selectively to conspecific songs and is thought to house the auditory memories required for vocal learning and, likely, individual identification. Here we review our recent work on the anatomical distribution of GABAergic cells in NCM, their engagement in response to song, and the roles of inhibitory transmission in the physiology of NCM at rest and during the processing of natural communication signals. GABAergic cells are highly abundant in the songbird auditory forebrain, accounting for nearly half of the overall neuronal population in NCM, and a large fraction of these neurons are activated by song in freely behaving animals. GABAergic synapses provide considerable local, tonic inhibition to NCM neurons at rest and, during sound processing, may limit the spread of excitation into otherwise quiescent parts of the network. Finally, we review our work showing that GABAA-mediated inhibition directly regulates the temporal organization of song-driven responses in awake songbirds and appears to enhance the reliability of auditory encoding in NCM.


1992 · Vol 02 (03) · pp. 451-482 · Author(s): Walter J. Freeman

This article reviews the classical models most widely used by neurobiologists to explain the dynamics of neurons and neuron populations, and by modelers to implement artificial neural networks. Each neuron has input fibers called dendrites that integrate, and an axon that transmits the output. The differing fiber architectures reflect these dissimilar dynamic operations. The basic tools to describe them are the RC model of the membrane, the core conductor model of the fibers, the Hodgkin–Huxley model of the trigger zone, and the modifiable synapse. Populations additionally require description of macroscopic state variables, the types of nonlinearity (most importantly the sigmoid curve and the dynamic range compression at the input to the cortex), and the types and strengths of connections. The properties of these neural masses can be characterized with the tools of nonlinear dynamics. These include descriptions of point, limit cycle, and chaotic attractors for the cerebral cortex, as well as the types and mechanisms of the state transitions between basins of attraction during learning and perception.
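Two of the building blocks named above, the RC membrane and the sigmoid population nonlinearity, are simple enough to sketch directly. The snippet below is an illustrative sketch only; the Euler integration scheme and all parameter values are assumptions, not Freeman's specific formulation.

```python
import numpy as np

def rc_membrane(i_input, dt=0.1, tau=10.0, r=1.0):
    """RC (leaky-integrator) membrane: integrate dv/dt = (-v + r*i(t)) / tau."""
    v = np.zeros(len(i_input))
    for t in range(1, len(i_input)):
        v[t] = v[t - 1] + dt * (-v[t - 1] + r * i_input[t - 1]) / tau
    return v

def sigmoid_output(v, q_max=5.0, v0=1.0):
    """Saturating nonlinearity mapping membrane state to pulse density."""
    return q_max / (1.0 + np.exp(-(v - v0)))

# Step input current: off, on, off.
i_input = np.concatenate([np.zeros(100), np.ones(300), np.zeros(100)])
rate = sigmoid_output(rc_membrane(i_input))
print(rate.max())
```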


2019 · Vol 94 (Suppl. 1-4) · pp. 51-60 · Author(s): Julie E. Elie, Susanne Hoffmann, Jeffery L. Dunning, Melissa J. Coleman, Eric S. Fortune, ...

Acoustic communication signals are typically generated to influence the behavior of conspecific receivers. In songbirds, for instance, such cues are routinely used by males to influence the behavior of females and rival males. There is remarkable diversity in vocalizations across songbird species, and the mechanisms of vocal production have been studied extensively, yet there has been comparatively little emphasis on how the receiver perceives those signals and uses that information to direct subsequent actions. Here, we emphasize the receiver as an active participant in the communication process. The roles of sender and receiver can alternate between individuals, resulting in an emergent feedback loop that governs the behavior of both. We describe three lines of research that are beginning to reveal the neural mechanisms that underlie the reciprocal exchange of information in communication. These lines of research focus on the perception of the repertoire of songbird vocalizations, evaluation of vocalizations in mate choice, and the coordination of duet singing.


2015 · Vol 76 (1) · pp. 47-63 · Author(s): Laura E. Matheson, Herie Sun, Jon T. Sakata

2021 · Vol 15 · Author(s): Tim Sainburg, Timothy Q. Gentner

Recently developed methods in computational neuroethology have enabled increasingly detailed and comprehensive quantification of animal movements and behavioral kinematics. Vocal communication behavior is well poised for application of similar large-scale quantification methods in the service of physiological and ethological studies. This review describes emerging techniques that can be applied to acoustic and vocal communication signals with the goal of enabling study beyond a small number of model species. We review a range of modern computational methods for bioacoustics, signal processing, and brain-behavior mapping. Along with a discussion of recent advances and techniques, we include challenges and broader goals in establishing a framework for the computational neuroethology of vocal communication.
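One recurring step in pipelines of this kind is segmenting a recording into vocal elements from its amplitude envelope. The sketch below illustrates the idea on a synthetic waveform; the threshold, smoothing window, and signal are illustrative assumptions, not a method prescribed by the review.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 22050                                   # sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / sr)

# Synthetic "song": two tone bursts separated by silence, plus low-level noise.
song = 0.01 * rng.standard_normal(len(t))
song[int(0.2 * sr):int(0.6 * sr)] += np.sin(2 * np.pi * 3000 * t[: int(0.4 * sr)])
song[int(1.0 * sr):int(1.5 * sr)] += np.sin(2 * np.pi * 4500 * t[: int(0.5 * sr)])

# Smoothed amplitude envelope: moving average of the rectified signal (10 ms window).
win = int(0.010 * sr)
envelope = np.convolve(np.abs(song), np.ones(win) / win, mode="same")

# Threshold the envelope and report onset/offset times of putative syllables.
above = envelope > 0.1
edges = np.flatnonzero(np.diff(above.astype(int)))
onsets, offsets = edges[::2] / sr, edges[1::2] / sr
for on, off in zip(onsets, offsets):
    print(f"syllable: {on:.3f}-{off:.3f} s")
```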


2010 · Vol 103 (4) · pp. 1785-1797 · Author(s): Jason V. Thompson, Timothy Q. Gentner

Learning typically increases the strength of responses and the number of neurons that respond to training stimuli. Few studies have explored representational plasticity using natural stimuli, however, leaving unknown the changes that accompany learning under more realistic conditions. Here, we examine experience-dependent plasticity in European starlings, a songbird with rich acoustic communication signals tied to robust, natural recognition behaviors. We trained starlings to recognize conspecific songs and recorded the extracellular spiking activity of single neurons in the caudomedial nidopallium (NCM), a secondary auditory forebrain region analogous to mammalian auditory cortex. Training induced a stimulus-specific weakening of the neural responses (lower spike rates) to the learned songs, whereas the population continued to respond robustly to unfamiliar songs. Additional experiments rule out stimulus-specific adaptation and general biases for novel stimuli as explanations of these effects. Instead, the results indicate that associative learning leads to single neuron responses in which both irrelevant and unfamiliar stimuli elicit more robust responses than behaviorally relevant natural stimuli. Detailed analyses of these effects at a finer temporal scale point to changes in the number of motifs eliciting excitatory responses above a neuron's spontaneous discharge rate. These results show a novel form of experience-dependent plasticity in the auditory forebrain that is tied to associative learning and in which the overall strength of responses is inversely related to learned behavioral significance.
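The motif-level analysis described above can be illustrated with a small sketch that counts how many motifs drive a simulated neuron above its spontaneous rate. The spike counts, motif durations, and two-standard-error criterion below are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

spont_rate = 4.0                                    # spikes/s during silence (assumed)
motif_durations = rng.uniform(0.5, 1.5, size=20)    # s, one per motif
# Simulated evoked spike counts across 10 repetitions of each motif.
evoked_counts = rng.poisson(lam=np.outer(motif_durations, np.full(10, 6.0)))

evoked_rates = evoked_counts / motif_durations[:, None]          # spikes/s per trial
mean_rate = evoked_rates.mean(axis=1)
sem_rate = evoked_rates.std(axis=1, ddof=1) / np.sqrt(evoked_rates.shape[1])

# Call a motif "excitatory" if its mean evoked rate exceeds the spontaneous
# rate by more than two standard errors (an assumed criterion).
excitatory = mean_rate > spont_rate + 2 * sem_rate
print(f"{excitatory.sum()} of {len(excitatory)} motifs drive excitatory responses")
```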


2001 · Vol 61 (4) · pp. 805-817 · Author(s): David S. Vicario, Nasir H. Naqvi, Jonathan N. Raksin

Author(s): Mimi L. Phan, Mark M. Gergues, Shafali Mahidadia, Jorge Jimenez-Castillo, David S. Vicario, ...

2015 · Vol 41 (9) · pp. 1180-1194 · Author(s): Laura E. Matheson, Jon T. Sakata

2019 · Author(s): Tim Sainburg, Marvin Thielk, Timothy Q. Gentner

Animals produce vocalizations that range in complexity from a single repeated call to hundreds of unique vocal elements patterned in sequences unfolding over hours. Characterizing complex vocalizations can require considerable effort and a deep intuition about each species’ vocal behavior. Even with a great deal of experience, human characterizations of animal communication can be affected by human perceptual biases. We present here a set of computational methods that center around projecting animal vocalizations into low dimensional latent representational spaces that are directly learned from data. We apply these methods to diverse datasets from over 20 species, including humans, bats, songbirds, mice, cetaceans, and nonhuman primates, enabling high-powered comparative analyses of unbiased acoustic features in the communicative repertoires across species. Latent projections uncover complex features of data in visually intuitive and quantifiable ways. We introduce methods for analyzing vocalizations as both discrete sequences and as continuous latent variables. Each method can be used to disentangle complex spectro-temporal structure and observe long-timescale organization in communication. Finally, we show how systematic sampling from latent representational spaces of vocalizations enables comprehensive investigations of perceptual and neural representations of complex and ecologically relevant acoustic feature spaces.
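A minimal sketch of the central idea is shown below: fixed-size spectrograms of vocal elements are flattened and projected into a 2-D latent space learned directly from the data, here using UMAP as the projection method. The synthetic syllables and all parameter choices are illustrative assumptions rather than the exact pipeline of the paper.

```python
import numpy as np
from scipy.signal import spectrogram
import umap  # pip install umap-learn

sr = 22050
rng = np.random.default_rng(0)

def synthetic_syllable(f_start, f_end, dur=0.2):
    """A frequency sweep standing in for a recorded, segmented syllable."""
    t = np.arange(0, dur, 1.0 / sr)
    freq = np.linspace(f_start, f_end, len(t))
    return np.sin(2 * np.pi * np.cumsum(freq) / sr) + 0.01 * rng.standard_normal(len(t))

# Two "syllable types" with within-type variability.
syllables = (
    [synthetic_syllable(2000 + rng.normal(0, 50), 4000) for _ in range(30)]
    + [synthetic_syllable(6000 + rng.normal(0, 50), 3000) for _ in range(30)]
)

# Fixed-size log-spectrogram for each syllable, flattened into a feature vector.
features = []
for s in syllables:
    _, _, sxx = spectrogram(s, fs=sr, nperseg=256, noverlap=128)
    features.append(np.log(sxx + 1e-10).ravel())
features = np.array(features)

# Learn a 2-D latent projection directly from the data.
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0).fit_transform(features)
print(embedding.shape)  # (60, 2): one 2-D point per syllable
```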

