Encoding of ultrasonic vocalizations in the auditory cortex

2013
Vol 109 (7)
pp. 1912-1927
Author(s):
Isaac M. Carruthers
Ryan G. Natan
Maria N. Geffen

One of the central tasks of the mammalian auditory system is to represent information about acoustic communicative signals, such as vocalizations. However, the neuronal computations underlying vocalization encoding in the central auditory system are poorly understood. To learn how the rat auditory cortex encodes information about conspecific vocalizations, we presented a library of natural and temporally transformed ultrasonic vocalizations (USVs) to awake rats while recording neural activity in the primary auditory cortex (A1) with chronically implanted multielectrode probes. Many neurons reliably and selectively responded to USVs. The response strength to USVs correlated strongly with the response strength to frequency-modulated (FM) sweeps and with the FM rate tuning index, suggesting that related mechanisms generate the responses to USVs and to FM sweeps. The response strength further correlated with the neuron's best frequency, with the strongest responses produced by neurons whose best frequency was in the ultrasonic frequency range. For the responses of each neuron to each stimulus group, we fitted a novel predictive model: a reduced generalized linear-nonlinear model (GLNM) that takes the frequency modulation and single-tone amplitude as its only two input parameters. The GLNM accurately predicted neuronal responses to previously unheard USVs, and its prediction accuracy was higher than that of an analogous spectrogram-based linear-nonlinear model. The response strength of neurons and the model prediction accuracy were higher for original than for temporally transformed vocalizations. These results indicate that A1 processes original USVs differently from transformed USVs, reflecting a preference for the temporal statistics of the original vocalizations.
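
As a rough, hypothetical illustration of this class of reduced encoding model (not the authors' implementation), the sketch below fits a Poisson generalized linear model with an exponential nonlinearity to toy data, using only lagged frequency-modulation and amplitude traces as predictors. All function and variable names, lag counts, and data are invented for illustration.

```python
# Hypothetical sketch of a two-parameter linear-nonlinear encoding model:
# predict binned spike counts from lagged FM and amplitude traces only.
import numpy as np
from sklearn.linear_model import PoissonRegressor  # GLM with log link (exponential nonlinearity)

def lagged_design(fm, amp, n_lags=20):
    """Stack time-lagged copies of the FM and amplitude traces,
    assumed to be sampled on the same time grid as the spike counts."""
    T = len(fm)
    X = np.zeros((T, 2 * n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = fm[:T - lag]
        X[lag:, n_lags + lag] = amp[:T - lag]
    return X

# Toy data standing in for a vocalization stimulus and recorded spike counts.
rng = np.random.default_rng(0)
T = 5000
fm = rng.standard_normal(T)           # FM trace (e.g., kHz/ms), z-scored
amp = np.abs(rng.standard_normal(T))  # envelope amplitude
spikes = rng.poisson(0.1, size=T)     # binned spike counts

X = lagged_design(fm, amp)
model = PoissonRegressor(alpha=1.0).fit(X, spikes)  # ridge-penalized Poisson GLM
predicted_rate = model.predict(X)                   # time-varying firing-rate prediction
```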

eLife
2014
Vol 3
Author(s):
Rajnish P Rao
Falk Mielke
Evgeny Bobrov
Michael Brecht

Social interactions involve multi-modal signaling. Here, we study interacting rats to investigate audio-haptic coordination and multisensory integration in the auditory cortex. We find that facial touch is associated with an increased rate of ultrasonic vocalizations, which are emitted at the whisking rate (∼8 Hz) and preferentially initiated in the retraction phase of whisking. In a small subset of auditory cortex regular-spiking neurons, we observed excitatory and heterogeneous responses to ultrasonic vocalizations. Most fast-spiking neurons showed a stronger response to calls. Interestingly, facial touch-induced inhibition in the primary auditory cortex and off-responses after termination of touch were twofold stronger than responses to vocalizations. Further, touch modulated the responsiveness of auditory cortex neurons to ultrasonic vocalizations. In summary, facial touch during social interactions involves precisely orchestrated calling-whisking patterns. While ultrasonic vocalizations elicited a rather weak population response from the regular spikers, the modulation of neuronal responses by facial touch was remarkably strong.
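
A hedged analysis sketch of the kind of phase question raised above (simulated signals and an assumed phase convention, not the study's analysis code): the instantaneous whisking phase can be extracted with the Hilbert transform and read out at each vocalization onset to ask whether calls cluster in the retraction half of the cycle.

```python
# Toy example: whisking phase at vocalization onsets.
import numpy as np
from scipy.signal import hilbert

fs = 500.0                                 # assumed whisker-tracking sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
whisker_angle = np.sin(2 * np.pi * 8 * t)  # ~8 Hz whisking, simulated

phase = np.angle(hilbert(whisker_angle - whisker_angle.mean()))
call_onsets_s = np.array([1.03, 2.41, 5.87])      # toy vocalization onset times (s)
onset_phase = phase[(call_onsets_s * fs).astype(int)]

# With this toy convention (protraction = increasing angle), phases in (0, pi)
# correspond to the falling, retraction-like half of the whisking cycle.
in_retraction = (onset_phase > 0) & (onset_phase < np.pi)
print(np.mean(in_retraction))  # fraction of calls initiated during retraction
```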


2008
Vol 100 (3)
pp. 1622-1634
Author(s):
Ling Qin
JingYu Wang
Yu Sato

Previous studies in anesthetized animals reported that the primary auditory cortex (A1) showed homogeneous phasic responses to FM tones, namely a transient response to a particular instantaneous frequency when an FM sweep traversed a neuron's tone-evoked receptive field (TRF). Here, in awake cats, we report that A1 cells exhibit heterogeneous FM responses consisting of three patterns. The first is continuous firing when a slow FM sweep traverses the receptive field of a cell with a sustained tonal response; the duration and amplitude of this FM response decrease with increasing sweep speed. The second pattern is transient firing corresponding to the cell's phasic tonal response; it could be evoked only by a fast FM sweep through the cell's TRF, suggesting a preference for fast FM. The third pattern is associated with the off response to pure tones and is composed of several discrete response peaks during slow FM sweeps. These peaks were not predictable from the cell's tonal response but reliably reflected the times at which the sweep crossed specific frequencies. The cells in our A1 sample often exhibited a complex response pattern combining two or three of the basic patterns above, resulting in a heterogeneous response population. This diversity of FM responses suggests that A1 uses multiple mechanisms to fully represent the whole range of FM parameters, including frequency extent, sweep speed, and direction.
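
A minimal illustrative calculation (parameter values assumed, not taken from the study) of why discrete response peaks during slow sweeps can time-stamp specific frequencies: for a linear FM sweep, the moment the sweep crosses a given frequency is fixed by the sweep speed.

```python
# Time at which a linear FM sweep reaches a given frequency (e.g., a cell's best frequency).
import numpy as np

def crossing_time(f_start_khz, f_end_khz, sweep_speed_khz_per_s, f_cross_khz):
    """Return the time (s) at which a linear sweep from f_start to f_end
    crosses f_cross, given the sweep speed in kHz/s."""
    direction = np.sign(f_end_khz - f_start_khz)
    return direction * (f_cross_khz - f_start_khz) / sweep_speed_khz_per_s

# A slow upward sweep from 2 to 32 kHz at 10 kHz/s crosses an 8-kHz frequency
# at 0.6 s; a fast sweep at 100 kHz/s crosses it at 0.06 s.
for speed in (10.0, 100.0):
    print(speed, crossing_time(2.0, 32.0, speed, 8.0))
```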


2015
Vol 113 (2)
pp. 475-486
Author(s):
Melanie A. Kok
Daniel Stolzberg
Trecia A. Brown
Stephen G. Lomber

Current models of hierarchical processing in auditory cortex have been based principally on anatomical connectivity, while functional interactions between individual regions have remained largely unexplored. Previous cortical deactivation studies in the cat have addressed functional reciprocal connectivity between primary auditory cortex (A1) and other hierarchically lower-level fields. The present study sought to assess the functional contribution of inputs along multiple stages of the current hierarchical model to a higher-order area, the dorsal zone (DZ) of auditory cortex, in the anaesthetized cat. Cryoloops were placed over A1 and the posterior auditory field (PAF). Multiunit neuronal responses to noise-burst and tonal stimuli were recorded in DZ during cortical deactivation of each field individually and in concert. Deactivation of A1 suppressed peak neuronal responses in DZ regardless of stimulus and resulted in increased minimum thresholds and reduced absolute bandwidths of tone-frequency receptive fields in DZ. PAF deactivation had less robust effects on DZ firing rates and receptive fields than A1 deactivation, and the effects of combined A1/PAF cooling were largely driven by A1 deactivation at the population level. These results provide physiological support for the current anatomically based model of both serial and parallel processing schemes in auditory cortical hierarchical organization.


2015
Vol 282 (1811)
pp. 20151203
Author(s):
Gregory S. Berns
Peter F. Cook
Sean Foxley
Saad Jbabdi
Karla L. Miller
...  

The brains of odontocetes (toothed whales) look grossly different from those of their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by the expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory (A1) cortex, we found distinct regions of the thalamus connected to V1 and A1. But in addition to the suprasylvian A1, we report here, for the first time, that auditory cortex also exists in the temporal lobe, in a region near cetacean A2 and possibly analogous to the primary auditory cortex of related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens for answering basic questions in comparative neurobiology in a way that has not previously been possible, and they show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.


2013
Vol 110 (5)
pp. 1087-1096
Author(s):
Heesoo Kim
Shaowen Bao

Cortical sensory representation is highly adaptive to the environment, and prevalent or behaviorally important stimuli are often overrepresented. One class of such stimuli is species-specific vocalizations. Rats vocalize in the ultrasonic range >30 kHz, but cortical representation of this frequency range has not been systematically examined. We recorded in vivo cortical electrophysiological responses to ultrasonic pure-tone pips, natural ultrasonic vocalizations, and pitch-shifted vocalizations to assess how rats represent this ethologically relevant frequency range. We find that nearly 40% of the primary auditory cortex (AI) represents an octave-wide band of ultrasonic vocalization frequencies (UVFs; 32–64 kHz) compared with <20% for other octave bands <32 kHz. These UVF neurons respond preferentially and reliably to ultrasonic vocalizations. The UVF overrepresentation matures in the cortex before it develops in the central nucleus of the inferior colliculus, suggesting a cortical origin and corticofugal influences. Furthermore, the development of cortical UVF overrepresentation depends on early acoustic experience. These results indicate that natural sensory experience causes large-scale cortical map reorganization and improves representations of species-specific vocalizations.


2002
Vol 88 (5)
pp. 2684-2699
Author(s):
Dennis L. Barbour
Xiaoqin Wang

Natural sounds often contain energy over a broad spectral range and consequently overlap in frequency when they occur simultaneously; however, such sounds under normal circumstances can be distinguished perceptually (e.g., the cocktail party effect). Sound components arising from different sources have distinct (i.e., incoherent) modulations, and incoherence appears to be one important cue used by the auditory system to segregate sounds into separately perceived acoustic objects. Here we show that, in the primary auditory cortex of awake marmoset monkeys, many neurons responsive to amplitude- or frequency-modulated tones at a particular carrier frequency [the characteristic frequency (CF)] also demonstrate sensitivity to the relative modulation phase between two otherwise identically modulated tones: one at CF and one at a different carrier frequency. Changes in relative modulation phase reflect alterations in temporal coherence between the two tones, and the most common neuronal response was found to be a maximum of suppression for the coherent condition. Coherence sensitivity was generally found in a narrow frequency range in the inhibitory portions of the frequency response areas (FRA), indicating that only some off-CF neuronal inputs into these cortical neurons interact with on-CF inputs on the same time scales. Over the population of neurons studied, carrier frequencies showing coherence sensitivity were found to coincide with the carrier frequencies of inhibition, implying that inhibitory inputs create the effect. The lack of strong coherence-induced facilitation also supports this interpretation. Coherence sensitivity was found to be greatest for modulation frequencies of 16–128 Hz, which is higher than the phase-locking capability of most cortical neurons, implying that subcortical neurons could play a role in the phenomenon. Collectively, these results reveal that auditory cortical neurons receive some off-CF inputs temporally matched and some temporally unmatched to the on-CF input(s) and respond in a fashion that could be utilized by the auditory system to segregate natural sounds containing similar spectral components (such as vocalizations from multiple conspecifics) based on stimulus coherence.
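
A hypothetical stimulus sketch (all parameter values are illustrative, not those of the study) of the manipulation described above: two identically amplitude-modulated tones, one at the characteristic frequency (CF) and one at an off-CF carrier, whose temporal coherence is set by the relative modulation phase. A phase of 0 yields coherent envelopes; pi yields anti-phase, maximally incoherent envelopes.

```python
# Generate a CF tone and an off-CF tone with a controllable relative modulation phase.
import numpy as np

fs = 96_000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # 500-ms stimulus
f_cf, f_off = 8_000.0, 11_000.0  # CF and off-CF carrier frequencies (Hz), assumed
f_mod = 64.0                     # modulation frequency (Hz), within the 16-128 Hz range
rel_phase = np.pi                # relative modulation phase (radians): pi = incoherent

env_cf = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))
env_off = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t + rel_phase))
stimulus = (env_cf * np.sin(2 * np.pi * f_cf * t)
            + env_off * np.sin(2 * np.pi * f_off * t))
```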


2021
Author(s):
Swapna Agarwalla
Sharba Bandyopadhyay

Syllable sequences in male mouse ultrasonic vocalizations (USVs), or songs, contain structure that can be quantified through predictability, as in birdsong and aspects of speech. The apparent innateness of USVs and their lack of learnability have discounted mouse USVs as a model of speech-like social communication and its deficits. Informative contextual natural sequences (SN) were extracted theoretically and were preferred by female mice. Primary auditory cortex (A1) supragranular neurons show differential selectivity to the same syllables presented in SN and in random sequences (SR). Excitatory neurons (EXNs) in females showed increases in selectivity to whole SNs over SRs that depended on the extent of social exposure to a male, while syllable selectivity remained unchanged. Thus, single neurons in mouse A1 adaptively represent the entire order of acoustic units without altering the selectivity of individual units, a capacity fundamental to speech perception. Additionally, the observed plasticity was replicated by silencing somatostatin-positive neurons, which had plastic effects opposite to those of EXNs, pointing to possible pathways involved in the perception of sound sequences.
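
A hedged sketch (toy syllable labels, not the study's data or metric): one simple way to quantify "structure through predictability" in a syllable sequence is the conditional entropy of the next syllable given the current one, estimated from first-order transition counts; lower conditional entropy means a more predictable, more structured sequence.

```python
# Estimate first-order sequence predictability as conditional entropy (bits).
import numpy as np
from collections import Counter

def conditional_entropy_bits(seq):
    """H(next | current) estimated from adjacent-pair counts in a symbol sequence."""
    pair_counts = Counter(zip(seq[:-1], seq[1:]))
    ctx_counts = Counter(seq[:-1])
    total = len(seq) - 1
    h = 0.0
    for (a, b), n_ab in pair_counts.items():
        p_ab = n_ab / total            # joint probability of the pair
        p_b_given_a = n_ab / ctx_counts[a]
        h -= p_ab * np.log2(p_b_given_a)
    return h

structured = list("AB" * 100)                       # highly predictable ordering
rng = np.random.default_rng(1)
shuffled = list(rng.choice(list("AB"), size=200))   # order carries little information
print(conditional_entropy_bits(structured))         # 0 bits: fully predictable
print(conditional_entropy_bits(shuffled))           # close to 1 bit: unpredictable
```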


Author(s):  
Israel Nelken

Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has a particular prominence. Indeed, the relationships between neural responses to simple stimuli (usually pure tone bursts)—often used to characterize auditory neurons—and complex sounds (in particular natural sounds) may be complex. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an “acoustic biotope” selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds. Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as “auditory objects.” Whatever the exact mechanism is, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds. The generation of such variation may be the main contribution of auditory cortex to the coding of natural sounds.


1993
Vol 69 (2)
pp. 449-461
Author(s):
M. N. Semple
L. M. Kitzes

1. Single-neuron responses were recorded in high-frequency regions of primary auditory cortex (AI) of anesthetized cats. Best-frequency tone pips were presented to each ear independently via sealed stimulus delivery systems, and the sound pressure level (SPL) at each ear was independently manipulated. Each neuron was studied with many dichotic combinations of SPL, chosen to incorporate a broad range of the two synthetic interaural level variables, interaural level difference (ILD) and average binaural level (ABL). This paper illustrates the common forms of binaural SPL selectivity observed in a sample of 204 single neurons located in AI. 2. Most neurons (>90%) were jointly influenced by ILD and ABL. A small proportion of bilaterally excitable (EE) neurons responded to ABL rather independently of ILD. Only one neuron was determined to respond to ILD independently of ABL. 3. Nonmonotonic selectivity for one or both of the binaural level cues was evident in >60% of our sample. Within the most effective range of ILD values, response strength was usually related nonmonotonically both to ILD and to ABL. We have described units exhibiting this kind of dual nonmonotonic selectivity for the two binaural variables as being influenced by a Two-Way Intensity Network (TWIN). 4. Each of the response forms identified in an earlier study of the gerbil inferior colliculus was found in this study of cat auditory cortex. However, the classes were evident in markedly different proportions. In particular, TWIN responses alone accounted for 36.2% of the sample, nearly four times the proportion found in the inferior colliculus in a previous study. 5. Units with similar binaural responses do not necessarily have similar monaural properties. For example, the typically nonmonotonic relation between response strength and ABL was often observed in the absence of a monaurally demonstrable nonmonotonicity. There is no simple relation between a neuron's classification according to the sign of monaural influence and its response to ILD and ABL. In particular, EE neurons exhibited remarkably diverse binaural properties. 6. Since the responses of nearly all AI neurons are influenced jointly by ABL and ILD, we contend that single neurons in primary auditory cortex are not specifically tuned to either cue. ILD and ABL are mathematical expressions relating the SPLs at the two ears to each other (as the difference and the average, respectively), and any such combination is expressed most simply as a particular combination of SPL at each ear. (ABSTRACT TRUNCATED AT 400 WORDS)
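
A minimal worked example of the two synthetic binaural variables as defined above (illustrative SPL values): ILD is the difference of the SPLs at the two ears and ABL is their average, so every (ILD, ABL) pair maps back to a unique per-ear SPL combination.

```python
# Compute ILD and ABL from dichotic SPLs and invert the mapping.
import numpy as np

spl_left = np.array([30.0, 50.0, 70.0])   # dB SPL at one ear (illustrative)
spl_right = np.array([50.0, 50.0, 50.0])  # dB SPL at the other ear (illustrative)

ild = spl_left - spl_right        # interaural level difference: -20, 0, +20 dB
abl = (spl_left + spl_right) / 2  # average binaural level:       40, 50,  60 dB

# The per-ear SPLs are recovered from any (ILD, ABL) combination:
assert np.allclose(abl + ild / 2, spl_left)
assert np.allclose(abl - ild / 2, spl_right)
```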

