Population Responses Represent Vocalization Identity, Intensity, and Signal-to-Noise Ratio in Primary Auditory Cortex

2019
Author(s):
Ruiye Ni
David A. Bender
Dennis L. Barbour

Abstract
The ability to process speech signals in challenging listening environments is critical for speech perception. Considerable effort has been devoted to revealing the underlying single-unit encoding mechanisms. However, single-unit responses are typically highly variable, and the population coding mechanism remains to be revealed. In this study, we aimed to examine how a population of neurons encodes behaviorally relevant signals that vary in intensity and signal-to-noise ratio (SNR). We recorded single-unit activity from the primary auditory cortex of awake common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white Gaussian noise (WGN) and vocalization babble (Babble). Pooling all single units, pseudo-population analysis showed that the intra- and inter-trajectory angle evolutions of the population neural responses track vocalization identity and intensity/SNR, respectively. The ability of the trajectory to track vocalization attributes was degraded to different degrees by the different noises. Discrimination performance, evaluated with neural response classifiers, revealed that vocalizations in noise required a finer optimal temporal resolution and a longer time scale of temporal dynamics than vocalizations at multiple different intensities. The ability of population responses to discriminate between different vocalizations was largely retained above the detection threshold.
Significance Statement
How our brain excels at precise acoustic signal encoding in noisy environments is of great interest to scientists. Relatively few studies have tackled this question from the perspective of neural population responses. Population analysis reveals the underlying neural encoding of complex acoustic stimuli from a pool of single units via vector coding.
We suggest that spatial population response vectors are one important way for neurons to integrate multiple attributes of natural acoustic signals, specifically marmoset vocalizations.
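The trajectory-angle analysis described in this abstract can be sketched with a pseudo-population response matrix. This is an illustrative reconstruction under assumed data shapes, not the authors' code; `trajectory_angles` and the toy two-unit trajectories are hypothetical.

```python
import numpy as np

def trajectory_angles(traj_a, traj_b):
    """Angle (degrees) between two population-response trajectories per time bin.

    traj_a, traj_b: arrays of shape (time_bins, n_units); each row is a
    pseudo-population response vector pooled across separately recorded units.
    """
    dots = np.sum(traj_a * traj_b, axis=1)
    norms = np.linalg.norm(traj_a, axis=1) * np.linalg.norm(traj_b, axis=1)
    cosines = np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.degrees(np.arccos(cosines))

# Toy two-unit example: identical vectors -> 0 deg, orthogonal vectors -> 90 deg
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[1.0, 0.0], [1.0, 0.0]])
angles = trajectory_angles(a, b)
```

In the paper's framework, angles computed within one vocalization's trajectory (intra) versus between trajectories of different stimuli (inter) would track identity and intensity/SNR, respectively.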

2009
Vol 102 (6)
pp. 3329-3339
Author(s):
Nima Mesgarani
Stephen V. David
Jonathan B. Fritz
Shihab A. Shamma

Population responses of cortical neurons encode considerable details about sensory stimuli, and the encoded information is likely to change with stimulus context and behavioral conditions. The details of encoding are difficult to discern across large sets of single neuron data because of the complexity of naturally occurring stimulus features and cortical receptive fields. To overcome this problem, we used the method of stimulus reconstruction to study how complex sounds are encoded in primary auditory cortex (AI). This method uses a linear spectro-temporal model to map neural population responses to an estimate of the stimulus spectrogram, thereby enabling a direct comparison between the original stimulus and its reconstruction. By assessing the fidelity of such reconstructions from responses to modulated noise stimuli, we estimated the range over which AI neurons can faithfully encode spectro-temporal features. For stimuli containing statistical regularities (typical of those found in complex natural sounds), we found that knowledge of these regularities substantially improves reconstruction accuracy over reconstructions that do not take advantage of this prior knowledge. Finally, contrasting stimulus reconstructions under different behavioral states showed a novel view of the rapid changes in spectro-temporal response properties induced by attentional and motivational state.
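The stimulus-reconstruction approach can be illustrated as a ridge regression from (already time-lagged) population response features to spectrogram bins. This is a minimal sketch with synthetic data; the function name, ridge parameter, and array shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_reconstruction_filter(responses, spectrogram, ridge=1.0):
    """Linear stimulus-reconstruction filter: responses -> spectrogram estimate.

    responses:   (time, n_features) population responses, time lags pre-stacked.
    spectrogram: (time, n_freqs) target stimulus spectrogram.
    Returns W such that responses @ W approximates the spectrogram.
    """
    R, S = responses, spectrogram
    # Ridge-regularized least squares: W = (R'R + aI)^-1 R'S
    gram = R.T @ R + ridge * np.eye(R.shape[1])
    return np.linalg.solve(gram, R.T @ S)

rng = np.random.default_rng(0)
R = rng.normal(size=(200, 10))        # synthetic population responses
W_true = rng.normal(size=(10, 5))
S = R @ W_true                        # synthetic "spectrogram", linearly encoded
W = fit_reconstruction_filter(R, S, ridge=1e-6)
S_hat = R @ W
# Reconstruction fidelity: correlation between stimulus and its reconstruction
corr = np.corrcoef(S.ravel(), S_hat.ravel())[0, 1]
```

Comparing `corr` across stimulus ensembles or behavioral states is the kind of direct stimulus-vs-reconstruction comparison the method enables.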


Author(s):  
Israel Nelken

Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has a particular prominence. Indeed, the relationships between neural responses to simple stimuli (usually pure tone bursts)—often used to characterize auditory neurons—and complex sounds (in particular natural sounds) may be complex. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an "acoustic biotope" selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds. Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as "auditory objects." Whatever the exact mechanism is, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds.
The generation of such variation may be the main contribution of auditory cortex to the coding of natural sounds.


1995
Vol 73 (1)
pp. 227-245
Author(s):
J. J. Eggermont
G. M. Smith

1. We recorded responses from 136 single units and the corresponding local field potentials (LFPs) from the same electrode at 44 positions in the primary auditory cortex of 25 juvenile, ketamine-anesthetized cats in response to periodic click trains with click repetition rates between 1 and 32 Hz; to Poisson-distributed click trains with an average click rate of 4 Hz; and under spontaneous conditions. The aim of the study was to evaluate the synchrony between LFPs and single-unit responses, to compare their coding of periodic stimuli, and to elucidate mechanisms that limit this periodicity coding in primary auditory cortex. 2. We obtained averaged LFPs either as click-triggered averages, the classical evoked potentials, or as spike-triggered averages. We quantified LFPs by their initial negative peak-to-positive peak amplitude. In addition, we obtained trigger events from negative-going level crossings (at approximately 2 SD below the mean) of the 100-Hz low-pass electrode signal. We analyzed these LFP triggers similarly to single-unit spikes. 3. The average ratio of the LFP amplitude in response to the second click in a train to the LFP amplitude in response to the first click, as a function of click rate, was low-pass with a slight resonance at approximately 10 Hz and, above that frequency, decreased with a slope of approximately 24 dB/octave. We found the 50% point at approximately 16 Hz. In contrast, the LFP amplitude averaged over entire click trains was low-pass with a similar resonance but a high-frequency slope of 12 dB/octave and a 50% point at approximately 12 Hz. 4. The LFP amplitude for click repetition rates between 5 and 11 Hz often showed augmentation, i.e., the amplitude increased in response to the first few clicks in the train and thereafter decreased. This augmentation was paralleled by an increase in the probability of firing in single units simultaneously recorded on the same electrode. 5.
We calculated temporal modulation transfer functions (tMTFs) for single-unit spikes and for LFP triggers. They were typically bandpass with a best modulating frequency of 10 Hz and similar shape for both single-unit spikes and LFP triggers. The tMTF per click, obtained by dividing the tMTF by the number of clicks in the train, was low-pass with a 50% cutoff frequency at approximately Hz, similar to that for the average LFP amplitude. 6. The close similarity of the tMTFs for single-unit spikes and LFP triggers suggests that single-unit tMTFs can be predicted from LFP level crossings. (ABSTRACT TRUNCATED AT 400 WORDS)
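Periodicity coding of this kind is often quantified with vector strength, a standard phase-locking measure; the sketch below uses it as a simplified stand-in for the spike-count-based tMTF in the paper. The perfectly locked toy spike trains are an assumption for illustration.

```python
import numpy as np

def vector_strength(spike_times_s, rate_hz):
    """Phase locking of spikes to a periodic click train: 1 = perfect, ~0 = none."""
    phases = 2 * np.pi * rate_hz * np.asarray(spike_times_s)
    return np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases)))

# Simplified tMTF: phase locking across click repetition rates (1-32 Hz)
rates_hz = [1, 2, 4, 8, 16, 32]
tmtf = []
for rate in rates_hz:
    clicks = np.arange(0.0, 1.0, 1.0 / rate)  # 1 s of clicks at this rate
    spikes = clicks + 0.010                   # one spike per click, 10 ms latency
    tmtf.append(vector_strength(spikes, rate))
```

A real cortical unit would show the bandpass shape described in the abstract (best modulation frequency near 10 Hz); the toy unit here locks perfectly at every rate.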


2003
Vol 89 (6)
pp. 2889-2903
Author(s):
G. Christopher Stecker
Brian J. Mickey
Ewan A. Macpherson
John C. Middlebrooks

We compared the spatial tuning properties of neurons in two fields [primary auditory cortex (A1) and posterior auditory field (PAF)] of cat auditory cortex. Broadband noise bursts of 80-ms duration were presented from loudspeakers throughout 360° in the horizontal plane (azimuth) or 260° in the vertical median plane (elevation). Sound levels varied from 20 to 40 dB above units' thresholds. We recorded neural spike activity simultaneously from 16 sites in field PAF and/or A1 of α-chloralose-anesthetized cats. We assessed spatial sensitivity by examining the dependence of spike count and response latency on stimulus location. In addition, we used an artificial neural network (ANN) to assess the information about stimulus location carried by spike patterns of single units and of ensembles of 2–32 units. The results indicate increased spatial sensitivity, more uniform distributions of preferred locations, and greater tolerance to changes in stimulus intensity among PAF units relative to A1 units. Compared to A1 units, PAF units responded at significantly longer latencies, and latencies varied more strongly with stimulus location. ANN analysis revealed significantly greater information transmission by spike patterns of PAF than A1 units, primarily reflecting the information transmitted by latency variation in PAF. Finally, information rates grew more rapidly with the number of units included in neural ensembles for PAF than A1. The latter finding suggests more accurate population coding of space in PAF, made possible by a more diverse population of neural response types.
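The paper's decoding analysis used an artificial neural network; the sketch below substitutes a simpler nearest-centroid decoder to illustrate the same two points: response latency can carry location information, and pooling units into ensembles improves decoding. All names and simulated numbers are hypothetical, not the paper's data or method.

```python
import numpy as np

rng = np.random.default_rng(1)
azimuths = np.arange(8)  # 8 hypothetical speaker locations

def simulate_unit(trials_per_loc=20, latency_slope=1.0, noise_sd=2.0):
    """Fake unit whose first-spike latency (ms) varies with azimuth, as in PAF."""
    X, y = [], []
    for az in azimuths:
        lat = 20.0 + latency_slope * az + noise_sd * rng.normal(size=trials_per_loc)
        X.append(lat[:, None])
        y.append(np.full(trials_per_loc, az))
    return np.vstack(X), np.concatenate(y)

def nearest_centroid_accuracy(X, y):
    """Decode azimuth by nearest class centroid (stand-in for the paper's ANN)."""
    centroids = np.vstack([X[y == az].mean(axis=0) for az in azimuths])
    pred = np.argmin(np.linalg.norm(X[:, None, :] - centroids[None], axis=2), axis=1)
    return float(np.mean(pred == y))

def ensemble_accuracy(n_units):
    """Concatenate latency features from an ensemble of simulated units per trial."""
    data = [simulate_unit() for _ in range(n_units)]
    X = np.hstack([d[0] for d in data])
    return nearest_centroid_accuracy(X, data[0][1])

acc_single = ensemble_accuracy(1)
acc_ensemble = ensemble_accuracy(16)  # pooling units improves decoding accuracy
```

The growth of accuracy with ensemble size mirrors the paper's finding that information rates grew more rapidly with the number of units in PAF ensembles.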


2021
Author(s):
Diana Amaro
Dardo N. Ferreiro
Benedikt Grothe
Michael Pecka

ABSTRACT
Localizing and identifying sensory objects during active navigation are fundamental brain functions. However, how individual objects are neuronally represented during self-motion is mostly unexplored. Here we show that active localization during unrestricted navigation promotes previously unreported spatial representations in primary auditory cortex. Spatial tuning differs between sources with distinct behavioral outcome associations, revealing a simultaneous population coding of egocentric source locations and angle-independent identification of individual sources during active sensing.


1993
Vol 70 (2)
pp. 492-511
Author(s):
F. K. Samson
J. C. Clarey
P. Barone
T. J. Imig

1. Single-unit recordings were carried out in primary auditory cortex (AI) of barbiturate-anesthetized cats. Neurons sensitive to sound direction in the horizontal plane (azimuth) were identified by their responses to noise bursts, presented in the free field, that varied in azimuth and sound pressure level (SPL). SPLs typically varied between 0 and 80 dB and were presented at each azimuth that was tested. Each azimuth-sensitive neuron responded well to some SPLs at certain azimuths and did not respond well to any SPL at other azimuths. This report describes AI neurons that were sensitive to the azimuth of monaurally presented noise bursts. 2. Unilateral ear plugging was used to test each azimuth-sensitive neuron's response to monaural stimulation. Ear plugs, produced by injecting a plastic ear mold compound into the concha and ear canal, attenuated sound reaching the tympanic membrane by 25-70 dB. Binaural interactions were inferred by comparing responses obtained under binaural (no plug) and monaural (ear plug) conditions. 3. Of the total sample of 131 azimuth-sensitive cells whose responses to ear plugging were studied, 27 were sensitive to the azimuth of monaurally presented noise bursts. We refer to these as monaural directional (MD) cells, and this report describes their properties. The remainder of the sample consisted of cells that either required binaural stimulation for azimuth sensitivity (63/131), because they were insensitive to azimuth under unilateral ear-plug conditions, or responded too unreliably to permit detailed conclusions regarding the effect of ear plugging (41/131). 4. Most (25/27) MD cells received either monaural input (MD-E0) or binaural excitatory/inhibitory input (MD-EI), as inferred from ear plugging. Two MD cells showed other characteristics. The contralateral ear was excitatory for 25/27 MD cells. 5. MD-E0 cells (22%, 6/27) were monaural.
They were unaffected by unilateral ear plugging, showing that they received excitatory input from one ear, and that stimulation of the other ear was without apparent effect. On the other hand, some monaural cells in AI were insensitive to the azimuth of noise bursts, showing that sensitivity to monaural directional cues is not a property of all monaural cells in AI. 6. MD-EI cells (70%, 19/27) exhibited an increase in responsiveness on the side of the plugged ear, showing that they received excitatory drive from one ear and inhibitory drive from the other. MD-EI cells remained azimuth sensitive with the inhibitory ear plugged, showing that they were sensitive to monaural directional cues at the excitatory ear. (ABSTRACT TRUNCATED AT 400 WORDS)


2011
Vol 106 (2)
pp. 1016-1027
Author(s):
Martin Pienkowski
Jos J. Eggermont

The distribution of neuronal characteristic frequencies over the area of primary auditory cortex (AI) roughly reflects the tonotopic organization of the cochlea. However, because the area of AI activated by any given sound frequency increases erratically with sound level, it has generally been proposed that frequency is represented in AI not with a rate-place code but with some more complex, distributed code. Here, on the basis of both spike and local field potential (LFP) recordings in the anesthetized cat, we show that the tonotopic representation in AI is much more level tolerant when mapped with spectrotemporally dense tone pip ensembles rather than with individually presented tone pips. That is, we show that the tuning properties of individual unit and LFP responses are less variable with sound level under dense compared with sparse stimulation, and that the spatial frequency resolution achieved by the AI neural population at moderate stimulus levels (65 dB SPL) is better with densely than with sparsely presented sounds. This implies that nonlinear processing in the central auditory system can compensate (in part) for the level-dependent coding of sound frequency in the cochlea, and suggests that there may be a functional role for the cortical tonotopic map in the representation of complex sounds.
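The idea of estimating a unit's frequency tuning from responses to a dense tone-pip ensemble can be illustrated with spike-triggered averaging (reverse correlation). This is a toy sketch: the stimulus statistics, the perfectly tuned fake unit, and `best_frequency` are assumptions for demonstration, not the authors' analysis pipeline.

```python
import numpy as np

def best_frequency(stim, spikes, freqs):
    """Estimate best frequency by spike-triggered averaging of a tone-pip ensemble.

    stim:   (time_bins, n_freqs) binary matrix, 1 where a pip of that frequency
            is on in that bin.
    spikes: (time_bins,) spike counts per bin.
    freqs:  (n_freqs,) the pip frequencies in Hz.
    """
    # Average stimulus frequency profile preceding/accompanying spikes
    sta = spikes @ stim / max(spikes.sum(), 1)
    return freqs[np.argmax(sta)]

rng = np.random.default_rng(0)
freqs = np.geomspace(1000, 32000, 20)                 # 20 tone-pip frequencies
stim = (rng.random((5000, 20)) < 0.1).astype(float)   # dense random pip ensemble
spikes = stim[:, 7]  # fake unit tuned to channel 7: fires whenever that pip is on
bf = best_frequency(stim, spikes, freqs)
```

Mapping `bf` across recording sites would yield the tonotopic map; the abstract's point is that such maps are more level tolerant when estimated from dense ensembles than from sparse, individually presented pips.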

