Sound frequency representation in primary auditory cortex is level tolerant for moderately loud, complex sounds

2011, Vol 106 (2), pp. 1016-1027. Author(s): Martin Pienkowski, Jos J. Eggermont

The distribution of neuronal characteristic frequencies over the area of primary auditory cortex (AI) roughly reflects the tonotopic organization of the cochlea. However, because the area of AI activated by any given sound frequency increases erratically with sound level, it has generally been proposed that frequency is represented in AI not with a rate-place code but with some more complex, distributed code. Here, on the basis of both spike and local field potential (LFP) recordings in the anesthetized cat, we show that the tonotopic representation in AI is much more level tolerant when mapped with spectrotemporally dense tone pip ensembles rather than with individually presented tone pips. That is, we show that the tuning properties of individual unit and LFP responses are less variable with sound level under dense compared with sparse stimulation, and that the spatial frequency resolution achieved by the AI neural population at moderate stimulus levels (65 dB SPL) is better with densely than with sparsely presented sounds. This implies that nonlinear processing in the central auditory system can compensate (in part) for the level-dependent coding of sound frequency in the cochlea, and suggests that there may be a functional role for the cortical tonotopic map in the representation of complex sounds.
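
The central measurement here is how much a unit's frequency tuning shifts with sound level under dense versus sparse stimulation. A minimal sketch on synthetic tuning curves, assuming a simple best-frequency-drift metric in octaves (not the authors' exact analysis), illustrates the comparison:

```python
# Sketch (synthetic data, not the authors' recordings): quantify level
# tolerance as the drift of a unit's best frequency (BF) across sound levels.
# Smaller drift = more level-tolerant frequency tuning.
import numpy as np

rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(1000), np.log10(32000), 41)   # test frequencies (Hz)
levels = np.array([25, 45, 65])                            # sound levels (dB SPL)

def best_frequency(rate_curve, freqs):
    """Best frequency = frequency eliciting the maximal firing rate."""
    return freqs[np.argmax(rate_curve)]

def bf_drift_octaves(tuning_matrix, freqs):
    """Spread of best frequency across sound levels, in octaves."""
    bfs = np.array([best_frequency(row, freqs) for row in tuning_matrix])
    return np.log2(bfs.max() / bfs.min())

def make_tuning(cf, shift_per_level, bw_oct=0.5):
    """Toy tuning curves (levels x freqs): a Gaussian on log-frequency whose
    peak drifts with level by `shift_per_level` octaves per level step."""
    log_f = np.log2(freqs / cf)
    shifts = shift_per_level * np.arange(len(levels))
    return np.stack([np.exp(-0.5 * ((log_f - s) / bw_oct) ** 2)
                     + 0.05 * rng.random(freqs.size) for s in shifts])

sparse_like = make_tuning(cf=8000, shift_per_level=0.30)   # tuning drifts with level
dense_like = make_tuning(cf=8000, shift_per_level=0.05)    # nearly level tolerant

print("BF drift (sparse-like):", round(bf_drift_octaves(sparse_like, freqs), 2), "octaves")
print("BF drift (dense-like): ", round(bf_drift_octaves(dense_like, freqs), 2), "octaves")
```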

2009, Vol 102 (6), pp. 3329-3339. Author(s): Nima Mesgarani, Stephen V. David, Jonathan B. Fritz, Shihab A. Shamma

Population responses of cortical neurons encode considerable details about sensory stimuli, and the encoded information is likely to change with stimulus context and behavioral conditions. The details of encoding are difficult to discern across large sets of single neuron data because of the complexity of naturally occurring stimulus features and cortical receptive fields. To overcome this problem, we used the method of stimulus reconstruction to study how complex sounds are encoded in primary auditory cortex (AI). This method uses a linear spectro-temporal model to map neural population responses to an estimate of the stimulus spectrogram, thereby enabling a direct comparison between the original stimulus and its reconstruction. By assessing the fidelity of such reconstructions from responses to modulated noise stimuli, we estimated the range over which AI neurons can faithfully encode spectro-temporal features. For stimuli containing statistical regularities (typical of those found in complex natural sounds), we found that knowledge of these regularities substantially improves reconstruction accuracy over reconstructions that do not take advantage of this prior knowledge. Finally, contrasting stimulus reconstructions under different behavioral states showed a novel view of the rapid changes in spectro-temporal response properties induced by attentional and motivational state.
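
The reconstruction method maps population responses back to an estimate of the stimulus spectrogram with a linear spectro-temporal model. A minimal sketch using ridge regression on synthetic data illustrates the idea; the lag count, penalty, and toy response model are assumptions rather than the authors' parameters:

```python
# Sketch (not the authors' pipeline): linear stimulus reconstruction.
# A ridge regression maps time-lagged population responses R to each channel
# of the stimulus spectrogram S; fidelity is the correlation between S and
# its estimate. All data are synthetic.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)
T, n_neurons, n_freq, n_lags = 2000, 40, 32, 10

S = rng.standard_normal((T, n_freq))                       # stimulus spectrogram (time x freq)
W = rng.standard_normal((n_freq, n_neurons)) / n_freq      # toy encoding weights
R = S @ W + 0.5 * rng.standard_normal((T, n_neurons))      # toy population responses

def lagged(R, n_lags):
    """Stack responses at lags 0..n_lags-1 into one design matrix."""
    cols = [np.roll(R, lag, axis=0) for lag in range(n_lags)]
    X = np.concatenate(cols, axis=1)
    X[:n_lags] = 0.0                                       # drop wrapped-around rows
    return X

X = lagged(R, n_lags)
lam = 10.0                                                 # ridge penalty (assumed)
G = solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ S)     # reconstruction filters
S_hat = X @ G                                              # reconstructed spectrogram

# Fidelity: mean correlation between actual and reconstructed frequency channels.
corrs = [np.corrcoef(S[:, f], S_hat[:, f])[0, 1] for f in range(n_freq)]
print("mean reconstruction correlation:", round(float(np.mean(corrs)), 3))
```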


Author(s): Israel Nelken

Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has a particular prominence. Indeed, the relationship between neural responses to the simple stimuli (usually pure tone bursts) often used to characterize auditory neurons and responses to complex sounds, in particular natural sounds, can itself be complex. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an "acoustic biotope" selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds. Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as "auditory objects." Whatever the exact mechanism is, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds. The generation of such variation may be the main contribution of auditory cortex to the coding of natural sounds.


2015, Vol 112 (52), pp. 16036-16041. Author(s): Federico De Martino, Michelle Moerel, Kamil Ugurbil, Rainer Goebel, Essa Yacoub, et al.

Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that—in this highly columnar cortex—task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
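
The key claim is that a column's frequency preference stays the same from deep to superficial layers. A toy sketch, with synthetic responses and an assumed agreement criterion, shows one way to summarize such depth stability:

```python
# Sketch (illustrative only): for each simulated cortical "column", a preferred
# frequency is estimated at deep, middle, and superficial depths; stability is
# the fraction of columns whose preference agrees across all three depths.
import numpy as np

rng = np.random.default_rng(2)
n_columns, n_freq_bins = 200, 8
depths = ("deep", "middle", "superficial")

# Toy responses: each column has one preferred frequency bin, shared across
# depths, plus depth-specific noise that can occasionally flip the estimate.
pref = rng.integers(0, n_freq_bins, size=n_columns)
responses = {
    d: np.eye(n_freq_bins)[pref] + 1.5 * rng.random((n_columns, n_freq_bins))
    for d in depths
}

best = {d: responses[d].argmax(axis=1) for d in depths}
stable = np.mean((best["deep"] == best["middle"]) & (best["middle"] == best["superficial"]))
print(f"fraction of columns with depth-stable frequency preference: {stable:.2f}")
```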


NeuroImage, 2005, Vol 28 (1), pp. 49-58. Author(s): Christoph Mulert, Lorenz Jäger, Sebastian Propp, Susanne Karch, Sylvère Störmann, et al.

2011, Vol 106 (2), pp. 849-859. Author(s): Edward L. Bartlett, Srivatsun Sadagopan, Xiaoqin Wang

The frequency resolution of neurons throughout the ascending auditory pathway is important for understanding how sounds are processed. In many animal studies, frequency tuning widths are about 1/5th of an octave in auditory nerve fibers and much wider in auditory cortex neurons. Psychophysical studies show that humans are capable of discriminating far finer frequency differences. A recent study suggested that this is perhaps attributable to fine frequency tuning of neurons in human auditory cortex (Bitterman Y, Mukamel R, Malach R, Fried I, Nelken I. Nature 451: 197–201, 2008). We investigated whether such fine frequency tuning was restricted to human auditory cortex by examining the frequency tuning width in the awake common marmoset monkey. We show that 27% of neurons in the primary auditory cortex exhibit frequency tuning that is finer than the typical frequency tuning of the auditory nerve and substantially finer than previously reported cortical data obtained from anesthetized animals. Fine frequency tuning is also present in 76% of neurons of the auditory thalamus in awake marmosets. Frequency tuning was narrower during the sustained response than during the onset response in auditory cortex neurons but not in thalamic neurons, suggesting that thalamocortical or intracortical dynamics shape time-dependent frequency tuning in cortex. These findings challenge the notion that fine frequency tuning is unique to human auditory cortex and is a de novo cortical property, and suggest that the broader tuning observed in previous animal studies may arise from the use of anesthesia during physiological recordings or from species differences.
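
The comparison hinges on expressing tuning width in octaves and counting units tuned more finely than the roughly 1/5-octave auditory-nerve reference. A sketch on toy Gaussian tuning curves, assuming a half-maximum bandwidth criterion (not necessarily the authors'), illustrates the calculation:

```python
# Sketch (toy numbers, not the marmoset data): compute tuning width in octaves
# and the fraction of units tuned more finely than a 1/5-octave reference.
import numpy as np

rng = np.random.default_rng(3)
freqs = np.logspace(np.log10(500), np.log10(32000), 121)   # test frequencies (Hz)

def bandwidth_octaves(rate_curve, freqs, criterion=0.5):
    """Tuning width in octaves at `criterion` x the peak rate (assumed criterion)."""
    above = freqs[rate_curve >= criterion * rate_curve.max()]
    return np.log2(above.max() / above.min())

def toy_unit(cf, bw_oct):
    """Gaussian tuning curve on a log-frequency axis."""
    log_f = np.log2(freqs / cf)
    return np.exp(-0.5 * (log_f / bw_oct) ** 2)

# A toy population mixing sharply and broadly tuned units.
widths = np.array([bandwidth_octaves(toy_unit(4000, bw), freqs)
                   for bw in rng.uniform(0.05, 0.6, size=300)])
frac_fine = np.mean(widths < 0.2)                          # finer than ~1/5 octave
print(f"fraction tuned more finely than 1/5 octave: {frac_fine:.2f}")
```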


2018, Vol 29 (7), pp. 2998-3009. Author(s): Haifu Li, Feixue Liang, Wen Zhong, Linqing Yan, Lucas Mesik, et al.

Spatial size tuning in the visual cortex has been considered as an important neuronal functional property for sensory perception. However, an analogous mechanism in the auditory system has remained controversial. In the present study, cell-attached recordings in the primary auditory cortex (A1) of awake mice revealed that excitatory neurons can be categorized into three types according to their bandwidth tuning profiles in response to band-passed noise (BPN) stimuli: nonmonotonic (NM), flat, and monotonic, with the latter two considered non-tuned for bandwidth. The prevalence of bandwidth-tuned (i.e., NM) neurons increases significantly from layer 4 to layer 2/3. With sequential cell-attached and whole-cell voltage-clamp recordings from the same neurons, we found that the bandwidth preference of excitatory neurons is largely determined by the excitatory synaptic input they receive, and that the bandwidth selectivity is further enhanced by flatly tuned inhibition observed in all cells. The latter can be attributed at least partially to the flat tuning of parvalbumin inhibitory neurons. The tuning of auditory cortical neurons for bandwidth of BPN may contribute to the processing of complex sounds.
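
Classifying units by their bandwidth tuning profile is the core step. A simplified rule-based sketch (the thresholds are illustrative assumptions, not the authors' criteria) shows how nonmonotonic, flat, and monotonic profiles might be separated:

```python
# Sketch (simplified rule, not the authors' criterion): sort bandwidth tuning
# curves into nonmonotonic (NM), flat, and monotonic classes from responses
# to band-passed noise of increasing bandwidth.
import numpy as np

def classify_bandwidth_tuning(resp):
    """Assign a tuning class from responses ordered by increasing bandwidth."""
    resp = np.asarray(resp, dtype=float)
    spread = (resp.max() - resp.min()) / resp.max()
    if spread < 0.20:                        # little modulation by bandwidth
        return "flat"
    if resp[-1] < 0.75 * resp.max() and np.argmax(resp) < len(resp) - 1:
        return "nonmonotonic"                # peak at an intermediate bandwidth
    return "monotonic"                       # response keeps growing with bandwidth

# Toy responses ordered by increasing noise bandwidth (e.g., 0.25 to 4 octaves).
examples = {
    "NM-like":   [0.30, 0.90, 1.00, 0.60, 0.40],
    "flat-like": [0.80, 0.85, 0.90, 0.88, 0.86],
    "mono-like": [0.20, 0.40, 0.60, 0.80, 1.00],
}
for name, resp in examples.items():
    print(name, "->", classify_bandwidth_tuning(resp))
```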


2004, Vol 91 (1), pp. 118-135. Author(s): Kyle T. Nakamoto, Jiping Zhang, Leonard M. Kitzes

The topographical response of a portion of an isofrequency contour in primary cat auditory cortex (AI) to a series of monaural and binaural stimuli was studied. Responses of single neurons were recorded to monaural characteristic frequency tones and to a matrix of binaural characteristic frequency tones varying in average binaural level (ABL) and interaural level difference (ILD). The topography of responses to monaural and binaural stimuli was appreciably different. Patches of cells that responded monotonically to increments in ABL alternated with patches that responded nonmonotonically to ABL. The patches were between 0.4 and 1 mm in length along an isofrequency contour. Differences were found among monotonic patches and among nonmonotonic patches. Topographically, the activated and silent populations of neurons varied with both changes in ILD and changes in ABL, suggesting that the area of responsive units may underlie the coding of sound level and sound location.
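
The stimulus set is defined by two binaural parameters, ABL and ILD, which together fix the level at each ear. A short sketch, assuming ABL = (L + R)/2 and ILD = R - L and an illustrative grid of values, builds such a stimulus matrix:

```python
# Sketch (illustrative grid, not the authors' exact values): construct the
# matrix of binaural stimuli from average binaural level (ABL) and interaural
# level difference (ILD).
import numpy as np

abls = np.arange(20, 81, 10)      # average binaural levels (dB SPL), assumed grid
ilds = np.arange(-30, 31, 10)     # interaural level differences (dB), + = right louder

def ear_levels(abl, ild):
    """Recover per-ear levels from ABL = (L + R) / 2 and ILD = R - L."""
    return abl - ild / 2.0, abl + ild / 2.0

matrix = [(abl, ild, *ear_levels(abl, ild)) for abl in abls for ild in ilds]
print("ABL  ILD   L(dB)  R(dB)")
for abl, ild, left, right in matrix[:5]:
    print(f"{abl:3.0f} {ild:4.0f} {left:7.1f} {right:6.1f}")
```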


2006, Vol 96 (3), pp. 1105-1115. Author(s): Yonatan I. Fishman, Mitchell Steinschneider

An important function of the auditory nervous system is to analyze the frequency content of environmental sounds. The neural structures involved in determining psychophysical frequency resolution remain unclear. Using a two-noise masking paradigm, the present study investigates the spectral resolution of neural populations in primary auditory cortex (A1) of awake macaques and the degree to which it matches psychophysical frequency resolution. Neural ensemble responses (auditory evoked potentials, multiunit activity, and current source density) evoked by a pulsed 60-dB SPL pure-tone signal fixed at the best frequency (BF) of the recorded neural populations were examined as a function of the frequency separation (ΔF) between the tone and two symmetrically flanking continuous 80-dB SPL, 50-Hz-wide bands of noise. ΔFs ranged from 0 to 50% of the BF, encompassing the range typically examined in psychoacoustic experiments. Responses to the signal were minimal for ΔF = 0% and progressively increased with ΔF, reaching a maximum at ΔF = 50%. Rounded exponential functions, used to model auditory filter shapes in psychoacoustic studies of frequency resolution, provided excellent fits to neural masking functions. Goodness-of-fit was greatest for response components in lamina 4 and lower lamina 3 and least for components recorded in more superficial cortical laminae. Physiological equivalent rectangular bandwidths (ERBs) increased with BF, measuring nearly 15% of the BF. These findings parallel results of psychoacoustic studies in both monkeys and humans, and thus indicate that a representation of perceptual frequency resolution is available at the level of A1.
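
The rounded exponential (roex) filter referenced here has a standard one-parameter form, W(g) = (1 + pg)exp(-pg) with g the normalized frequency distance, and an equivalent rectangular bandwidth of 4·f0/p. A sketch fitting this form to a toy masking function (the p value and noise are assumptions chosen only to land in the same ballpark as the reported ERB) illustrates the fit:

```python
# Sketch (assumed parameterization and toy data): fit a roex filter shape to a
# masking function and convert the slope p to an equivalent rectangular
# bandwidth (ERB).
import numpy as np
from scipy.optimize import curve_fit

def roex(g, p):
    """Rounded-exponential filter weight at normalized frequency distance g."""
    return (1.0 + p * g) * np.exp(-p * g)

f0 = 4000.0                                  # best frequency (Hz), assumed
delta_f = np.linspace(0.0, 0.5, 11)          # ΔF as a fraction of BF (0-50%)

# Toy "neural masking function": the response is small when the flanking noise
# sits on the BF (ΔF = 0) and grows as the noise moves away, mirroring 1 - W.
p_true = 30.0
rng = np.random.default_rng(4)
response = 1.0 - roex(delta_f, p_true) + 0.02 * rng.standard_normal(delta_f.size)

# Fit the filter slope p, then convert to an equivalent rectangular bandwidth.
p_hat, _ = curve_fit(lambda g, p: 1.0 - roex(g, p), delta_f, response, p0=[20.0])
erb = 4.0 * f0 / p_hat[0]
print(f"fitted p = {p_hat[0]:.1f}, ERB = {erb:.0f} Hz ({erb / f0 * 100:.0f}% of BF)")
```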


2012, Vol 24 (9), pp. 1896-1907. Author(s): I-Hui Hsieh, Paul Fillmore, Feng Rong, Gregory Hickok, Kourosh Saberi

Frequency modulation (FM) is an acoustic feature of nearly all complex sounds. Directional FM sweeps are especially pervasive in speech, music, animal vocalizations, and other natural sounds. Although the existence of FM-selective cells in the auditory cortex of animals has been documented, evidence in humans remains equivocal. Here we used multivariate pattern analysis to identify cortical selectivity for direction of a multitone FM sweep. This method distinguishes one pattern of neural activity from another within the same ROI, even when overall level of activity is similar, allowing for direct identification of FM-specialized networks. Standard contrast analysis showed that despite robust activity in auditory cortex, no clusters of activity were associated with up versus down sweeps. Multivariate pattern analysis classification, however, identified two brain regions as selective for FM direction, the right primary auditory cortex on the supratemporal plane and the left anterior region of the superior temporal gyrus. These findings are the first to directly demonstrate the existence of FM direction selectivity in the human auditory cortex.
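
Multivariate pattern analysis asks whether the spatial pattern of activity within an ROI, rather than its overall level, distinguishes the two sweep directions. A generic sketch with a cross-validated linear classifier on synthetic voxel patterns (not the authors' exact pipeline) illustrates the logic:

```python
# Sketch (generic MVPA on synthetic data): train a linear classifier to tell
# upward from downward FM sweeps using multivoxel activity patterns; above-
# chance cross-validated accuracy indicates the ROI carries direction
# information even when overall activity is similar across conditions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_trials, n_voxels = 80, 120
labels = np.repeat([0, 1], n_trials // 2)        # 0 = up sweep, 1 = down sweep

# Toy voxel patterns: a weak direction-dependent signal in a few voxels,
# buried in noise, so overall activity is similar for both sweep directions.
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.4

clf = LinearSVC(dual=False)                      # linear decoder
acc = cross_val_score(clf, patterns, labels, cv=5)
print(f"cross-validated decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```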

