Influence of Context and Behavior on Stimulus Reconstruction From Neural Activity in Primary Auditory Cortex

2009 · Vol. 102 (6) · pp. 3329–3339
Author(s): Nima Mesgarani, Stephen V. David, Jonathan B. Fritz, Shihab A. Shamma

Population responses of cortical neurons encode considerable details about sensory stimuli, and the encoded information is likely to change with stimulus context and behavioral conditions. The details of encoding are difficult to discern across large sets of single-neuron data because of the complexity of naturally occurring stimulus features and cortical receptive fields. To overcome this problem, we used the method of stimulus reconstruction to study how complex sounds are encoded in primary auditory cortex (AI). This method uses a linear spectro-temporal model to map neural population responses to an estimate of the stimulus spectrogram, thereby enabling a direct comparison between the original stimulus and its reconstruction. By assessing the fidelity of such reconstructions from responses to modulated noise stimuli, we estimated the range over which AI neurons can faithfully encode spectro-temporal features. For stimuli containing statistical regularities (typical of those found in complex natural sounds), we found that knowledge of these regularities substantially improves reconstruction accuracy over reconstructions that do not take advantage of this prior knowledge. Finally, contrasting stimulus reconstructions under different behavioral states offered a novel view of the rapid changes in spectro-temporal response properties induced by attentional and motivational states.
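To make the reconstruction method concrete, here is a minimal sketch in Python, not the authors' code: it assumes hypothetical, pre-binned arrays `responses` (neurons x time bins) and `spectrogram` (frequency channels x time bins), and uses ridge regression as one plausible way to fit the linear spectro-temporal mapping. Reconstruction fidelity can then be scored, for example, as the correlation between the original and reconstructed spectrograms.

```python
import numpy as np

def lagged_design(responses, n_lags):
    """Design matrix whose row at time t holds every neuron's response at
    t, t+1, ..., t+n_lags-1 (the response lags the stimulus it encodes)."""
    n_neurons, n_t = responses.shape
    X = np.zeros((n_t, n_neurons * n_lags))
    for lag in range(n_lags):
        shifted = np.roll(responses, -lag, axis=1)
        if lag:
            shifted[:, -lag:] = 0.0  # zero-pad instead of wrapping around
        X[:, lag * n_neurons:(lag + 1) * n_neurons] = shifted.T
    return X

def fit_reconstruction_filter(responses, spectrogram, n_lags=20, ridge=1e2):
    """Ridge regression from lagged population responses to each
    frequency channel of the stimulus spectrogram."""
    X = lagged_design(responses, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ spectrogram.T)  # (features x n_freqs)

def reconstruct(responses, W, n_lags=20):
    """Apply the fitted filter to get an estimated spectrogram."""
    return (lagged_design(responses, n_lags) @ W).T   # (n_freqs x n_timebins)
```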

2008 · Vol. 99 (4) · pp. 1616–1627
Author(s): Ben Scholl, Xiang Gao, Michael Wehr

Responses of cortical neurons to sensory stimuli within their receptive fields can be profoundly altered by the stimulus context. In visual and somatosensory cortex, contextual interactions have been shown to change sign from facilitation to suppression depending on stimulus strength. Contextual modulation of high-contrast stimuli tends to be suppressive, but for low-contrast stimuli tends to be facilitative. This trade-off may optimize contextual integration by cortical cells and has been suggested to be a general feature of cortical processing, but it remains unknown whether a similar phenomenon occurs in auditory cortex. Here we used whole cell and single-unit recordings to investigate how contextual interactions in auditory cortical neurons depend on the relative intensity of masker and probe stimuli in a two-tone stimulus paradigm. We tested the hypothesis that relatively low-level probes should show facilitation, whereas relatively high-level probes should show suppression. We found that contextual interactions were primarily suppressive across all probe levels, and that relatively low-level probes were subject to stronger suppression than high-level probes. These results were virtually identical for spiking and subthreshold responses. This suggests that, unlike visual cortical neurons, auditory cortical neurons show maximal suppression rather than facilitation for relatively weak stimuli.
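One conventional way to quantify such contextual interactions is a modulation index that compares the observed two-tone response with the linear sum of the masker-alone and probe-alone responses. The sketch below is an illustration with hypothetical firing rates, not the authors' analysis; the exact metric used in the study may differ.

```python
import numpy as np

def modulation_index(probe_alone, masker_probe, masker_alone):
    """(observed - predicted) / (observed + predicted) per probe level,
    where `predicted` is the linear sum of masker-alone and probe-alone
    responses. Negative values indicate suppression, positive facilitation."""
    predicted = probe_alone + masker_alone
    observed = masker_probe
    return (observed - predicted) / (observed + predicted + 1e-12)

# Hypothetical example: stronger suppression for low-level probes,
# as reported above.
probe_levels_db = np.array([20, 40, 60, 80])
probe_alone  = np.array([5.0, 12.0, 20.0, 28.0])   # spikes/s
masker_alone = 10.0                                 # spikes/s
masker_probe = np.array([8.0, 15.0, 24.0, 34.0])   # spikes/s

for level, mi in zip(probe_levels_db,
                     modulation_index(probe_alone, masker_probe, masker_alone)):
    print(f"{level} dB probe: index = {mi:+.2f}")
```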


Author(s): Israel Nelken

Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has a particular prominence. Indeed, the relationships between neural responses to simple stimuli (usually pure tone bursts), which are often used to characterize auditory neurons, and neural responses to complex sounds (in particular natural sounds) may be complex. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an “acoustic biotope” selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds. Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as “auditory objects.” Whatever the exact mechanism is, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds. The generation of such variability may be the main contribution of auditory cortex to the coding of natural sounds.


2011 · Vol. 106 (2) · pp. 1016–1027
Author(s): Martin Pienkowski, Jos J. Eggermont

The distribution of neuronal characteristic frequencies over the area of primary auditory cortex (AI) roughly reflects the tonotopic organization of the cochlea. However, because the area of AI activated by any given sound frequency increases erratically with sound level, it has generally been proposed that frequency is represented in AI not with a rate-place code but with some more complex, distributed code. Here, on the basis of both spike and local field potential (LFP) recordings in the anesthetized cat, we show that the tonotopic representation in AI is much more level tolerant when mapped with spectrotemporally dense tone pip ensembles rather than with individually presented tone pips. That is, we show that the tuning properties of individual unit and LFP responses are less variable with sound level under dense compared with sparse stimulation, and that the spatial frequency resolution achieved by the AI neural population at moderate stimulus levels (65 dB SPL) is better with densely than with sparsely presented sounds. This implies that nonlinear processing in the central auditory system can compensate (in part) for the level-dependent coding of sound frequency in the cochlea, and suggests that there may be a functional role for the cortical tonotopic map in the representation of complex sounds.
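The level tolerance of tonotopy can be summarized per unit by how far its best frequency drifts across sound levels. Below is a minimal sketch of such a metric, assuming a hypothetical `rates` matrix of mean responses; it is an illustration, not the authors' analysis.

```python
import numpy as np

def bf_spread_octaves(rates, freqs_hz):
    """rates: (n_levels, n_freqs) mean tone-pip responses for one unit;
    freqs_hz: the tested frequencies. Returns the spread of best frequency
    across sound levels in octaves (0 = perfectly level-tolerant tuning)."""
    bf = freqs_hz[np.argmax(rates, axis=1)]   # best frequency at each level
    return float(np.ptp(np.log2(bf)))

# Comparing this spread for the same unit under dense vs. sparse stimulation
# quantifies the improvement in level tolerance described above.
```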


2018 · Vol. 29 (7) · pp. 2998–3009
Author(s): Haifu Li, Feixue Liang, Wen Zhong, Linqing Yan, Lucas Mesik, ...

Spatial size tuning in the visual cortex has been considered an important neuronal functional property for sensory perception. However, whether an analogous mechanism exists in the auditory system has remained controversial. In the present study, cell-attached recordings in the primary auditory cortex (A1) of awake mice revealed that excitatory neurons can be categorized into three types according to their bandwidth tuning profiles in response to band-passed noise (BPN) stimuli: nonmonotonic (NM), flat, and monotonic, with the latter two considered non-tuned for bandwidth. The prevalence of bandwidth-tuned (i.e., NM) neurons increases significantly from layer 4 to layer 2/3. With sequential cell-attached and whole-cell voltage-clamp recordings from the same neurons, we found that the bandwidth preference of excitatory neurons is largely determined by the excitatory synaptic input they receive, and that bandwidth selectivity is further enhanced by flatly tuned inhibition observed in all cells. The latter can be attributed at least partially to the flat tuning of parvalbumin inhibitory neurons. The tuning of auditory cortical neurons for the bandwidth of BPN may contribute to the processing of complex sounds.
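A simple way to illustrate the three categories is a rule-based classifier over a bandwidth tuning curve. The thresholds below (`flat_tol`, `mono_frac`) are hypothetical illustrations, not the criteria used in the study.

```python
import numpy as np

def classify_bandwidth_tuning(rates, flat_tol=0.25, mono_frac=0.75):
    """rates: mean response vs. increasing BPN bandwidth (1-D array).

    - 'flat' if the response range is small relative to the peak,
    - 'nonmonotonic' if the peak falls at an intermediate bandwidth and the
      response at the widest bandwidth drops below `mono_frac` of the peak,
    - 'monotonic' otherwise.
    """
    rates = np.asarray(rates, dtype=float)
    peak = rates.max()
    if peak <= 0 or (rates.max() - rates.min()) / peak < flat_tol:
        return "flat"
    if rates.argmax() not in (0, len(rates) - 1) and rates[-1] < mono_frac * peak:
        return "nonmonotonic"
    return "monotonic"

print(classify_bandwidth_tuning([2, 8, 14, 9, 4]))   # -> nonmonotonic
print(classify_bandwidth_tuning([3, 6, 9, 12, 15]))  # -> monotonic
```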


2019
Author(s): Ruiye Ni, David A. Bender, Dennis L. Barbour

The ability to process speech signals in challenging listening environments is critical for speech perception. Great effort has been devoted to revealing the encoding mechanisms of single units; however, single-unit responses typically show large variability, and the population coding mechanism has yet to be revealed. In this study, we aimed to examine how a population of neurons encodes behaviorally relevant signals subject to changes in intensity and signal-to-noise ratio (SNR). We recorded single-unit activity from the primary auditory cortex of awake common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white Gaussian noise (WGN) and vocalization babble (Babble). Pooling all single units together, a pseudo-population analysis showed that the intra- and inter-trajectory angle evolutions of the population neural responses track vocalization identity and intensity/SNR, respectively. The ability of the trajectory to track vocalization attributes was degraded to different degrees by the two noise types. Discrimination analyses using neural response classifiers revealed that a finer optimal temporal resolution and a longer time scale of temporal dynamics were needed for vocalizations in noise than for vocalizations at multiple intensities. Above the detection threshold, the ability of population responses to discriminate between different vocalizations was largely retained.

Significance Statement: How the brain achieves precise encoding of acoustic signals in noisy environments is of great interest to neuroscientists. Relatively few studies have tackled this question from the perspective of neural population responses. Population analysis reveals how complex acoustic stimuli are encoded by a pool of single units via vector coding. We suggest that spatial population response vectors are one important way for neurons to integrate multiple attributes of natural acoustic signals, specifically marmoset vocalizations.
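The trajectory analysis can be illustrated with a short sketch: normalize pseudo-population vectors bin by bin, then measure angles within one condition's trajectory (identity coding) and between conditions (intensity/SNR coding). This is an assumed simplification of the authors' pipeline, with hypothetical spike-count arrays.

```python
import numpy as np

def population_trajectory(spike_counts):
    """spike_counts: (n_neurons x n_timebins) pseudo-population activity;
    returns a unit-norm population vector for each time bin."""
    norms = np.linalg.norm(spike_counts, axis=0, keepdims=True) + 1e-12
    return spike_counts / norms

def trajectory_angle(traj_a, traj_b):
    """Angle (degrees) between time-matched population vectors of two
    trajectories with identical shapes."""
    cos = np.sum(traj_a * traj_b, axis=0).clip(-1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Intra-trajectory angles (consecutive bins within one condition) index
# identity coding; inter-trajectory angles (same bins across intensities or
# SNRs) index intensity/SNR coding.
```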


Author(s): Joshua D Downer, James Bigelow, Melissa Runfeldt, Brian James Malone

Fluctuations in the amplitude envelope of complex sounds provide critical cues for hearing, particularly for speech and animal vocalizations. Responses to amplitude modulation (AM) in the ascending auditory pathway have chiefly been described for single neurons. How neural populations might collectively encode and represent information about AM remains poorly characterized, even in primary auditory cortex (A1). We modeled population responses to AM based on data recorded from A1 neurons in awake squirrel monkeys and evaluated how accurately single trial responses to modulation frequencies from 4 to 512 Hz could be decoded as functions of population size, composition, and correlation structure. We found that a population-based decoding model that simulated convergent, equally weighted inputs was highly accurate and remarkably robust to the inclusion of neurons that were individually poor decoders. By contrast, average rate codes based on convergence performed poorly; effective decoding using average rates was only possible when the responses of individual neurons were segregated, as in classical population decoding models using labeled lines. The relative effectiveness of dynamic rate coding in auditory cortex was explained by shared modulation phase preferences among cortical neurons, despite heterogeneity in rate-based modulation frequency tuning. Our results indicate significant population-based synchrony in primary auditory cortex and suggest that robust population coding of the sound envelope information present in animal vocalizations and speech can be reliably achieved even with indiscriminate pooling of cortical responses. These findings highlight the importance of firing rate dynamics in population-based sensory coding.
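The contrast between the two pooled decoding schemes can be sketched as follows, with hypothetical data rather than the recorded responses: a dynamic-rate decoder sums neurons into a pooled PSTH and template-matches it, while an average-rate decoder additionally collapses time to a scalar. `templates` and `rate_templates` are assumed to be per-modulation-frequency means computed from held-out training trials.

```python
import numpy as np

def pooled_dynamic_decode(trial, templates):
    """trial: (n_neurons x n_timebins) single-trial responses. Sum neurons
    into a pooled PSTH-like vector, then pick the nearest AM-frequency
    template (a list of vectors, one per modulation frequency)."""
    pooled = trial.sum(axis=0)
    dists = [np.linalg.norm(pooled - t) for t in templates]
    return int(np.argmin(dists))

def pooled_rate_decode(trial, rate_templates):
    """Collapse both neurons and time to one scalar rate, then classify
    against per-frequency mean rates (a 1-D array)."""
    rate = trial.sum(axis=0).mean()
    return int(np.argmin(np.abs(rate_templates - rate)))

# With shared modulation-phase preferences across neurons, the pooled PSTH
# retains stimulus-locked temporal structure, so the dynamic decoder can
# separate modulation frequencies that identical average rates cannot.
```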


2006 · Vol. 96 (1) · pp. 252–258
Author(s): Rajiv Narayan, Gilberto Graña, Kamal Sen

Understanding how single cortical neurons discriminate between sensory stimuli is fundamental to providing a link between cortical neural responses and perception. The discrimination of sensory stimuli by cortical neurons has been intensively investigated in the visual and somatosensory systems. However, relatively little is known about discrimination of sounds by auditory cortical neurons. Auditory cortex plays a particularly important role in the discrimination of complex sounds, e.g., vocal communication sounds. The rich dynamic structure of such complex sounds on multiple time scales motivates two questions regarding cortical discrimination. How does discrimination depend on the temporal resolution of the cortical response? How does discrimination accuracy evolve over time? Here we investigate these questions in field L, the analogue of primary auditory cortex in zebra finches, analyzing temporal resolution and temporal integration in the discrimination of conspecific songs (songs of the bird's own species) for both anesthetized and awake subjects. We demonstrate the existence of distinct time scales for temporal resolution and temporal integration and explain how they arise from cortical neural responses to complex dynamic sounds.
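A standard way to probe temporal resolution in such discrimination analyses is to smooth spike trains at several time constants and score a nearest-template classifier at each one; the best-performing time constant estimates the effective resolution. The sketch below is a generic implementation of that idea, not the authors' code; `trials_by_song` is a hypothetical list (one entry per song) of two or more equal-length binned spike trains.

```python
import numpy as np

def smooth(spikes, tau_bins):
    """Causal exponential filter; tau_bins sets the temporal resolution."""
    kernel = np.exp(-np.arange(max(1, int(5 * tau_bins))) / tau_bins)
    return np.convolve(spikes, kernel / kernel.sum(), mode="same")

def discriminate(trials_by_song, tau_bins):
    """Leave-one-out template matching; returns fraction of trials assigned
    to the correct song at this temporal resolution."""
    correct, total = 0, 0
    for s, trials in enumerate(trials_by_song):
        for i, trial in enumerate(trials):
            test = smooth(trial, tau_bins)
            dists = []
            for s2, trials2 in enumerate(trials_by_song):
                rest = [smooth(t, tau_bins) for j, t in enumerate(trials2)
                        if not (s2 == s and j == i)]
                dists.append(np.linalg.norm(test - np.mean(rest, axis=0)))
            correct += int(np.argmin(dists) == s)
            total += 1
    return correct / total

# Temporal integration can be probed the same way by truncating each trial
# at increasing durations and tracking how accuracy grows over time.
```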


2006 · Vol. 96 (6) · pp. 2972–2983
Author(s): Gabriel Soto, Nancy Kopell, Kamal Sen

Two fundamental issues in auditory cortical processing are the relative importance of thalamocortical versus intracortical circuits in shaping response properties in primary auditory cortex (ACx), and how neuromodulators acting on these circuits produce dynamic changes in network and receptive field properties that enhance signal processing and adaptive behavior. To investigate these issues, we developed a computational model of layers III and IV (LIII/IV) of ACx, constrained by anatomical and physiological data. We focus on how the local and global cortical architecture shapes the receptive fields (RFs) of cortical cells and on how different well-established cholinergic effects on the cortical network reshape the frequency-tuning properties of cells in ACx. We identify key thalamocortical and intracortical circuits that strongly affect the tuning curves of model cortical neurons and are also sensitive to cholinergic modulation. We then study how differential cholinergic modulation of network parameters changes the tuning properties of our model cells and propose two different mechanisms: one intracortical (involving muscarinic receptors) and one thalamocortical (involving nicotinic receptors), which may be involved in rapid plasticity in ACx, as recently reported by Fritz and coworkers.
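The two proposed mechanisms can be caricatured in a few lines with a linear rate model that is far simpler than the authors' conductance-based network: a narrow thalamocortical input and a broad intracortical input sum at the cortical cell, and the two cholinergic effects are reduced to gain changes. All parameters here are assumed for illustration.

```python
import numpy as np

freqs = np.linspace(-2.0, 2.0, 41)                 # octaves relative to BF
thalamic = np.exp(-freqs**2 / (2 * 0.3**2))        # narrow thalamocortical drive
intracortical = np.exp(-freqs**2 / (2 * 1.0**2))   # broad intracortical drive

def tuning_curve(g_thalamic=1.0, g_intracortical=1.0):
    """Linear sum of the two input sources; the gains stand in for
    cholinergic effects on each pathway."""
    return g_thalamic * thalamic + g_intracortical * intracortical

baseline = tuning_curve()
# Nicotinic ACh modeled as boosting thalamocortical gain; muscarinic ACh as
# suppressing intracortical transmission. Both sharpen tuning around BF.
with_ach = tuning_curve(g_thalamic=1.5, g_intracortical=0.5)
print(with_ach.max() / baseline.max())
```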


2000 · Vol. 84 (3) · pp. 1453–1463
Author(s): Jos J. Eggermont

Responses of single- and multi-units in primary auditory cortex were recorded for gap-in-noise stimuli with different durations of the leading noise burst. Both firing rate and inter-spike interval representations were evaluated. The minimum detectable gap decreased exponentially with the duration of the leading burst, reaching an asymptote for durations of 100 ms. Although the leading and trailing noise bursts had the same frequency content, the dependence on leading-burst duration was correlated with psychophysical estimates of across-frequency-channel gap thresholds in humans (where the leading and trailing bursts differ in frequency content). The duration of the leading burst plus that of the gap was represented in the all-order inter-spike interval histograms of cortical neurons. The recovery functions of cortical neurons could be modeled on the basis of fast synaptic depression and the after-hyperpolarization produced by the onset response to the leading noise burst. This suggests that the minimum gap represented in the firing pattern of neurons in primary auditory cortex, and the minimum gap detected in behavioral tasks, is largely determined by properties intrinsic to those cells, or potentially to subcortical cells.
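The proposed recovery mechanism lends itself to a compact sketch: responsiveness after the onset response is the product of two exponentially recovering factors, one for synaptic depression and one for after-hyperpolarization, and the minimum detectable gap is where this product crosses a threshold. All parameter values below are hypothetical, not fitted to the data.

```python
import numpy as np

def response_gain(t_ms, tau_depression=50.0, tau_ahp=150.0,
                  w_depression=0.6, w_ahp=0.4):
    """Fractional responsiveness t_ms after the onset response (0..1):
    the product of recovery from synaptic depression and recovery from
    after-hyperpolarization, each decaying exponentially."""
    recovery_dep = 1.0 - w_depression * np.exp(-t_ms / tau_depression)
    recovery_ahp = 1.0 - w_ahp * np.exp(-t_ms / tau_ahp)
    return recovery_dep * recovery_ahp

def minimum_gap(threshold=0.5, dt=1.0):
    """Shortest gap after which the response to the trailing burst exceeds
    the detection threshold. Gain recovers monotonically, so this halts."""
    t = 0.0
    while response_gain(t) < threshold:
        t += dt
    return t

print(minimum_gap())  # hypothetical minimum detectable gap, in ms
```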

