Variability and information content in auditory cortex spike trains during an interval-discrimination task

2013 ◽  
Vol 110 (9) ◽  
pp. 2163-2174 ◽  
Author(s):  
Juan M. Abolafia ◽  
M. Martinez-Garcia ◽  
G. Deco ◽  
M. V. Sanchez-Vives

Processing of temporal information is central to audition. In this study, we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared between engaged and idle brain states. We found that spike-firing variability, measured with the Fano factor, was markedly reduced in engaged trials, not only during stimulation but also between stimuli. We next explored whether this decrease in variability was associated with increased information encoding. Our information-theoretic analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task.
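The Fano factor used above has a simple definition: the variance of spike counts across repeated trials divided by their mean. A minimal sketch with made-up spike counts (not the study's data):

```python
import numpy as np

def fano_factor(spike_counts):
    """Fano factor: variance of spike counts across trials divided by
    the mean count. A Poisson process gives 1; values below 1 indicate
    more reliable, less variable firing."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical per-trial spike counts in a fixed window (values made up):
idle_counts = [3, 9, 1, 12, 4, 10, 2, 8]   # high trial-to-trial variability
engaged_counts = [6, 7, 5, 6, 7, 6, 5, 7]  # tighter, more reliable counts

print(fano_factor(idle_counts))     # larger
print(fano_factor(engaged_counts))  # smaller
```

Under this definition, the reduced Fano factor reported during engagement corresponds to spike counts clustering more tightly around their mean from trial to trial.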

2021 ◽  
Author(s):  
Sudha Sharma ◽  
Hemant Kumar Srivastava ◽  
Sharba Bandyopadhyay

Abstract
So far, our understanding of the role of the auditory cortex (ACX) in processing visual information has been limited to the infragranular layers of the ACX, which have been shown to respond to visual stimulation. Here, we investigate neurons in the supragranular layers of the mouse ACX using 2-photon calcium imaging. Contrary to previous reports, we show that more than 20% of responding neurons in layer 2/3 of the ACX respond to full-field visual stimulation. These responses involve both excitation and hyperpolarization. The primary ACX (A1) has a greater proportion of visual responses by hyperpolarization than by excitation, likely driven by inhibitory neurons of the infragranular layers of the ACX rather than by local layer 2/3 inhibitory neurons. Further, we found that more than 60% of neurons in layer 2/3 of A1 are multisensory in nature. We also show the presence of multisensory neurons in close proximity to exclusively auditory neurons, and a reduction in the noise correlations of the recorded neurons during multisensory presentation. This is evidence in favour of a deep and intricate visual influence over auditory processing. The results have strong implications for decoding visual influences over the early auditory cortical regions.
Significance statement
To understand what features of our visual world are processed in the auditory cortex (ACX), it is important to characterize the responses of auditory cortical neurons to visual stimuli. Here, we show the presence of visual and multisensory responses in the supragranular layers of the ACX. Hyperpolarization to visual stimulation is more commonly observed in the primary ACX. Multisensory stimulation results in suppression of responses compared to unisensory stimulation and an overall decrease in noise correlation in the primary ACX. The close-knit architecture of these neurons with auditory-specific neurons suggests an influence of non-auditory stimuli on auditory processing.
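The noise correlations referred to above are conventionally computed as the correlation of trial-to-trial response fluctuations between a pair of neurons, after removing each stimulus's mean response. A minimal sketch under that conventional definition (not the authors' exact pipeline):

```python
import numpy as np

def noise_correlation(resp_a, resp_b, stimulus_ids):
    """Noise correlation: Pearson correlation of the trial-by-trial
    fluctuations of two neurons around each stimulus's mean response.
    Subtracting the per-stimulus mean removes signal correlation, so
    what remains reflects shared trial-to-trial variability."""
    resp_a = np.asarray(resp_a, dtype=float)
    resp_b = np.asarray(resp_b, dtype=float)
    ids = np.asarray(stimulus_ids)
    fluct_a = np.empty_like(resp_a)
    fluct_b = np.empty_like(resp_b)
    for s in np.unique(ids):
        m = ids == s
        fluct_a[m] = resp_a[m] - resp_a[m].mean()
        fluct_b[m] = resp_b[m] - resp_b[m].mean()
    return np.corrcoef(fluct_a, fluct_b)[0, 1]
```

A decrease in this quantity during multisensory presentation, as reported above, means the two neurons' residual fluctuations become less shared from trial to trial.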


2019 ◽  
Author(s):  
Fabiano Baroni ◽  
Benjamin Morillon ◽  
Agnès Trébuchon ◽  
Catherine Liégeois-Chauvel ◽  
Itsaso Olasagasti ◽  
...  

Abstract
Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding, remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they were listening to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range, and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better phase than power information decoding, and a bimodal spectral profile of information content with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content.
Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.
Author summary
Like most animal vocalizations, speech results from a pseudo-rhythmic process that reflects the convergence of motor and auditory neural substrates and the natural resonance properties of the vocal apparatus towards efficient communication. Here, we leverage the excellent temporal and spatial resolution of intracranial EEG to demonstrate that neural activity in human early auditory cortical areas during speech perception exhibits a dual-scale spectral profile of power changes, with speech increasing power in low (delta-theta) and high (gamma to high-gamma) frequency ranges, while decreasing power in intermediate (alpha-beta) frequencies. Single-trial multivariate decoding also resulted in a bimodal spectral profile of information content, with better decoding at low and high frequencies than at intermediate ones. From both spectral and informational perspectives, these patterns are consistent with the activity of a relatively simple computational model comprising two reciprocally connected excitatory/inhibitory sub-networks operating at different (low and high) timescales. By combining experimental, decoding and modeling approaches, we provide consistent evidence for the existence, information coding value and underlying neuronal architecture of dual timescale processing in human auditory cortex.
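The single-trial phase and power features contrasted in the decoding analysis above can be illustrated with a generic extraction via the analytic (Hilbert) signal. This is a standard sketch, not the authors' pipeline; the sampling rate and band edges are illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_phase_power(trial, fs, f_lo, f_hi, order=4):
    """Band-limit one trial, then take the analytic (Hilbert) signal:
    its angle is instantaneous phase and its squared magnitude is power,
    the two single-trial feature types compared in phase vs. power
    decoding."""
    sos = butter(order, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    analytic = hilbert(sosfiltfilt(sos, trial))
    return np.angle(analytic), np.abs(analytic) ** 2

fs = 500.0                                  # illustrative sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
trial = np.sin(2 * np.pi * 5 * t)           # a pure 5 Hz "theta-band" component
phase, power = band_phase_power(trial, fs, 3.0, 8.0)
```

Feeding such phase or power features into a classifier, trial by trial, is how a spectral profile of information content is typically built up across frequency bands.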


2009 ◽  
Vol 102 (5) ◽  
pp. 2638-2656 ◽  
Author(s):  
Hiroki Asari ◽  
Anthony M. Zador

Acoustic processing requires integration over time. We have used in vivo intracellular recording to measure neuronal integration times in anesthetized rats. Using natural sounds and other stimuli, we found that synaptic inputs to auditory cortical neurons showed a rather long context dependence, up to ≥4 s (τ ∼ 1 s), even though sound-evoked excitatory and inhibitory conductances per se rarely lasted ≳100 ms. Thalamic neurons showed only a much faster form of adaptation with a decay constant τ <100 ms, indicating that the long-lasting form originated from presynaptic mechanisms in the cortex, such as synaptic depression. Restricting knowledge of the stimulus history to only a few hundred milliseconds reduced the predictable response component to about half that of the optimal infinite-history model. Our results demonstrate the importance of long-range temporal effects in auditory cortex and suggest a potential neural substrate for auditory processing that requires integration over timescales of seconds or longer, such as stream segregation.
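The presynaptic mechanism proposed above, synaptic depression, can be illustrated with a minimal resource-depletion model (illustrative parameters, not the paper's fitted model): each spike consumes a fraction of the available synaptic resources, which recover slowly, so transmission at any moment depends on spiking up to several seconds in the past.

```python
import numpy as np

def depressing_synapse(spike_times, t_end, dt=0.001, u=0.5, tau_rec=1.0):
    """Minimal synaptic depression sketch: each spike transmits a
    fraction u of the available resources r, which then recover toward
    1 with time constant tau_rec (seconds). With tau_rec ~ 1 s, the
    response to a sound carries a context dependence lasting seconds."""
    n = int(round(t_end / dt))
    r = 1.0
    efficacy = []  # (spike time, transmitted amount)
    spike_idx = {int(round(s / dt)) for s in spike_times}
    for i in range(n):
        r += (1.0 - r) * dt / tau_rec      # Euler step: recovery toward 1
        if i in spike_idx:
            efficacy.append((i * dt, u * r))
            r -= u * r                     # depletion by the spike
    return efficacy

# Two closely spaced spikes, then one after a ~3 s gap (made-up times):
print(depressing_synapse([0.0, 0.1, 3.0], t_end=3.5))
```

The second spike transmits much less than the first, while the spike after a long silent gap transmits nearly the full amount, qualitatively reproducing a seconds-long, presynaptically generated context dependence.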


2000 ◽  
Vol 12 (3) ◽  
pp. 449-460 ◽  
Author(s):  
G. Dehaene-Lambertz

Early cerebral specialization and lateralization for auditory processing in 4-month-old infants were studied by recording high-density evoked potentials to acoustic and phonetic changes in a series of repeated stimuli (either tones or syllables). Mismatch responses to these stimuli exhibit a distinct topography, suggesting that different neural networks within the temporal lobe are involved in the perception and representation of the different features of an auditory stimulus. These data confirm that specialized modules are present within the auditory cortex very early in development. However, for both syllables and continuous tones, higher voltages were recorded over the left hemisphere than over the right, with no significant hemisphere-by-stimulus-type interaction. This suggests that there is no greater left-hemisphere involvement in phonetic processing than in acoustic processing during the first months of life.


2005 ◽  
Vol 17 (10) ◽  
pp. 1519-1531 ◽  
Author(s):  
Kerstin Sander ◽  
Henning Scheich

Evidence suggests that in animals their own species-specific communication sounds are processed predominantly in the left hemisphere. In contrast, processing linguistic aspects of human speech involves the left hemisphere, whereas processing some prosodic aspects of speech as well as other not yet well-defined attributes of human voices predominantly involves the right hemisphere. This leaves open the question of hemispheric processing of universal (species-specific) human vocalizations that are more directly comparable to animal vocalizations. The present functional magnetic resonance imaging study addresses this question. Twenty subjects listened to human laughing and crying presented either in an original or time-reversed version while performing a pitch-shift detection task to control attention. Time-reversed presentation of these sounds is a suitable auditory control because it does not change the overall spectral content. The auditory cortex, amygdala, and insula in the left hemisphere were more strongly activated by original than by time-reversed laughing and crying. Thus, similar to speech, these nonspeech vocalizations involve predominantly left-hemisphere auditory processing. Functional data suggest that this lateralization effect is more likely based on acoustical similarities between speech and laughing or crying than on similarities with respect to communicative functions. Both the original and time-reversed laughing and crying activated more strongly the right insula, which may be compatible with its assumed function in emotional self-awareness.


1997 ◽  
Vol 78 (6) ◽  
pp. 3489-3492 ◽  
Author(s):  
Yunfeng Zhang ◽  
Nobuo Suga

Zhang, Yunfeng and Nobuo Suga. Corticofugal amplification of subcortical responses to single tone stimuli in the mustached bat. J. Neurophysiol. 78: 3489–3492, 1997. Since 1962, physiological data on corticofugal effects on subcortical auditory neurons have been controversial: inhibitory, excitatory, or both. An inhibitory effect has been observed much more frequently than an excitatory effect. Recent studies performed with an improved experimental design indicate that the corticofugal system mediates highly focused positive feedback to physiologically “matched” subcortical neurons and widespread lateral inhibition to “unmatched” subcortical neurons, in order to adjust and improve information processing. These results raise a question: what happens to subcortical auditory responses when the corticofugal system, including matched and unmatched cortical neurons, is functionally eliminated? We temporarily inactivated both matched and unmatched neurons in the primary auditory cortex of the mustached bat with muscimol (an agonist of the inhibitory transmitter GABA) and measured the effect of cortical inactivation on subcortical auditory responses. Cortical inactivation reduced auditory responses in the medial geniculate body and the inferior colliculus. This reduction was larger (60 vs. 34%) and faster (11 vs. 31 min) for thalamic neurons than for collicular neurons. Our data indicate that the corticofugal system amplifies collicular auditory responses by 1.5 times and thalamic responses by 2.5 times on average. The data are consistent with a scheme in which positive feedback from the auditory cortex is modulated by inhibition that may mostly take place in the cortex.


1998 ◽  
Vol 10 (4) ◽  
pp. 536-540 ◽  
Author(s):  
Pascal Belin ◽  
Monica Zilbovicius ◽  
Sophie Crozier ◽  
Lionel Thivard ◽  
Anne Fontaine ◽  
...  

To investigate the role of temporal processing in language lateralization, we monitored asymmetry of cerebral activation in human volunteers using positron emission tomography (PET). Subjects were scanned during passive auditory stimulation with nonverbal sounds containing rapid (40 msec) or extended (200 msec) frequency transitions. Bilateral symmetric activation was observed in the auditory cortex for slow frequency transitions. In contrast, left-biased asymmetry was observed in response to rapid frequency transitions due to reduced response of the right auditory cortex. These results provide direct evidence that auditory processing of rapid acoustic transitions is lateralized in the human brain. Such functional asymmetry in temporal processing is likely to contribute to language lateralization from the lowest levels of cortical processing.


2020 ◽  
Author(s):  
Sara Momtaz ◽  
Deborah W. Moncrieff ◽  
Gavin M. Bidelman

Abstract
Children diagnosed with auditory processing disorder (APD) show deficits in processing complex sounds that are associated with difficulties in higher-order language, learning, cognitive, and communicative functions. Amblyaudia (AMB) is a subcategory of APD characterized by abnormally large ear asymmetries in dichotic listening tasks. Here, we examined frequency-specific neural oscillations and functional connectivity via high-density EEG in children with and without AMB during passive listening to nonspeech stimuli. Time-frequency maps of these “brain rhythms” revealed stronger phase-locked beta-gamma (∼35 Hz) oscillations in AMB participants within bilateral auditory cortex for sounds presented to the right ear, suggesting hypersynchronization and an imbalance of auditory neural activity. Brain-behavior correlations revealed that neural asymmetries in cortical responses predicted the larger-than-normal right-ear advantage seen in participants with AMB. Additionally, we found weaker functional connectivity in the AMB group from right to left auditory cortex, despite their stronger neural responses overall. Our results reveal abnormally strong auditory sensory encoding and an imbalance in communication between the cerebral hemispheres (ipsi- to contralateral signaling) in AMB. These neurophysiological changes might underlie the poorer behavioral capacity to integrate information between the two ears in children with AMB.
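The strength of phase-locked oscillations described above is commonly quantified by the inter-trial phase-locking value (PLV): the magnitude of the mean unit phasor across trials at a given time-frequency point. A minimal sketch of the generic definition (not the authors' exact pipeline):

```python
import numpy as np

def phase_locking_value(phases):
    """Inter-trial phase consistency at one time-frequency point:
    the magnitude of the mean unit phasor across trials.
    1 = identical phase on every trial; ~0 = uniformly scattered."""
    phases = np.asarray(phases, dtype=float)
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical per-trial phases (radians) at one ~35 Hz time point:
locked = [0.6, 0.7, 0.65, 0.72, 0.61]      # tightly clustered -> PLV near 1
scattered = np.random.default_rng(0).uniform(0, 2 * np.pi, 100)
print(phase_locking_value(locked))
print(phase_locking_value(scattered))       # near 0 for many random trials
```

Stronger phase locking in the AMB group corresponds to a higher PLV, meaning the oscillation lands at nearly the same phase on every trial.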


2005 ◽  
Vol 94 (1) ◽  
pp. 83-104 ◽  
Author(s):  
Edward L. Bartlett ◽  
Xiaoqin Wang

A sound embedded in an acoustic stream cannot be unambiguously segmented and identified without reference to its stimulus context. To understand the role of stimulus context in cortical processing, we investigated the responses of auditory cortical neurons to 2-sound sequences in awake marmosets, with a focus on stimulus properties other than carrier frequency. Both suppressive and facilitatory modulations of cortical responses were observed by using combinations of modulated tone and noise stimuli. The main findings are as follows. 1) Preceding stimuli could suppress or facilitate responses to succeeding stimuli for durations >1 s. These long-lasting effects were dependent on the duration, sound level, and modulation parameters of the preceding stimulus, in addition to the carrier frequency. They occurred regardless of whether the 2 stimuli were separated by a silent interval. 2) Suppression was often tuned such that preceding stimuli whose parameters were similar to succeeding stimuli produced the strongest suppression. However, the responses of many units could be suppressed, although often weaker, even when the 2 stimuli were dissimilar. In some cases, only a dissimilar preceding stimulus produced suppression in the responses to the succeeding stimulus. 3) In contrast to suppression, facilitation of responses to succeeding stimuli by the preceding stimulus was usually strongest when the 2 stimuli were dissimilar. 4) There was no clear correlation between the firing rate evoked by the preceding stimulus and the change in the firing rate evoked by the succeeding stimulus, indicating that the observed suppression was not simply a result of habituation or spike adaptation. These results demonstrate that persistent modulations of the responses of an auditory cortical neuron to a given stimulus can be induced by preceding stimuli. 
Decreases or increases of responses to the succeeding stimuli are dependent on the spectral, temporal, and intensity properties of the preceding stimulus. This indicates that cortical auditory responses to a sound are not static, but instead depend on the stimulus context in a stimulus-specific manner. The long-lasting impact of stimulus context and the prevalence of facilitation suggest that such cortical response properties are important for auditory processing beyond forward masking, such as for auditory streaming and segregation.


2021 ◽  
Vol 31 (Supplement_2) ◽  
Author(s):  
Cláudia Reis ◽  
Margarida Teixeira

Abstract
Background: The objective of this study was to verify whether greater plasticity of the auditory cortex and greater benefits in auditory processing (better discrimination, attention, and identification of rare stimuli) could be observed in musicians compared to non-musicians, as assessed by the long-latency auditory evoked potential P300 recorded with and without competing noise.
Methods: Twenty individuals were divided into two groups: 8 musicians and 12 controls. P300 values were compared between the two groups, and then between the P300 results with and without competing noise within each group.
Results: Without competing noise, the average amplitude was higher in the group of musicians than in the control group, in both ears. Latency was lower in the control group in the right ear only. With competing noise, the average amplitude was lower in both groups than without competing noise, in both the right and the left ear, and this effect was more pronounced in the group of musicians. The average P300 latency with competing noise increased in both ears, with a greater increase in the group of musicians.
Conclusion: Musicians show a greater cortical inhibition effect compared to non-musicians, demonstrating that the musician’s central auditory system shows greater activation, which can result in better performance in functions such as attention and discrimination as a result of training through musical practice.

