GABAergic neural activity involved in salicylate-induced auditory cortex gain enhancement

Neuroscience ◽  
2011 ◽  
Vol 189 ◽  
pp. 187-198 ◽  
Author(s):  
J. Lu ◽  
E. Lobarinas ◽  
A. Deng ◽  
R. Goodey ◽  
D. Stolzberg ◽  
...  
2021 ◽  
Author(s):  
Anton Filipchuk ◽  
Alain Destexhe ◽  
Brice Bathellier

Abstract: Neural activity in sensory cortex combines stimulus responses and ongoing activity, but it remains unclear whether they reflect the same underlying dynamics or separate processes. Here we show that during wakefulness, the neuronal assemblies evoked by sounds in the auditory cortex and thalamus are specific to the stimulus and distinct from the assemblies observed in ongoing activity. In contrast, during anesthesia, evoked assemblies are indistinguishable from ongoing assemblies in cortex, while they remain distinct in the thalamus. A strong remapping of sensory responses accompanies this dynamical state change produced by anesthesia. Together, these results show that the awake cortex engages dedicated neuronal assemblies in response to sensory inputs, which we suggest is a network correlate of sensory perception.
One-Sentence Summary: Sensory responses in the awake cortex engage specific neuronal assemblies that disappear under anesthesia.
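The comparison at the heart of this abstract, how similar sound-evoked population patterns are to patterns seen in ongoing activity, can be reduced to a similarity measure over population activity vectors. Below is a minimal illustrative sketch, not the authors' analysis; all data are synthetic and the `best_match_similarity` helper is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population activity (neurons x events): each column is one
# single-event population vector.
ongoing = rng.poisson(2.0, size=(50, 200)).astype(float)          # ongoing activity
evoked = ongoing.mean(axis=1, keepdims=True) \
         + rng.normal(0.0, 0.5, size=(50, 40)) + 3.0              # sound-evoked patterns

def best_match_similarity(patterns, reference_set):
    """For each pattern, cosine similarity to its best-matching reference."""
    p = patterns / np.linalg.norm(patterns, axis=0, keepdims=True)
    r = reference_set / np.linalg.norm(reference_set, axis=0, keepdims=True)
    return (p.T @ r).max(axis=1)

# One number per evoked event: how close it comes to *any* ongoing pattern.
# Distinct evoked assemblies (awake cortex) would show lower best-match
# similarity than indistinguishable ones (anesthesia).
sim = best_match_similarity(evoked, ongoing)
```

The point of the sketch is only the shape of the comparison: a per-event best-match score that can be contrasted between brain states.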


2019 ◽  
Author(s):  
Jesyin Lai ◽  
Stephen V. David

Abstract: Chronic vagus nerve stimulation (VNS) can facilitate learning of sensory and motor behaviors. VNS is believed to trigger release of neuromodulators, including norepinephrine and acetylcholine, which can mediate cortical plasticity associated with learning. Most previous work has studied effects of VNS over many days, and less is known about how acute VNS influences neural coding and behavior over the shorter term. To explore this question, we measured effects of VNS on learning of an auditory discrimination over 1-2 days. Ferrets implanted with cuff electrodes on the vagus nerve were trained by classical conditioning on a tone frequency-reward association. One tone was associated with reward while the other was not. The frequencies and reward associations of the tones were changed every two days, requiring learning of a new relationship. When the tones (both rewarded and non-rewarded) were paired with VNS, rates of learning increased on the first day following a change in reward association. To examine VNS effects on auditory coding, we recorded single- and multi-unit neural activity in primary auditory cortex (A1) of passively listening animals following brief periods of VNS (20 trials/session) paired with tones. Because afferent VNS induces changes in pupil size associated with fluctuations in neuromodulation, we also measured pupil size during recordings. After pairing VNS with a neuron's best-frequency (BF) tone, responses in a subpopulation of neurons were reduced. Pairing with an off-BF tone or performing VNS during the inter-trial interval had no effect on responses. We separated the change in A1 activity into two components, one that could be predicted by fluctuations in pupil and one that persisted after VNS and was not accounted for by pupil. The BF-specific reduction in neural responses remained, even after regressing out changes that could be explained by pupil.
In addition, the size of VNS-mediated changes in pupil predicted the magnitude of persistent changes in the neural response. This interaction suggests that changes in neuromodulation associated with arousal gate the long-term effects of VNS on neural activity. Taken together, these results support a role for VNS in auditory learning and help establish VNS as a tool to facilitate neural plasticity.
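The "regressing out pupil" step described above can be sketched with ordinary least squares: fit a pupil-response model on pre-VNS trials, then take the unexplained change after VNS as the persistent component. This is a toy reconstruction under stated assumptions, not the authors' pipeline; all data and effect sizes below are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Simulated data: BF-tone responses before and after VNS pairing, each
# modulated by trial-to-trial pupil size (arousal). The -0.8 term is the
# simulated persistent VNS effect we want to recover.
pupil_pre = rng.normal(0.0, 1.0, n)
pupil_post = rng.normal(0.3, 1.0, n)                       # VNS also shifts pupil
resp_pre = 5.0 + 0.6 * pupil_pre + rng.normal(0, 0.2, n)
resp_post = 5.0 + 0.6 * pupil_post - 0.8 + rng.normal(0, 0.2, n)

# Fit the pupil model on pre-VNS trials only...
X = np.column_stack([np.ones(n), pupil_pre])
beta, *_ = np.linalg.lstsq(X, resp_pre, rcond=None)

# ...then the post-VNS change not explained by pupil is the persistent effect.
predicted_post = beta[0] + beta[1] * pupil_post
persistent = (resp_post - predicted_post).mean()           # close to the simulated -0.8
```

Fitting on pre-VNS trials only matters here: including post-VNS trials in the fit would absorb part of the persistent effect into the regression intercept.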


2002 ◽  
Vol 174 (1-2) ◽  
pp. 19-31 ◽  
Author(s):  
André Rupp ◽  
Stefan Uppenkamp ◽  
Alexander Gutschalk ◽  
Roland Beucker ◽  
Roy D Patterson ◽  
...  

Synapse ◽  
2013 ◽  
Vol 67 (8) ◽  
pp. 455-468 ◽  
Author(s):  
Hideki D. Kawai ◽  
Maggie La ◽  
Ho-An Kang ◽  
Yusuke Hashimoto ◽  
Kevin Liang ◽  
...  

2017 ◽  
Vol 657 ◽  
pp. 171-178 ◽  
Author(s):  
Min Young Lee ◽  
Doo Hee Kim ◽  
Su-Kyoung Park ◽  
Sang Beom Jun ◽  
Yena Lee ◽  
...  

2021 ◽  
Vol 13 ◽  
Author(s):  
Fuxin Ren ◽  
Wen Ma ◽  
Wei Zong ◽  
Ning Li ◽  
Xiao Li ◽  
...  

Presbycusis (PC) is characterized by preferential hearing loss at high frequencies and difficulty in speech recognition in noisy environments. Previous studies have linked PC to cognitive impairment, accelerated cognitive decline and incident Alzheimer's disease. However, the neural mechanisms of cognitive impairment in patients with PC remain unclear. Although resting-state functional magnetic resonance imaging (rs-fMRI) studies have explored low-frequency oscillation (LFO) connectivity or amplitude of PC-related neural activity, it remains unclear whether the abnormalities occur within all frequency bands or within specific frequency bands. Fifty-one PC patients and fifty-one well-matched normal hearing controls participated in this study. The LFO amplitudes were investigated using the amplitude of low-frequency fluctuation (ALFF) at different frequency bands (slow-4 and slow-5). PC patients showed abnormal LFO amplitudes in Heschl's gyrus, the dorsolateral prefrontal cortex (dlPFC), the frontal eye field and key nodes of the speech network exclusively in slow-4, which suggested that abnormal spontaneous neural activity in PC was frequency dependent. Our findings also revealed stronger functional connectivity between the dlPFC and the posterodorsal stream of auditory processing, as well as lower functional coupling between the posterior cingulate cortex (PCC) and key nodes of the default mode network (DMN), both of which were associated with cognitive impairments in PC patients. Our results may reflect cross-modal plasticity and higher-order cognitive engagement of the auditory cortex after partial hearing deprivation. Our findings indicate that frequency-specific analysis of ALFF could provide valuable insights into functional alterations in the auditory cortex and non-auditory regions involved in cognitive impairment associated with PC.
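The band-limited ALFF measure used here has a compact definition: the mean square root of spectral power within a frequency band, with slow-5 conventionally spanning 0.010-0.027 Hz and slow-4 spanning 0.027-0.073 Hz. A minimal sketch on a synthetic BOLD time course (not the authors' preprocessing, which would also include detrending and nuisance regression):

```python
import numpy as np

def alff(signal, fs, band):
    """Amplitude of low-frequency fluctuation: mean sqrt(power) in a band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return np.sqrt(power[mask]).mean()

# Synthetic resting-state time course: TR = 2 s (fs = 0.5 Hz), 240 volumes,
# with a 0.05 Hz fluctuation that falls inside the slow-4 band.
fs, n = 0.5, 240
t = np.arange(n) / fs
rng = np.random.default_rng(2)
bold = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.normal(size=n)

slow5 = alff(bold, fs, (0.010, 0.027))   # slow-5: 0.010-0.027 Hz
slow4 = alff(bold, fs, (0.027, 0.073))   # slow-4: 0.027-0.073 Hz
```

With the injected 0.05 Hz component, the slow-4 amplitude exceeds the slow-5 amplitude, which is the kind of band-specific contrast the study reports.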


2021 ◽  
Author(s):  
Drew Cappotto ◽  
HiJee Kang ◽  
Kongyan Li ◽  
Lucia Melloni ◽  
Jan Schnupp ◽  
...  

Abstract: Recent studies have shown that stimulus history can be decoded via the use of broadband sensory impulses to reactivate mnemonic representations. It has also been shown that predictive mechanisms in the auditory system demonstrate similar tonotopic organization of neural activity as that elicited by the perceived stimuli. However, it remains unclear if mnemonic and predictive information can be decoded from cortical activity simultaneously and from overlapping neural populations. Here, we recorded neural activity using electrocorticography (ECoG) in the auditory cortex of anesthetized rats exposed to repeated stimulus sequences, where events within the sequence were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to broadband impulses at overlapping latencies but linked to largely independent neural populations. We also demonstrate that predictive representations are learned over the course of stimulation at two distinct time scales, reflected in two dissociable time windows of neural activity. These results establish a valuable tool for investigating the neural mechanisms of passive sequence learning, memory encoding, and prediction mechanisms within a single paradigm, and provide novel evidence for learning predictive representations even under anaesthesia.
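Decoding stimulus history from impulse responses amounts to multichannel pattern classification: responses to the broadband probe carry a class-specific trace of the preceding stimulus. The sketch below uses a nearest-centroid decoder on synthetic multichannel data; it is an assumed, simplified stand-in for whatever classifier the study actually used:

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_classes, n_train, n_test = 16, 4, 30, 10

# Synthetic ECoG responses to a broadband impulse: each preceding stimulus
# class leaves its own template across channels, plus trial noise.
templates = rng.normal(0.0, 1.0, (n_classes, n_channels))

def simulate(n_per_class):
    X = np.vstack([templates[c] + 0.5 * rng.normal(size=(n_per_class, n_channels))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# Nearest-centroid decoder: assign each test response to the class whose
# mean training response is closest in Euclidean distance.
centroids = np.vstack([X_train[y_train == c].mean(axis=0)
                       for c in range(n_classes)])
dists = ((X_test[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y_test).mean()
```

Above-chance accuracy (chance here is 0.25) is the operational evidence that the probe response carries information about the preceding stimulus.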


2019 ◽  
Author(s):  
Fabiano Baroni ◽  
Benjamin Morillon ◽  
Agnès Trébuchon ◽  
Catherine Liégeois-Chauvel ◽  
Itsaso Olasagasti ◽  
...  

Abstract: Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding, remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they were listening to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range, and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better phase than power information decoding, and a bimodal spectral profile of information content with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content.
Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.
Author Summary: Like most animal vocalizations, speech results from a pseudo-rhythmic process that reflects the convergence of motor and auditory neural substrates and the natural resonance properties of the vocal apparatus towards efficient communication. Here, we leverage the excellent temporal and spatial resolution of intracranial EEG to demonstrate that neural activity in human early auditory cortical areas during speech perception exhibits a dual-scale spectral profile of power changes, with speech increasing power in low (delta-theta) and high (gamma - high-gamma) frequency ranges, while decreasing power in intermediate (alpha-beta) frequencies. Single-trial multivariate decoding also resulted in a bimodal spectral profile of information content, with better decoding at low and high frequencies than at intermediate ones. From both spectral and informational perspectives, these patterns are consistent with the activity of a relatively simple computational model comprising two reciprocally connected excitatory/inhibitory sub-networks operating at different (low and high) timescales. By combining experimental, decoding and modeling approaches, we provide consistent evidence for the existence, information coding value and underlying neuronal architecture of dual timescale processing in human auditory cortex.
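The model class named in this abstract, two coupled excitatory/inhibitory rate subnetworks with different timescales joined by a negative feedback loop, can be written down in a few lines. The sketch below is a generic Wilson-Cowan-style toy with assumed time constants and weights, not the authors' fitted parameters:

```python
import numpy as np

def simulate(T=5.0, dt=1e-3, drive=0.0):
    """Two E/I rate pairs ('slow' and 'fast' subnetworks) integrated with
    Euler steps; the fast inhibitory unit feeds back onto the slow
    excitatory unit, closing a negative feedback loop."""
    f = lambda x: np.tanh(np.maximum(x, 0.0))       # rate nonlinearity
    tau = np.array([0.100, 0.100, 0.010, 0.010])    # slow E/I, fast E/I (s)
    # State order: [E_slow, I_slow, E_fast, I_fast]. All weights assumed.
    W = np.array([[ 1.2, -1.0,  0.0, -0.5],         # fast I inhibits slow E
                  [ 1.0,  0.0,  0.0,  0.0],
                  [ 0.5,  0.0,  1.2, -1.0],         # slow E drives fast E
                  [ 0.0,  0.0,  1.0,  0.0]])
    x = np.zeros(4)
    trace = []
    for _ in range(int(T / dt)):
        inp = W @ f(x) + np.array([0.5 + drive, 0.0, 0.5 + drive, 0.0])
        x = x + (dt / tau) * (-x + inp)             # leaky rate dynamics
        trace.append(x.copy())
    return np.array(trace)

rates = simulate()
```

The two time constants (100 ms vs 10 ms) are what give the model distinct low- and high-frequency spectral signatures; raising `drive` plays the role of speech input modulating both subnetworks.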

