Population activity in auditory cortex of the awake rat revealed by recording with dense microelectrode array

Author(s): Takahiro Noda, Ryohei Kanzaki, Hirokazu Takahashi
Neuroscience, 2021, Vol. 455, pp. 79-88
Author(s): Pei-Run Song, Yu-Ying Zhai, Yu-Mei Gong, Xin-Yu Du, Jie He, ...

Neuroscience, 1993, Vol. 56 (1), pp. 61-74
Author(s): B. Hars, C. Maho, J.-M. Edeline, E. Hennevin

2019
Author(s): Zac Bowen, Daniel E. Winkowski, Saurav Seshadri, Dietmar Plenz, Patrick O. Kanold

Abstract: The primary auditory cortex processes acoustic sequences for the perception of behaviorally meaningful sounds such as speech. Sound information arrives at its input layer 4, from where activity propagates to associative layer 2/3. It is currently not known whether there is a particular organization of neuronal population activity that is stable across layers and sound levels during sound processing. We used in vivo 2-photon imaging of pyramidal neurons in cortical layers L4 and L2/3 of mouse A1 to characterize the populations of neurons that were active spontaneously, i.e. in the absence of a sound stimulus, and those recruited by single-frequency tonal stimuli at different sound levels. Single-frequency sounds recruited neurons of widely ranging frequency selectivity in both layers. We defined neural ensembles as the sets of neurons active within successive temporal windows at the temporal resolution of our imaging. In both layers, neuronal ensembles were highly variable in size during spontaneous activity as well as during sound presentation. Ensemble sizes were distributed according to power laws, the hallmark of neuronal avalanches, and were similar across sound levels. Avalanches activated by sound were composed of neurons with diverse tuning preferences, yet their selectivity was independent of avalanche size. Thus, spontaneous and evoked activity in both L4 and L2/3 of A1 are composed of neuronal avalanches with similar power-law relationships. Our results demonstrate that network principles linked to maximal dynamic range, optimal information transfer, and matched complexity between L4 and L2/3 shape population activity in auditory cortex.
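
As a concrete illustration of the avalanche analysis summarized above, the sketch below defines avalanches on a binary activity raster (runs of consecutive imaging frames with at least one active neuron) and estimates a power-law exponent for their sizes. The synthetic raster, the activity rate, and the choice of s_min are hypothetical placeholders for illustration, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary raster (neurons x imaging frames); in practice this would be
# thresholded 2-photon activity of L4 or L2/3 neurons. Rate is made up.
n_neurons, n_frames = 200, 5000
raster = rng.random((n_neurons, n_frames)) < 0.005

# An "avalanche" is a run of consecutive frames with at least one active neuron,
# bounded by silent frames; its size is the total number of active neurons in the run.
active_per_frame = raster.sum(axis=0)
sizes, current = [], 0
for count in active_per_frame:
    if count > 0:
        current += count
    elif current > 0:
        sizes.append(current)
        current = 0
if current > 0:
    sizes.append(current)
sizes = np.asarray(sizes)

# Discrete maximum-likelihood estimate of the power-law exponent
# (Clauset-style approximation) for sizes >= s_min.
s_min = 1
s = sizes[sizes >= s_min]
alpha = 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))
print(f"{len(s)} avalanches, estimated exponent alpha ~ {alpha:.2f}")
```

A fit like this would typically be compared against alternative distributions and shuffled controls before claiming avalanche statistics; the snippet only shows the size definition and the exponent estimate.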


2018
Author(s): Dmitry Kobak, Jose L. Pardo-Vazquez, Mafalda Valente, Christian Machens, Alfonso Renart

Abstract: The accuracy of the neural code depends on the relative embedding of signal and noise in the activity of neural populations. Despite a wealth of theoretical work on population codes, there are few empirical characterisations of the high-dimensional signal and noise subspaces. We studied the geometry of population codes in the rat auditory cortex across brain states along the activation-inactivation continuum, using sounds varying in difference and mean level across the ears. As the cortex becomes more activated, single-hemisphere populations go from preferring contralateral loud sounds to a symmetric preference across lateralisations and intensities, gain modulation effectively disappears, and the signal and noise subspaces become approximately orthogonal to each other and to the direction corresponding to global activity modulations. Level-invariant decoding of sound lateralisation also becomes possible in the active state. Our results provide an empirical foundation for the geometry and state-dependence of cortical population codes.
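
A minimal sketch of how signal and noise subspaces, and their mutual orthogonality, can be quantified is shown below. The synthetic responses, the choice of three principal components per subspace, and the use of principal angles are assumptions made for illustration; they are not the analysis reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial-resolved population responses: (stimuli, trials, neurons).
# In the study these would be recorded single-hemisphere population responses;
# here they are synthetic stand-ins.
n_stim, n_trials, n_neurons = 8, 50, 60
X = rng.normal(size=(n_stim, n_trials, n_neurons))

# Signal subspace: top principal components of the stimulus-conditioned means.
means = X.mean(axis=1)                                   # (stimuli, neurons)
means_centered = means - means.mean(axis=0)
U_sig = np.linalg.svd(means_centered, full_matrices=False)[2][:3].T

# Noise subspace: top principal components of the trial-to-trial residuals.
residuals = (X - means[:, None, :]).reshape(-1, n_neurons)
residuals -= residuals.mean(axis=0)
U_noise = np.linalg.svd(residuals, full_matrices=False)[2][:3].T

# Principal angles between the two subspaces: their cosines are the singular
# values of U_sig^T U_noise; angles near 90 deg indicate near-orthogonality.
cosines = np.linalg.svd(U_sig.T @ U_noise, compute_uv=False)
angles_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
print("principal angles (deg):", np.round(angles_deg, 1))
```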


2019
Author(s): Christopher Heelan, Jihun Lee, Ronan O’Shea, David M. Brandman, Wilson Truccolo, ...

Abstract: Direct electronic communication with sensory areas of the neocortex is a challenging ambition for brain-computer interfaces. Here, we report the first successful neural decoding of English words with high intelligibility from intracortical spike-based neural population activity recorded from the secondary auditory cortex of macaques. We acquired 96-channel full-broadband population recordings using intracortical microelectrode arrays in the rostral and caudal parabelt regions of the superior temporal gyrus (STG). We leveraged a new neural processing toolkit to investigate the effects of decoding algorithm, neural preprocessing, audio representation, channel count, and array location on neural decoding performance. The results illuminated a view of the auditory cortex as a spatially distributed network and a general-purpose processor of complex sounds. The presented spike-based machine-learning neural decoding approach may further be useful in informing future encoding strategies to deliver direct auditory percepts to the brain as specific patterns of microstimulation.
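
The decoding in this work relies on a dedicated neural processing toolkit that is not reproduced here. As a rough, hypothetical sketch of the general setup only (binned spike counts linearly mapped to an audio feature representation), one could fit a ridge-regression decoder as below; all data, dimensions, and the regularisation strength are made up for illustration and do not reflect the authors' method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: binned spike counts (time bins x channels) and a target
# audio representation (time bins x spectral features), both synthetic.
n_bins, n_channels, n_features = 2000, 96, 32
spikes = rng.poisson(2.0, size=(n_bins, n_channels)).astype(float)
audio = spikes @ rng.normal(size=(n_channels, n_features)) \
        + rng.normal(size=(n_bins, n_features))

# Simple block split into training and held-out bins.
train, test = slice(0, 1500), slice(1500, None)

# Ridge-regression decoder (closed form): W = (X^T X + lam I)^(-1) X^T Y.
lam = 1.0
X, Y = spikes[train], audio[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Evaluate by correlating predicted and actual features on held-out bins.
pred = spikes[test] @ W
r = [np.corrcoef(pred[:, j], audio[test][:, j])[0, 1] for j in range(n_features)]
print(f"mean held-out correlation across features: {np.mean(r):.2f}")
```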

