Complexity and temporal dynamics of frequency coding in the awake rat auditory cortex

2003 ◽  
Vol 18 (9) ◽  
pp. 2638-2652 ◽  
Author(s):  
Bernhard H. Gaese ◽  
Joachim Ostwald

Neuroscience ◽  
2021 ◽  
Vol 455 ◽  
pp. 79-88
Author(s):  
Pei-Run Song ◽  
Yu-Ying Zhai ◽  
Yu-Mei Gong ◽  
Xin-Yu Du ◽  
Jie He ◽  
...  

2020 ◽  
Author(s):  
Jean-Pierre R. Falet ◽  
Jonathan Côté ◽  
Veronica Tarka ◽  
Zaida-Escila Martinez-Moreno ◽  
Patrice Voss ◽  
...  

Abstract
We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, this method estimates via reverse correlation the spectrotemporal receptive fields (STRF) in response to a dense pure tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortex surface. We show that several neuronal populations can be identified by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables the analysis of complex temporal dynamics of auditory processing such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the data needed to generate tonotopic maps is significantly less for MEG than for other neuroimaging tools that acquire BOLD-like signals.
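The reverse-correlation approach described in this abstract can be sketched as a spike-triggered average: the STRF is estimated by averaging the spectrogram history that precedes each spike. The following is a minimal illustrative sketch, not the authors' implementation; the stimulus representation, window length, and all variable names are assumptions.

```python
# Illustrative sketch of STRF estimation by reverse correlation
# (spike-triggered averaging). Shapes and names are assumptions.
import numpy as np

def strf_reverse_correlation(stimulus, spikes, n_lags):
    """Estimate an STRF as the spike-triggered average.

    stimulus : (n_timebins, n_freqs) spectrogram of the dense tone stimulus
    spikes   : (n_timebins,) spike counts per time bin
    n_lags   : number of time lags (stimulus history window) in the STRF
    """
    n_t, n_f = stimulus.shape
    strf = np.zeros((n_lags, n_f))
    total = 0.0
    for t in range(n_lags, n_t):
        if spikes[t] > 0:
            # accumulate the stimulus history preceding each spike,
            # weighted by the spike count in that bin
            strf += spikes[t] * stimulus[t - n_lags:t, :]
            total += spikes[t]
    return strf / max(total, 1.0)

# toy example: a simulated unit tuned to frequency bin 3 with a 2-bin latency
rng = np.random.default_rng(0)
stim = rng.normal(size=(5000, 8))
rate = np.clip(0.1 + 0.5 * np.roll(stim[:, 3], 2), 0, None)
spk = rng.poisson(rate)
strf = strf_reverse_correlation(stim, spk, n_lags=5)
# the recovered STRF peaks in the tuned frequency bin
print(int(np.argmax(strf.max(axis=0))))  # 3
```

In practice the tonotopic maps described above would then be built by repeating this per source location and taking each STRF's best frequency.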


2016 ◽  
Author(s):  
Liberty S. Hamilton ◽  
Erik Edwards ◽  
Edward F. Chang

Abstract
To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals, including phonetic and prosodic cues. Equally important is the detection of acoustic cues that give structure and context to the information we hear, such as sentence boundaries. How the brain organizes this information processing is unknown. Here, using data-driven computational methods on an extensive set of high-density intracranial recordings, we reveal a large-scale partitioning of the entire human speech cortex into two spatially distinct regions that detect important cues for parsing natural speech. These caudal (Zone 1) and rostral (Zone 2) regions work in parallel to detect onsets and prosodic information, respectively, within naturally spoken sentences. In contrast, local processing within each region supports phonetic feature encoding. These findings demonstrate a fundamental organizational property of the human auditory cortex that has been previously unrecognized.


2016 ◽  
Vol 45 ◽  
pp. 10-22 ◽  
Author(s):  
Björn Herrmann ◽  
Molly J. Henry ◽  
Ingrid S. Johnsrude ◽  
Jonas Obleser

2014 ◽  
Vol 111 (11) ◽  
pp. 2244-2263 ◽  
Author(s):  
Brian J. Malone ◽  
Brian H. Scott ◽  
Malcolm N. Semple

Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing.
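The abstract's central comparison, that spike-timing information outperforms average-rate information when discriminating stimuli, can be illustrated with a simple template classifier. This is a hedged sketch of the general idea only (binned spike trains, Euclidean nearest-template assignment), not the authors' actual spike train classification method.

```python
# Minimal sketch of timing-based vs. rate-based spike-train classification.
# Binned templates and Euclidean distance are illustrative assumptions.
import numpy as np

def classify(train, templates, use_timing=True):
    """Assign a binned spike train to the nearest stimulus template.

    train     : (n_bins,) binned spike counts for one trial
    templates : (n_stimuli, n_bins) mean response per stimulus
    use_timing: if False, collapse each response to its total spike
                count, i.e., a pure rate code
    """
    if use_timing:
        d = np.linalg.norm(templates - train, axis=1)
    else:
        d = np.abs(templates.sum(axis=1) - train.sum())
    return int(np.argmin(d))

# toy example: two stimuli with identical total rates but opposite timing
t_a = np.array([4.0, 0.0, 4.0, 0.0])   # responds in early bins
t_b = np.array([0.0, 4.0, 0.0, 4.0])   # opposite phase, same total count
trial = np.array([3.0, 1.0, 4.0, 0.0])  # a noisy trial of stimulus A
print(classify(trial, np.stack([t_a, t_b]), use_timing=True))   # 0
print(classify(trial, np.stack([t_a, t_b]), use_timing=False))  # 0 by tie-break:
# both rate distances are zero, so rate alone cannot separate these stimuli
```

This toy case mirrors the abstract's point: when total spike counts are matched across stimuli, only the temporal response pattern carries discriminative information.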


Neuroscience ◽  
1993 ◽  
Vol 56 (1) ◽  
pp. 61-74 ◽  
Author(s):  
B. Hars ◽  
C. Maho ◽  
J.-M. Edeline ◽  
E. Hennevin

2019 ◽  
Author(s):  
Ruiye Ni ◽  
David A. Bender ◽  
Dennis L. Barbour

Abstract
The ability to process speech signals in challenging listening environments is critical for speech perception. Great effort has been made to reveal the underlying single-unit encoding mechanisms. However, substantial variability is typically observed in single-unit responses, and the population coding mechanism has yet to be revealed. In this study, we aimed to examine how a population of neurons encodes behaviorally relevant signals subject to changes in intensity and signal-to-noise ratio (SNR). We recorded single-unit activity from the primary auditory cortex of awake common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise (WGN) and vocalization babble (Babble). Pooling all single units together, pseudo-population analysis showed that the intra- and inter-trajectory angle evolutions of the population neural responses track vocalization identity and intensity/SNR, respectively. The ability of the trajectory to track vocalization attributes was degraded to different degrees by the different noises. Discrimination performance, evaluated with neural response classifiers, revealed that a finer optimal temporal resolution and a longer time scale of temporal dynamics were needed for vocalizations in noise than for vocalizations at multiple intensities. The ability of population responses to discriminate between different vocalizations was mostly retained above the detection threshold.

Significance Statement
How our brain excels at precise acoustic signal encoding in noisy environments is of great interest to scientists. Relatively few studies have tackled this question from the perspective of neural population responses. Population analysis reveals the underlying neural encoding of complex acoustic stimuli, based on a pool of single units, via vector coding. We suggest that spatial population response vectors are one important way for neurons to integrate multiple attributes of natural acoustic signals, specifically marmosets' vocalizations.
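The trajectory-angle idea in this abstract, treating the pooled single units as a population state vector per time bin and comparing trajectories via the angles between them, can be sketched as follows. All array shapes and names here are illustrative assumptions, not the authors' analysis pipeline.

```python
# Hypothetical sketch of a pseudo-population trajectory-angle analysis:
# each row is the population state (one entry per pooled unit) in one
# time bin; angles between trajectories index stimulus separability.
import numpy as np

def trajectory_angles(resp_a, resp_b):
    """Angle (radians) between two population trajectories per time bin.

    resp_a, resp_b : (n_timebins, n_units) trial-averaged population responses
    """
    dot = np.sum(resp_a * resp_b, axis=1)
    norm = np.linalg.norm(resp_a, axis=1) * np.linalg.norm(resp_b, axis=1)
    # clip guards against round-off outside arccos's domain
    cos = np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0)
    return np.arccos(cos)

# toy example: two vocalizations driving orthogonal population states
# stay 90 degrees apart across all time bins
a = np.tile([1.0, 0.0, 0.0], (4, 1))
b = np.tile([0.0, 1.0, 0.0], (4, 1))
angles = trajectory_angles(a, b)
print(np.degrees(angles))  # [90. 90. 90. 90.]
```

Under this framing, noise degrading the representation would show up as the inter-trajectory angles collapsing toward zero, which is one way to read the abstract's statement that tracking was "degraded to different degrees by the different noises."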

