Neural Coding of Periodicity in Marmoset Auditory Cortex

2010 ◽  
Vol 103 (4) ◽  
pp. 1809-1822 ◽  
Author(s):  
Daniel Bendor ◽  
Xiaoqin Wang

Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and a representation of an acoustical feature that merely correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys (Callithrix jacchus) the neural coding of a periodic acoustic signal's repetition rate, an acoustic feature that covaries with pitch. We first examine whether individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally observed only in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex.
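The jitter manipulation above is easy to make concrete. Below is a minimal sketch in Python/NumPy (the function name and the uniform per-interval jitter parameterization are illustrative assumptions, not the authors' stimulus code) of a click train whose mean repetition rate is held fixed while jitter on each inter-click interval destroys periodicity:

```python
import numpy as np

def click_train(rate_hz, dur_s, fs=44100, jitter_frac=0.0, seed=0):
    """Click train with mean repetition rate rate_hz. jitter_frac > 0
    perturbs each inter-click interval, removing periodicity while
    leaving the mean rate (and overall click density) unchanged."""
    rng = np.random.default_rng(seed)
    period = 1.0 / rate_hz
    n_clicks = int(dur_s * rate_hz)
    # Each interval is the nominal period plus uniform jitter.
    intervals = period * (1 + jitter_frac * rng.uniform(-1, 1, n_clicks))
    times = np.cumsum(intervals)
    signal = np.zeros(int(dur_s * fs))
    idx = (times * fs).astype(int)
    signal[idx[idx < signal.size]] = 1.0
    return signal

periodic = click_train(100, 0.5)                    # evokes a ~100-Hz pitch
aperiodic = click_train(100, 0.5, jitter_frac=0.5)  # same rate, pitch abolished
```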

2002 ◽  
Vol 88 (3) ◽  
pp. 1433-1450 ◽  
Author(s):  
Michael P. Harms ◽  
Jennifer R. Melcher

Sound repetition rate plays an important role in stream segregation, temporal pattern recognition, and the perception of successive sounds as either distinct or fused. This study was aimed at elucidating the neural coding of repetition rate and its perceptual correlates. We investigated the representations of rate in the auditory pathway of human listeners using functional magnetic resonance imaging (fMRI), an indicator of population neural activity. Stimuli were trains of noise bursts presented at rates ranging from low (1–2/s; each burst is perceptually distinct) to high (35/s; individual bursts are not distinguishable). There was a systematic change in the form of fMRI response rate-dependencies from midbrain to thalamus to cortex. In the inferior colliculus, response amplitude increased with increasing rate while response waveshape remained unchanged and sustained. In the medial geniculate body, increasing rate produced an increase in amplitude and a moderate change in waveshape at higher rates (from a sustained shape to one with a moderate peak just after train onset). In auditory cortex (Heschl's gyrus and the superior temporal gyrus), amplitude changed somewhat with rate, but a far more striking change occurred in response waveshape: low rates elicited a sustained response, whereas high rates elicited an unusual phasic response that included prominent peaks just after train onset and offset. The shift in cortical response waveshape from sustained to phasic with increasing rate corresponds to a perceptual shift from individually resolved bursts to fused bursts forming a continuous (but modulated) percept. Thus, at high rates, a train forms a single perceptual "event," the onset and offset of which are delimited by the on and off peaks of phasic cortical responses. While auditory cortex showed a clear, qualitative correlation between perception and response waveshape, the medial geniculate body showed less correlation (since there was less change in waveshape with rate), and the inferior colliculus showed no correlation at all. Overall, our results suggest a population neural representation of the beginning and the end of distinct perceptual events that is weak or absent in the inferior colliculus, begins to emerge in the medial geniculate body, and is robust in auditory cortex.
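The sustained-versus-phasic distinction can be made concrete with a toy forward model. The sketch below (Python; the double-gamma HRF shape, timings, and function names are illustrative assumptions rather than the study's analysis code) predicts the BOLD time course for a noise-burst train under the two hypothesized neural drives: sustained throughout the train, as at low burst rates, or transient peaks at train onset and offset, as at high rates:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical-style double-gamma hemodynamic response (assumed shape)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def predicted_bold(train_dur=30.0, tr=1.0, total=60.0, mode="sustained"):
    """Predicted fMRI response to a noise-burst train.
    mode='sustained': neural drive throughout the train (low burst rates).
    mode='phasic': transient drive at train onset and offset (high rates)."""
    t = np.arange(0.0, total, tr)
    drive = np.zeros_like(t)
    if mode == "sustained":
        drive[t < train_dur] = 1.0
    else:
        drive[0] = 1.0                    # onset peak
        drive[int(train_dur / tr)] = 1.0  # offset peak
    # Convolve the neural drive with the HRF to get the BOLD prediction.
    return np.convolve(drive, hrf(np.arange(0.0, 30.0, tr)))[: t.size]
```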


2020 ◽  
Vol 123 (2) ◽  
pp. 695-706
Author(s):  
Lu Luo ◽  
Na Xu ◽  
Qian Wang ◽  
Liang Li

The central mechanisms underlying binaural unmasking for spectrally overlapping concurrent sounds, which are unresolved in the peripheral auditory system, remain largely unknown. In this study, frequency-following responses (FFRs) to two binaurally presented independent narrowband noises (NBNs) with overlapping spectra were recorded simultaneously in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized rats. The results showed that for both IC FFRs and AC FFRs, introducing an interaural time difference (ITD) disparity between the two concurrent NBNs enhanced the representation fidelity, reflected by increased coherence between the responses evoked by double-NBN stimulation and the responses evoked by single NBNs. The ITD disparity effect varied across frequency bands, being more marked for higher frequency bands in the IC and for lower frequency bands in the AC. Moreover, the coherence between IC responses and AC responses was also enhanced by the ITD disparity, and the enhancement was most prominent for low-frequency bands and for the IC and AC on the same side. These results suggest a critical role of the ITD cue in the neural segregation of spectrotemporally overlapping sounds.

NEW & NOTEWORTHY When two spectrally overlapping narrowband noises are presented at the same time with the same sound-pressure level, they mask each other. Introducing a disparity in interaural time difference between these two narrowband noises improves the accuracy of the neural representation of the individual sounds in both the inferior colliculus and the auditory cortex. Transmission of low-frequency signals from the inferior colliculus to the auditory cortex on the same side is also enhanced, demonstrating the effect of binaural unmasking.
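The representation-fidelity measure is, at its core, spectral coherence between paired recordings. A minimal sketch in Python/SciPy follows (the sampling rate, analysis band, and function name are assumptions for illustration, not the study's pipeline):

```python
import numpy as np
from scipy.signal import coherence

def representation_fidelity(ffr_double, ffr_single, fs=10000, band=(100, 700)):
    """Mean coherence, within a frequency band, between the FFR evoked by
    the two-NBN mixture and the FFR evoked by one NBN alone. Higher values
    indicate a more faithful representation of that NBN in the mixture."""
    f, cxy = coherence(ffr_double, ffr_single, fs=fs, nperseg=1024)
    in_band = (f >= band[0]) & (f <= band[1])
    return cxy[in_band].mean()
```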


2005 ◽  
Vol 94 (4) ◽  
pp. 2263-2274 ◽  
Author(s):  
Jiping Zhang ◽  
Kyle T. Nakamoto ◽  
Leonard M. Kitzes

Sounds commonly occur in sequences, such as in speech. It is therefore important to understand how the occurrence of one sound affects the response to a subsequent sound. We approached this question by determining how a conditioning stimulus alters the response areas of single neurons in the primary auditory cortex (AI) of barbiturate-anesthetized cats. The response areas consisted of responses to stimuli that varied in level at the two ears and were delivered at the characteristic frequency of each cell. A binaural conditioning stimulus was then presented ≥50 ms before each of the stimuli comprising the level response area. An effective preceding stimulus alters the shape and severely reduces the size and response magnitude of the level response area. This ability of the preceding stimulus depends on its proximity in the level domain to the level response area, not on its absolute level or on the size of the response it evokes. Preceding stimuli evoke a nonlinear inhibition across the level response area that results in an increased selectivity of a cortical neuron for its preferred binaural stimuli. The selectivity of AI neurons during the processing of a stream of acoustic stimuli is likely to be restricted to a portion of their level response areas apparent in the tone-alone condition. Thus rather than being static, level response areas are fluid; they can vary greatly in extent, shape, and response magnitude. The dynamic modulation of the level response area and level selectivity of AI neurons might be related to several tasks confronting the central auditory system.
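To illustrate the quantities being compared, the sketch below (Python; the threshold criterion and names are hypothetical, not the authors' analysis) treats a level response area as a 2-D matrix of mean spike counts indexed by the sound level at each ear, and summarizes its size and response magnitude, the two quantities an effective conditioning stimulus is reported to reduce:

```python
import numpy as np

def response_area_stats(spike_counts, threshold=0.2):
    """Summarize a binaural level response area.
    spike_counts: 2-D array of mean spike counts at the cell's CF, indexed
    by (contralateral level, ipsilateral level).
    'size' = number of level combinations driving the cell above a fraction
    of its peak response; 'magnitude' = summed driven response."""
    peak = spike_counts.max()
    driven = spike_counts > threshold * peak
    return {"size": int(driven.sum()),
            "magnitude": float(spike_counts[driven].sum())}
```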


2019 ◽  
Author(s):  
Jesyin Lai ◽  
Stephen V. David

Abstract: Chronic vagus nerve stimulation (VNS) can facilitate learning of sensory and motor behaviors. VNS is believed to trigger release of neuromodulators, including norepinephrine and acetylcholine, which can mediate cortical plasticity associated with learning. Most previous work has studied effects of VNS over many days, and less is known about how acute VNS influences neural coding and behavior over the shorter term. To explore this question, we measured effects of VNS on learning of an auditory discrimination over 1-2 days. Ferrets implanted with cuff electrodes on the vagus nerve were trained by classical conditioning on a tone frequency-reward association. One tone was associated with reward while another tone was not. The frequencies and reward associations of the tones were changed every two days, requiring learning of a new relationship. When the tones (both rewarded and non-rewarded) were paired with VNS, rates of learning increased on the first day following a change in reward association. To examine VNS effects on auditory coding, we recorded single- and multi-unit neural activity in primary auditory cortex (A1) of passively listening animals following brief periods of VNS (20 trials/session) paired with tones. Because afferent VNS induces changes in pupil size associated with fluctuations in neuromodulation, we also measured pupil size during recordings. After pairing VNS with a neuron's best-frequency (BF) tone, responses in a subpopulation of neurons were reduced. Pairing with an off-BF tone or performing VNS during the inter-trial interval had no effect on responses. We separated the change in A1 activity into two components, one that could be predicted by fluctuations in pupil size and one that persisted after VNS and was not accounted for by pupil. The BF-specific reduction in neural responses remained even after regressing out changes that could be explained by pupil. In addition, the size of VNS-mediated changes in pupil predicted the magnitude of persistent changes in the neural response. This interaction suggests that changes in neuromodulation associated with arousal gate the long-term effects of VNS on neural activity. Taken together, these results support a role for VNS in auditory learning and help establish VNS as a tool to facilitate neural plasticity.
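The decomposition into a pupil-predicted component and a persistent component amounts to regressing neural responses on pupil size and examining the residual. A minimal sketch follows (Python; a plain least-squares version of the regression logic described in the abstract, not the authors' code):

```python
import numpy as np

def pupil_corrected_change(resp_pre, resp_post, pupil_pre, pupil_post):
    """Split a VNS-related change in response rate into a pupil-predicted
    part and a persistent residual not explained by pupil.
    Each argument is a 1-D array with one value per trial."""
    resp = np.concatenate([resp_pre, resp_post])
    pupil = np.concatenate([pupil_pre, pupil_post])
    # Least-squares fit of response on pupil (slope + intercept).
    X = np.column_stack([pupil, np.ones_like(pupil)])
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    predicted = X @ beta
    residual = resp - predicted
    n = resp_pre.size
    # Pre-to-post change carried by pupil vs. persisting beyond it.
    pupil_part = predicted[n:].mean() - predicted[:n].mean()
    persistent = residual[n:].mean() - residual[:n].mean()
    return pupil_part, persistent
```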


2017 ◽  
Author(s):  
Krishna C. Puvvada ◽  
Jonathan Z. Simon

Abstract: The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources from peripheral, tonotopically based representations in the auditory nerve into representations based on perceptually distinct auditory objects in auditory cortex. Here, using magnetoencephalography (MEG) recordings from human subjects, both men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in auditory cortex contain predominantly spectrotemporal representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than one in which individual streams are represented separately. In contrast, we also show that higher-order auditory cortical areas represent the attended stream separately, and with significantly higher fidelity, than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object than as separated objects. Taken together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of human auditory cortex.

Significance Statement: Using magnetoencephalography (MEG) recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of auditory cortex. We show that the primary-like areas in auditory cortex use a predominantly spectrotemporal representation of the entire auditory scene, with both attended and ignored speech streams represented with almost equal fidelity. In contrast, we show that higher-order auditory cortical areas represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
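Stimulus reconstruction of this kind is typically a regularized linear backward model mapping lagged sensor data to the speech envelope. The sketch below (Python; the lag count, ridge penalty, and function names are assumptions, and this is a generic decoder rather than the authors' exact pipeline) shows the core fit; reconstruction fidelity is then the correlation between the true and reconstructed envelopes on held-out data:

```python
import numpy as np

def lagged_design(meg, lags):
    """Build a lagged design matrix: each column block is all MEG channels
    at one time lag. meg: (n_times, n_channels)."""
    n_t, n_ch = meg.shape
    X = np.zeros((n_t, n_ch * lags))
    for k in range(lags):
        X[k:, k * n_ch:(k + 1) * n_ch] = meg[: n_t - k]
    return X

def fit_reconstruction_filter(meg, envelope, lags=32, ridge=1e3):
    """Ridge-regression backward model: w = (X'X + lambda*I)^-1 X'y."""
    X = lagged_design(meg, lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ envelope)

def reconstruction_fidelity(meg, envelope, w, lags=32):
    """Correlation between the actual and reconstructed envelopes,
    evaluated on held-out data."""
    return np.corrcoef(envelope, lagged_design(meg, lags) @ w)[0, 1]
```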


eNeuro ◽  
2016 ◽  
Vol 3 (3) ◽  
pp. ENEURO.0071-16.2016 ◽  
Author(s):  
Yonatan I. Fishman ◽  
Christophe Micheyl ◽  
Mitchell Steinschneider

2004 ◽  
Vol 21 (3) ◽  
pp. 331-336 ◽  
Author(s):  
DAVID H. FOSTER ◽  
SÉRGIO M.C. NASCIMENTO ◽  
KINJIRO AMANO

If surfaces in a scene are to be distinguished by their color, their neural representation at some level should ideally vary little with the color of the illumination. Four possible neural codes were considered: von-Kries-scaled cone responses from single points in a scene, spatial ratios of cone responses produced by light reflected from pairs of points, and these quantities obtained with sharpened (opponent-cone) responses. The effectiveness of these codes in identifying surfaces was quantified by information-theoretic measures. Data were drawn from a sample of 25 rural and urban scenes imaged with a hyperspectral camera, which provided estimates of surface reflectance at 10-nm intervals at each of 1344 × 1024 pixels for each scene. In computer simulations, scenes were illuminated separately by daylights of correlated color temperatures 4000 K, 6500 K, and 25,000 K. Points were sampled randomly in each scene and identified according to each of the codes. It was found that the maximum information preserved under illuminant changes varied with the code, but for a particular code it was remarkably stable across the different scenes. The standard deviation over the 25 scenes was, on average, approximately 1 bit, suggesting that the neural coding of surface color can be optimized independent of location for any particular range of illuminants.
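The invariance property underlying the ratio codes is easy to verify numerically. In the sketch below (Python; a toy check under the idealized assumption that an illuminant change scales each cone class by a single factor across the scene), spatial cone-excitation ratios are unchanged by the illuminant even though the raw responses are not:

```python
import numpy as np

def cone_ratio_code(cone_a, cone_b):
    """Spatial ratios of cone excitations from two scene points
    (length-3 arrays of L, M, S responses)."""
    return cone_a / cone_b

def von_kries_code(cone, cone_reference):
    """Von Kries scaling: each cone response divided by the response
    to a reference, e.g., the spatial mean of the scene."""
    return cone / cone_reference

# Toy check: an illuminant change modeled as per-cone-class scaling
# leaves the spatial ratios exactly invariant.
scale = np.array([1.2, 1.0, 0.7])  # idealized daylight change
a = np.array([3.0, 2.0, 1.0])      # cone responses at point 1
b = np.array([1.5, 1.0, 2.0])      # cone responses at point 2
assert np.allclose(cone_ratio_code(a * scale, b * scale),
                   cone_ratio_code(a, b))
```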


2004 ◽  
Vol 27 (5) ◽  
pp. 700-702
Author(s):  
Michael W. Spratling

Page is to be congratulated for challenging some misconceptions about neural representation. However, his target article, and the commentaries on it, highlight that the terms "local" and "distributed" are open to misinterpretation. These terms provide a poor description of neural coding strategies, and a better taxonomy might resolve some of the issues.


2019 ◽  
Vol 64 (4) ◽  
pp. 481-493 ◽  
Author(s):  
Robert Kühler ◽  
Markus Weichenberger ◽  
Martin Bauer ◽  
Johannes Hensel ◽  
Rüdiger Brühl ◽  
...  

Abstract: As airborne ultrasound can be found in many technical applications and everyday situations, the question of whether sounds at these frequencies can be heard by human beings, or whether they present a risk to the hearing system, is of great practical relevance. To study these issues objectively, the monaural hearing threshold in the frequency range from 14 to 24 kHz was determined for 26 test subjects between 19 and 33 years of age using pure tone audiometry. The hearing threshold values increased strongly with increasing frequency up to around 21 kHz, followed by a range with a smaller slope toward 24 kHz. The number of subjects who could respond positively to the threshold measurements decreased dramatically above 21 kHz. Brain activation was then measured by means of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), using acoustic stimuli at the same frequencies with sound pressure levels (SPLs) above and below the individual threshold. No auditory cortex activation was found for levels below the threshold. Although test subjects reported audible sounds above the threshold, no brain activity was identified in the above-threshold case under the current experimental conditions, except at the highest sensation level, which was presented at the lowest test frequency.

