Auditory Cortex Tracks Both Auditory and Visual Stimulus Dynamics Using Low-Frequency Neuronal Phase Modulation

PLoS Biology ◽  
2010 ◽  
Vol 8 (8) ◽  
pp. e1000445 ◽  
Author(s):  
Huan Luo ◽  
Zuxiang Liu ◽  
David Poeppel


2013 ◽
Vol 25 (2) ◽  
pp. 175-187 ◽  
Author(s):  
Jihoon Oh ◽  
Jae Hyung Kwon ◽  
Po Song Yang ◽  
Jaeseung Jeong

Neural responses in early sensory areas are influenced by top–down processing. In the visual system, early visual areas have been shown to actively participate in top–down processing based on their topographical properties. Although it has been suggested that the auditory cortex is involved in top–down control, functional evidence of topographic modulation is still lacking. Here, we show that mental auditory imagery for familiar melodies induces significant activation in the frequency-responsive areas of the primary auditory cortex (PAC). This activation is related to the characteristics of the imagery: when subjects were asked to imagine high-frequency melodies, we observed increased activation in the high- versus low-frequency response area; when the subjects were asked to imagine low-frequency melodies, the opposite was observed. Furthermore, among the tonotopic subfields of the PAC, we found that the frequency-related modulation was more closely associated with area A1 than with area R. Our findings suggest that top–down processing in the auditory cortex relies on a mechanism similar to that used in the perception of external auditory stimuli, which is comparable to early visual systems.
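A minimal sketch, not the authors' analysis pipeline, of the region-of-interest contrast the abstract implies: comparing imagery-evoked activation between high- and low-frequency-responsive voxels of the PAC. The ROI masks, beta maps, and function name are hypothetical inputs for illustration.

```python
import numpy as np

def imagery_frequency_modulation(beta_high_imagery, beta_low_imagery,
                                 high_freq_roi, low_freq_roi):
    """beta_*: 1-D arrays of voxelwise activation estimates (one per voxel);
    *_roi: boolean masks selecting frequency-responsive voxels."""
    # Mean activation in each ROI for each imagery condition
    hi_in_hi = beta_high_imagery[high_freq_roi].mean()
    hi_in_lo = beta_high_imagery[low_freq_roi].mean()
    lo_in_hi = beta_low_imagery[high_freq_roi].mean()
    lo_in_lo = beta_low_imagery[low_freq_roi].mean()
    # Interaction term: positive when imagined frequency preferentially drives
    # the matching tonotopic region (high > low in the high-frequency ROI, and
    # the reverse in the low-frequency ROI)
    return (hi_in_hi - hi_in_lo) - (lo_in_hi - lo_in_lo)

# Toy usage with random data standing in for beta maps
rng = np.random.default_rng(0)
n_vox = 1000
high_roi = rng.random(n_vox) < 0.3
low_roi = ~high_roi
print(imagery_frequency_modulation(rng.normal(size=n_vox), rng.normal(size=n_vox),
                                   high_roi, low_roi))
```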


2020 ◽  
Vol 123 (2) ◽  
pp. 695-706
Author(s):  
Lu Luo ◽  
Na Xu ◽  
Qian Wang ◽  
Liang Li

The central mechanisms underlying binaural unmasking for spectrally overlapping concurrent sounds, which are unresolved in the peripheral auditory system, remain largely unknown. In this study, frequency-following responses (FFRs) to two binaurally presented independent narrowband noises (NBNs) with overlapping spectra were recorded simultaneously in the inferior colliculus (IC) and auditory cortex (AC) in anesthetized rats. The results showed that for both IC FFRs and AC FFRs, introducing an interaural time difference (ITD) disparity between the two concurrent NBNs enhanced the representation fidelity, reflected by increased coherence between the responses evoked by double-NBN stimulation and the responses evoked by single NBNs. The ITD disparity effect varied across frequency bands, being more marked for higher frequency bands in the IC and lower frequency bands in the AC. Moreover, the coherence between IC responses and AC responses was also enhanced by the ITD disparity, and the enhancement was most prominent for low-frequency bands and for the IC and AC on the same side. These results suggest a critical role of the ITD cue in the neural segregation of spectrotemporally overlapping sounds.

NEW & NOTEWORTHY When two spectrally overlapping narrowband noises are presented at the same time at the same sound-pressure level, they mask each other. Introducing a disparity in interaural time difference between these two narrowband noises improves the accuracy of the neural representation of the individual sounds in both the inferior colliculus and the auditory cortex. The low-frequency signal transformation from the inferior colliculus to the auditory cortex on the same side is also enhanced, showing the effect of binaural unmasking.
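A minimal sketch, not the authors' code, of the fidelity measure described above: magnitude-squared coherence between the response to two concurrent NBNs and the response to one NBN alone. The sampling rate, band of interest, and synthetic signals are assumptions for illustration.

```python
import numpy as np
from scipy.signal import coherence

fs = 2000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic single-NBN response and a double-NBN response that carries the same
# component plus interference from the second noise
ffr_single = np.sin(2 * np.pi * 300 * t) + 0.5 * rng.standard_normal(t.size)
ffr_double = ffr_single + 0.8 * rng.standard_normal(t.size)

f, cxy = coherence(ffr_single, ffr_double, fs=fs, nperseg=512)
band = (f >= 200) & (f <= 400)   # frequency band of interest (assumed)
print("mean coherence in band:", cxy[band].mean())
```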


2018 ◽  
Author(s):  
Christian D. Márton ◽  
Makoto Fukushima ◽  
Corrie R. Camalier ◽  
Simon R. Schultz ◽  
Bruno B. Averbeck

Abstract Predictive coding is a theoretical framework that provides a functional interpretation of top-down and bottom-up interactions in sensory processing. The theory suggests that specific frequency bands relay bottom-up and top-down information (e.g., "γ up, β down"). But it remains unclear whether this notion generalizes to cross-frequency interactions. Furthermore, most of the evidence so far comes from visual pathways. Here we examined cross-frequency coupling across four sectors of the auditory hierarchy in the macaque. We computed two measures of cross-frequency coupling, phase-amplitude coupling (PAC) and amplitude-amplitude coupling (AAC). Our findings revealed distinct patterns for bottom-up and top-down information processing among cross-frequency interactions. Both top-down and bottom-up processing made prominent use of low frequencies: low-to-low frequency (θ, α, β) and low frequency-to-high-γ couplings were predominantly top-down, while low frequency-to-low-γ couplings were predominantly bottom-up. These patterns were largely preserved across coupling types (PAC and AAC) and across stimulus types (natural and synthetic auditory stimuli), suggesting they are a general feature of information processing in auditory cortex. Moreover, our findings showed that low-frequency PAC alternated between predominantly top-down and predominantly bottom-up over time. Altogether, this suggests that sensory information need not be propagated along separate frequencies upwards and downwards. Rather, information can be unmixed by having low frequencies couple to distinct frequency ranges in the target region, and by alternating top-down and bottom-up processing over time.

Significance The brain consists of highly interconnected cortical areas, yet the patterns of directional cortical communication are not fully understood, in particular with regard to interactions between different signal components across frequencies. We employed a unified, computationally advantageous Granger-causal framework to examine bidirectional cross-frequency interactions across four sectors of the auditory cortical hierarchy in macaques. Our findings extend the view of cross-frequency interactions in auditory cortex, suggesting they also play a prominent role in top-down processing. Our findings also suggest that information need not be propagated along separate channels up and down the cortical hierarchy, with important implications for theories of information processing in the brain such as predictive coding.
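A minimal illustration, not the paper's Granger-causal framework, of the two coupling measures named above: PAC estimated with a mean-vector-length modulation index, and AAC as the correlation of band-limited envelopes. The band edges and toy signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x_low, x_high, fs, low_band=(4, 8), high_band=(60, 120)):
    """PAC: does the phase of a low-frequency rhythm modulate high-gamma amplitude?"""
    phase = np.angle(hilbert(bandpass(x_low, *low_band, fs)))
    amp = np.abs(hilbert(bandpass(x_high, *high_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

def aac(x_a, x_b, fs, band_a=(4, 8), band_b=(25, 45)):
    """AAC: correlation between the envelopes of two frequency bands."""
    env_a = np.abs(hilbert(bandpass(x_a, *band_a, fs)))
    env_b = np.abs(hilbert(bandpass(x_b, *band_b, fs)))
    return np.corrcoef(env_a, env_b)[0, 1]

# Toy signal: theta phase modulating gamma amplitude, plus noise
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + 0.5 * theta) * np.sin(2 * np.pi * 80 * t)
lfp = theta + gamma + 0.5 * rng.standard_normal(t.size)
print("PAC:", pac_mvl(lfp, lfp, fs), " AAC:", aac(lfp, lfp, fs))
```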


2018 ◽  
Vol 32 (23) ◽  
pp. 1850276 ◽  
Author(s):  
Baozhu Cheng ◽  
Hong Hou ◽  
Nansha Gao

We introduce a rigid structure into acoustic metasurface design: the proposed labyrinth structure is based on equivalent medium theory, with the different media replaced by curled labyrinth channels. Layered media theory and equivalent medium theory are combined to design an arbitrary acoustic metasurface structure. The acoustic metasurface studied in this paper realizes simultaneous phase modulation and energy attenuation in air; the effective phase modulation range covers 30[Formula: see text] to 90[Formula: see text], and the energy attenuation is over 40%. According to layered media theory, which can modulate the propagation direction of the acoustic wave, a metasurface with the same function can also be applied to the underwater case. The corresponding simulation results are calculated by finite element analysis (FEA). Finally, by introducing the curled labyrinth theory, an underwater acoustic metasurface with simultaneous phase modulation and energy attenuation is designed and verified. This work has potential applications in rigid underwater acoustic metasurface designs with low frequency, adjustable direction, and sound energy attenuation.
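A minimal sketch of the layered-media idea the abstract invokes: the transfer matrix of a single fluid layer yields a complex transmission coefficient whose argument is the imparted phase shift and whose squared magnitude is the transmitted energy fraction. The material values below are illustrative placeholders, not the paper's labyrinth parameters.

```python
import numpy as np

def layer_transmission(f, d, c_layer, rho_layer, c0=343.0, rho0=1.21):
    """Complex transmission through a fluid layer of thickness d (m) between
    identical half-spaces of the background fluid (air by default)."""
    k = 2 * np.pi * f / c_layer          # wavenumber inside the layer
    Z = rho_layer * c_layer              # layer characteristic impedance
    Z0 = rho0 * c0                       # background impedance
    m11 = np.cos(k * d)
    m12 = 1j * Z * np.sin(k * d)
    m21 = 1j * np.sin(k * d) / Z
    m22 = np.cos(k * d)
    # Standard plane-wave result for a single layer between matched half-spaces
    return 2.0 / (m11 + m12 / Z0 + Z0 * m21 + m22)

t = layer_transmission(f=1000.0, d=0.05, c_layer=120.0, rho_layer=1.21)
print("phase shift (deg):", np.degrees(np.angle(t)))
print("energy transmission:", abs(t) ** 2)
```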


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
K. J. Forseth ◽  
G. Hickok ◽  
P. S. Rollo ◽  
N. Tandon

Abstract Spoken language, both perception and production, is thought to be facilitated by an ensemble of predictive mechanisms. We obtain intracranial recordings in 37 patients using depth probes implanted along the anteroposterior extent of the supratemporal plane during rhythm listening, speech perception, and speech production. These reveal two predictive mechanisms in early auditory cortex with distinct anatomical and functional characteristics. The first, localized to bilateral Heschl’s gyri and indexed by low-frequency phase, predicts the timing of acoustic events. The second, localized to planum temporale only in language-dominant cortex and indexed by high-gamma power, shows a transient response to acoustic stimuli that is uniquely suppressed during speech production. Chronometric stimulation of Heschl’s gyrus selectively disrupts speech perception, while stimulation of planum temporale selectively disrupts speech production. This work illuminates the fundamental acoustic infrastructure—both architecture and function—for spoken language, grounding cognitive models of speech perception and production in human neurobiology.
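A minimal sketch, not the authors' pipeline, of the two electrophysiological indices the abstract contrasts: low-frequency phase consistency across trials (inter-trial coherence, ITC) and event-related high-gamma power. The band edges, trial structure, and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_analytic(x, lo, hi, fs, order=4):
    """Analytic signal of x band-passed to [lo, hi] Hz."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return hilbert(sosfiltfilt(sos, x))

fs = 1000.0
rng = np.random.default_rng(3)
n_trials, n_samp = 50, 1000
t = np.arange(n_samp) / fs
# Synthetic stimulus-locked trials: a phase-locked 3 Hz component plus noise
trials = np.sin(2 * np.pi * 3 * t) + rng.standard_normal((n_trials, n_samp))

# Low-frequency phase consistency across trials (1 = perfectly phase-locked)
low = np.array([band_analytic(tr, 1, 8, fs) for tr in trials])
itc = np.abs(np.mean(low / np.abs(low), axis=0))

# High-gamma power per trial
hg = np.array([np.abs(band_analytic(tr, 70, 150, fs)) ** 2 for tr in trials])
print("peak ITC:", itc.max(), " mean high-gamma power:", hg.mean())
```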


2021 ◽  
Vol 13 ◽  
Author(s):  
Fuxin Ren ◽  
Wen Ma ◽  
Wei Zong ◽  
Ning Li ◽  
Xiao Li ◽  
...  

Presbycusis (PC) is characterized by preferential hearing loss at high frequencies and difficulty in speech recognition in noisy environments. Previous studies have linked PC to cognitive impairment, accelerated cognitive decline, and incident Alzheimer's disease. However, the neural mechanisms of cognitive impairment in patients with PC remain unclear. Although resting-state functional magnetic resonance imaging (rs-fMRI) studies have explored low-frequency oscillation (LFO) connectivity or the amplitude of PC-related neural activity, it remains unclear whether the abnormalities occur across all frequency bands or within specific frequency bands. Fifty-one PC patients and fifty-one well-matched normal-hearing controls participated in this study. LFO amplitudes were investigated using the amplitude of low-frequency fluctuation (ALFF) in different frequency bands (slow-4 and slow-5). PC patients showed abnormal LFO amplitudes in Heschl's gyrus, the dorsolateral prefrontal cortex (dlPFC), the frontal eye field, and key nodes of the speech network exclusively in slow-4, which suggests that abnormal spontaneous neural activity in PC is frequency dependent. Our findings also revealed stronger functional connectivity between the dlPFC and the posterodorsal stream of auditory processing, as well as weaker functional coupling between the posterior cingulate cortex (PCC) and key nodes of the default mode network (DMN), both of which were associated with cognitive impairment in PC patients. These results may reflect cross-modal plasticity and higher-order cognitive participation of the auditory cortex after partial hearing deprivation. Our findings indicate that frequency-specific analysis of ALFF can provide valuable insights into functional alterations in the auditory cortex and in non-auditory regions involved in the cognitive impairment associated with PC.
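A minimal sketch of band-limited ALFF as described: the mean square-root power of a voxel's BOLD time series restricted to slow-5 (0.01-0.027 Hz) or slow-4 (0.027-0.073 Hz). The repetition time and the synthetic series are assumptions for illustration, not the study's data.

```python
import numpy as np

def alff(ts, tr, band):
    """Mean square-root power of a BOLD time series within a frequency band (Hz)."""
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    amp = np.sqrt(np.abs(np.fft.rfft(ts)) ** 2 / ts.size)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return amp[sel].mean()

tr = 2.0                                  # repetition time in seconds (assumed)
rng = np.random.default_rng(4)
bold = rng.standard_normal(240)           # toy 8-minute voxel time series
print("slow-5 ALFF:", alff(bold, tr, (0.01, 0.027)))
print("slow-4 ALFF:", alff(bold, tr, (0.027, 0.073)))
```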


1996 ◽  
Vol 35 (04) ◽  
pp. 112-115 ◽  
Author(s):  
Gaetano Paludetti ◽  
Walter di Nardo ◽  
Maria Calcagni ◽  
Daniela di Giuda ◽  
Giovanni Almadori ◽  
...  

Summary Aim: In order to assess the relationship between auditory cortex perfusion and the frequency of acoustic stimuli, twenty normal-hearing subjects underwent cerebral SPET. Methods: In 10 subjects a multi-frequency stimulus (250-4000 Hz at 40 dB SL) was delivered, while the other 10 subjects were stimulated with a 500 Hz pure tone at 40 dB SL. The prestimulation SPET was subtracted from the poststimulation study, and auditory cortex activation was expressed as a percent increment. Results: The contralateral cortex was the most active area with both multi-frequency and mono-frequency stimuli. A clear demonstration of a tonotopic distribution of acoustic stimuli in the auditory cortex was achieved. In addition, the accessory role played by the homolateral acoustic areas was confirmed. Conclusion: The results of the present study support the hypothesis that brain SPET may be useful for obtaining reliable semiquantitative information on low-frequency hearing levels in profoundly deaf patients. This may be achieved by comparing the extent of the cortical areas activated by high-intensity multi-frequency stimuli.
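A minimal sketch of the subtraction analysis described: the prestimulation SPET image is subtracted from the poststimulation image voxel by voxel, and activation is expressed as a percent increment over the prestimulation count. The toy counts below are placeholders, not study data.

```python
import numpy as np

def percent_increment(post, pre):
    """Voxelwise activation as a percentage of the prestimulation signal."""
    return 100.0 * (post - pre) / pre

pre = np.array([100.0, 120.0, 95.0])      # toy prestimulation counts in an ROI
post = np.array([112.0, 133.0, 101.0])    # toy poststimulation counts
print("percent increments:", percent_increment(post, pre))
print("ROI mean increment:", percent_increment(post, pre).mean())
```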

