Corticothalamic Pathways in Auditory Processing: Recent Advances and Insights From Other Sensory Systems

2021 ◽  
Vol 15 ◽  
Author(s):  
Flora M. Antunes ◽  
Manuel S. Malmierca

The corticothalamic (CT) pathways emanate from either layer 5 (L5) or layer 6 (L6) of the neocortex and far outnumber the ascending, thalamocortical pathways. The CT pathways provide the anatomical foundations for an intricate, bidirectional communication between thalamus and cortex. They act as dynamic circuits of information transfer with the ability to modulate or even drive the response properties of target neurons at each synaptic node of the circuit. L6 CT feedback pathways enable the cortex to shape the nature of its driving inputs by directly modulating the sensory message arriving at the thalamus. L5 CT pathways can drive their postsynaptic neurons and initiate a transthalamic corticocortical circuit by which cortical areas communicate with each other. For this reason, L5 CT pathways place the thalamus at the heart of information transfer through the cortical hierarchy. Recent evidence goes even further, suggesting that the thalamus, via CT pathways, regulates functional connectivity within and across cortical regions and might be engaged in cognition, behavior, and perceptual inference. Because CT projections are descending pathways that enable reciprocal and context-dependent communication between thalamus and cortex, we venture that they are particularly interesting in the context of hierarchical perceptual inference formulations, such as those contemplated in predictive processing schemes, which so far rely heavily on cortical implementations. We discuss recent proposals suggesting that the thalamus, and particularly higher-order thalamus via transthalamic pathways, could coordinate and contextualize hierarchical inference in cortical hierarchies. We explore these ideas with a focus on the auditory system.
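The predictive-processing framing above lends itself to a worked toy example. The Python sketch below is purely illustrative and not drawn from the paper: it treats L6-like CT feedback as a gain term that scales the ascending prediction error in a single-node predictive-coding update. All function names, variables, and values are hypothetical.

```python
import numpy as np

def predictive_coding_step(sensory, prediction, ct_gain, lr=0.1):
    """One toy update: a CT-feedback-like gain (ct_gain) scales the ascending
    prediction error before it updates the cortical prediction.
    All quantities are illustrative, not taken from the paper."""
    error = sensory - prediction          # prediction error at the thalamic relay
    weighted_error = ct_gain * error      # L6-like feedback modulates its gain
    new_prediction = prediction + lr * weighted_error
    return new_prediction, weighted_error

# Hypothetical example: a constant input tracked under low vs. high CT gain
sensory_input = 1.0
for gain in (0.2, 1.0):
    pred = 0.0
    for _ in range(50):
        pred, err = predictive_coding_step(sensory_input, pred, gain)
    print(f"CT gain {gain}: prediction after 50 steps = {pred:.3f}")
```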

Author(s):  
Mattson Ogg ◽  
L. Robert Slevc

Music and language are uniquely human forms of communication. What neural structures facilitate these abilities? This chapter reviews music and language processing, following these acoustic signals as they ascend the auditory pathway from the brainstem to auditory cortex and on to more specialized cortical regions. Acoustic, neural, and cognitive mechanisms are identified where processing demands from both domains might overlap, with an eye to examples of experience-dependent cortical plasticity, which are taken as strong evidence for common neural substrates. Following an introduction describing how understanding musical processing informs linguistic or auditory processing more generally, findings regarding the major components (and parallels) of music and language research are reviewed: pitch perception, syntax and harmonic structural processing, semantics, timbre and speaker identification, attending in auditory scenes, and rhythm. Overall, the strongest current evidence for neural overlap (and cross-domain, experience-dependent plasticity) lies in the brainstem, followed by auditory cortex, with evidence and the potential for overlap becoming less apparent as the mechanisms involved in music and speech perception become more specialized and distinct at higher levels of processing.


2019 ◽  
Author(s):  
Jérémy Giroud ◽  
Agnès Trébuchon ◽  
Daniele Schön ◽  
Patrick Marquis ◽  
Catherine Liegeois-Chauvel ◽  
...  

Abstract Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3 and ∼15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6 and ∼40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
Author summary: Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions. Capitalizing on intracranial data from 96 patients with epilepsy, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is subtended by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
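As an illustration of the kind of spectral profiling described above, the following Python sketch estimates band-limited power in the theta (4-8 Hz) and low-gamma (25-50 Hz) ranges from a single synthetic electrode response using a Welch spectrum. It is a minimal stand-in for the study's actual intracranial pipeline; the sampling rate and the signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 1000  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

# Synthetic single-electrode response: theta (~6 Hz) plus low gamma (~40 Hz) on noise
t = np.arange(0, 2.0, 1 / fs)
response = (np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
            + 0.5 * rng.standard_normal(t.size))

# Welch power spectrum and band-limited power in the two hypothesized processing modes
freqs, psd = welch(response, fs=fs, nperseg=fs)
theta_power = psd[(freqs >= 4) & (freqs <= 8)].mean()
gamma_power = psd[(freqs >= 25) & (freqs <= 50)].mean()
print(f"theta (4-8 Hz) power: {theta_power:.3f}")
print(f"low gamma (25-50 Hz) power: {gamma_power:.3f}")
```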


PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
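To make the Bayesian causal inference computation concrete, the sketch below implements the standard single-trial model: a posterior probability of a common source followed by model averaging of fused and segregated location estimates, assuming Gaussian likelihoods and a zero-mean spatial prior. The parameter values are illustrative, not the fits reported in the paper.

```python
import numpy as np

def bayesian_causal_inference(x_a, x_v, sigma_a, sigma_v, sigma_p=10.0, p_common=0.5):
    """Toy Bayesian causal inference for one audiovisual trial.
    x_a, x_v: noisy auditory/visual location samples (degrees);
    sigma_a, sigma_v: sensory noise SDs (reliability = 1/variance);
    sigma_p: SD of a zero-mean spatial prior; p_common: prior P(common source).
    Parameter values are illustrative, not the paper's fits."""
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # Reliability-weighted fusion estimate (common-source hypothesis, C = 1)
    w = np.array([1 / var_a, 1 / var_v, 1 / var_p])
    s_fused = (w[0] * x_a + w[1] * x_v) / w.sum()

    # Segregated estimates (independent sources, C = 2)
    s_a = (x_a / var_a) / (1 / var_a + 1 / var_p)
    s_v = (x_v / var_v) / (1 / var_v + 1 / var_p)

    # Likelihood of the samples under each causal structure (zero-mean prior)
    denom = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = (np.exp(-0.5 * ((x_a - x_v)**2 * var_p + x_a**2 * var_v + x_v**2 * var_a) / denom)
               / (2 * np.pi * np.sqrt(denom)))
    like_c2 = ((np.exp(-0.5 * x_a**2 / (var_a + var_p)) / np.sqrt(2 * np.pi * (var_a + var_p)))
               * (np.exp(-0.5 * x_v**2 / (var_v + var_p)) / np.sqrt(2 * np.pi * (var_v + var_p))))

    post_common = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Model averaging: weighted mixture of fused and segregated estimates
    s_hat_a = post_common * s_fused + (1 - post_common) * s_a
    s_hat_v = post_common * s_fused + (1 - post_common) * s_v
    return post_common, s_hat_a, s_hat_v

# Hypothetical trial: signals 10 degrees apart, vision more reliable than audition
print(bayesian_causal_inference(x_a=5.0, x_v=-5.0, sigma_a=6.0, sigma_v=2.0))
```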


2019 ◽  
Vol 116 (25) ◽  
pp. 12506-12515 ◽  
Author(s):  
Mohammad Bagher Khamechian ◽  
Vladislav Kozyrev ◽  
Stefan Treue ◽  
Moein Esghaei ◽  
Mohammad Reza Daliri

Efficient transfer of sensory information from primate visual cortical areas to higher (motor or associative) areas is crucial for transforming sensory input into behavioral actions. Dynamically increasing the level of coordination between single neurons has been suggested as an important contributor to this efficiency. We propose that differences in the functional coordination of different visual pathways might be used to unambiguously identify the source of input to the higher areas, ensuring a proper routing of the information flow. Here we determined the level of coordination between neurons in area MT of macaque visual cortex in a visual attention task via the strength of synchronization between the neurons' spike timing and the phase of oscillatory activity in local field potentials. In contrast to reports on the ventral visual pathway, we found that spike synchrony only in the high-gamma range (180 to 220 Hz), rather than in the gamma range (40 to 70 Hz) reported previously, predicted the animal's reaction speed. This supports a mechanistic role of the phase of high-gamma oscillatory activity in dynamically modulating the efficiency of neuronal information transfer. In addition, for inputs to higher cortical areas converging from the dorsal and ventral pathways, the distinct frequency bands of these inputs can be leveraged to preserve the identity of the input source. In this way, source-specific oscillatory activity in primate cortex can serve to establish and maintain "functionally labeled lines" for dynamically adjusting cortical information transfer and multiplexing converging sensory signals.
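A common way to quantify spike-LFP synchronization of the kind described above is a phase-locking value: band-pass filter the LFP, take the Hilbert phase, and measure the resultant vector length of the phases at spike times. The Python sketch below does this on synthetic data; it is an assumed, simplified stand-in for the paper's analysis, with the sampling rate, filter settings, and data all hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_lfp_plv(lfp, spike_times, fs, band):
    """Phase-locking value between spike times and an LFP band.
    A minimal sketch of the general approach (band-pass filter, Hilbert phase,
    resultant vector length across spike phases); not the paper's exact pipeline."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))
    spike_idx = np.round(np.asarray(spike_times) * fs).astype(int)
    spike_phases = phase[spike_idx]
    return np.abs(np.mean(np.exp(1j * spike_phases)))

# Hypothetical data: 10 s of noisy LFP with an embedded 200 Hz rhythm and spikes
fs = 1000
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)
lfp = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 200 * t)
spike_times = t[np.sin(2 * np.pi * 200 * t) > 0.95][::20]  # spikes near rhythm peaks

print("high-gamma PLV:", spike_lfp_plv(lfp, spike_times, fs, (180, 220)))
print("gamma PLV:     ", spike_lfp_plv(lfp, spike_times, fs, (40, 70)))
```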


2019 ◽  
Vol 30 (3) ◽  
pp. 942-951 ◽  
Author(s):  
Lanfang Liu ◽  
Yuxuan Zhang ◽  
Qi Zhou ◽  
Douglas D Garrett ◽  
Chunming Lu ◽  
...  

Abstract Whether auditory processing of speech relies on reference to the articulatory motor information of the speaker remains elusive. Here, we addressed this issue under a two-brain framework. Functional magnetic resonance imaging was applied to record the brain activities of speakers while they told real-life stories and later of listeners while they listened to the audio recordings of these stories. Based on between-brain seed-to-voxel correlation analyses, we revealed that neural dynamics in listeners' auditory temporal cortex are temporally coupled with the dynamics in the speaker's larynx/phonation area. Moreover, the coupling response in listeners' left auditory temporal cortex follows the hierarchical organization for speech processing, with response lags in A1+, STG/STS, and MTG increasing linearly. Further, listeners showing greater coupling responses understand the speech better. When comprehension fails, such interbrain auditory-articulatory coupling largely vanishes. These findings suggest that a listener's auditory system and a speaker's articulatory system are inherently aligned during naturalistic verbal interaction, and that such alignment is associated with high-level information transfer from the speaker to the listener. Our study provides reliable evidence that reference to the articulatory motor information of the speaker facilitates speech comprehension in naturalistic settings.
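A between-brain seed-to-voxel coupling analysis can be approximated as a lagged correlation between the speaker's seed time course and a listener time course. The sketch below shows this on simulated signals; it assumes a simple shift-and-correlate scheme and omits the preprocessing and voxel-wise statistics of the actual study.

```python
import numpy as np

def lagged_coupling(speaker_seed, listener_voxel, max_lag):
    """Speaker-listener coupling as correlation at each lag (in TRs).
    Positive lags mean listener activity follows the speaker's.
    A simplified sketch of a between-brain seed-to-voxel analysis."""
    lags = range(0, max_lag + 1)
    r = []
    for lag in lags:
        if lag == 0:
            r.append(np.corrcoef(speaker_seed, listener_voxel)[0, 1])
        else:
            r.append(np.corrcoef(speaker_seed[:-lag], listener_voxel[lag:])[0, 1])
    return np.array(list(lags)), np.array(r)

# Hypothetical time series: the listener "voxel" echoes the speaker seed 3 TRs later
rng = np.random.default_rng(2)
speaker = rng.standard_normal(300)
listener = np.roll(speaker, 3) + 0.5 * rng.standard_normal(300)

lags, r = lagged_coupling(speaker, listener, max_lag=8)
print("best lag (TRs):", lags[np.argmax(r)], "r =", r.max().round(2))
```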


2019 ◽  
Vol 19 (2) ◽  
pp. 78-87 ◽  
Author(s):  
Martin Kronenbuerger ◽  
Jun Hua ◽  
Jee Y.A. Bang ◽  
Kia E. Ultz ◽  
Xinyuan Miao ◽  
...  

Background: Huntington’s disease (HD) is a progressive neurodegenerative disorder. The striatum is one of the first brain regions that show detectable atrophy in HD. Previous studies using functional magnetic resonance imaging (fMRI) at 3 tesla (3 T) revealed reduced functional connectivity between striatum and motor cortex in the prodromal period of HD. Neuroanatomical and neurophysiological studies have suggested segregated corticostriatal pathways with distinct loops involving different cortical regions, which may be investigated using fMRI at an ultra-high field (7 T) with enhanced sensitivity compared to lower fields. Objectives: We performed fMRI at 7 T to assess functional connectivity between the striatum and several cortical areas, including the motor and prefrontal cortex, in order to better understand brain changes in the striatum-cortical pathways. Methods: 13 manifest subjects (age 51 ± 13 years, cytosine-adenine-guanine [CAG] repeat 45 ± 5, Unified Huntington’s Disease Rating Scale [UHDRS] motor score 32 ± 17), 8 subjects in the close-to-onset premanifest period (age 38 ± 10 years, CAG repeat 44 ± 2, UHDRS motor score 8 ± 2), 11 subjects in the far-from-onset premanifest period (age 38 ± 11 years, CAG repeat 42 ± 2, UHDRS motor score 1 ± 2), and 16 healthy controls (age 44 ± 15 years) were studied. The functional connectivity between the striatum and several cortical areas was measured by resting-state fMRI at 7 T and analyzed in all participants. Results: Compared to controls, functional connectivity between striatum and premotor area, supplementary motor area, inferior frontal as well as middle frontal regions was altered in HD (all p values <0.001). Specifically, decreased striatum-motor connectivity but increased striatum-prefrontal connectivity were found in premanifest HD subjects. Altered functional connectivity correlated consistently with genetic burden, but not with clinical scores. Conclusions: Differential changes in functional connectivity of striatum-prefrontal and striatum-motor circuits can be found in early and premanifest HD. This may imply a compensatory mechanism, where additional cortical regions are recruited to subserve functions that have been impaired due to HD pathology. Our results suggest the potential value of functional connectivity as a marker for future clinical trials in HD.
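Seed-based resting-state functional connectivity of the sort reported here reduces, at its core, to correlating ROI time series. The following minimal sketch computes a pairwise correlation matrix for hypothetical striatal and cortical ROI signals; it leaves out the preprocessing, nuisance regression, and group-level statistics used in the study, and the ROI names and data are assumptions.

```python
import numpy as np

def roi_connectivity(timeseries):
    """Pairwise resting-state functional connectivity between ROI time series.
    timeseries: dict mapping ROI name -> 1D array of equal length.
    A minimal sketch (Pearson correlation of mean ROI signals)."""
    names = list(timeseries)
    data = np.vstack([timeseries[n] for n in names])
    return names, np.corrcoef(data)

# Hypothetical ROI signals (e.g., striatum, premotor, prefrontal)
rng = np.random.default_rng(3)
shared = rng.standard_normal(400)
rois = {
    "striatum": shared + 0.5 * rng.standard_normal(400),
    "premotor": shared + 0.5 * rng.standard_normal(400),
    "prefrontal": rng.standard_normal(400),
}
names, fc = roi_connectivity(rois)
print(names)
print(np.round(fc, 2))
```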


2003 ◽  
Vol 13 (10) ◽  
pp. 2845-2856 ◽  
Author(s):  
WALTER J. FREEMAN ◽  
GYÖNGYI GAÁL ◽  
REBECKA JORSTEN

Information transfer and integration of oscillatory activity among functionally distinct areas of cerebral cortex require some degree of phase synchrony of the trains of action potentials that carry the information prior to integration. However, propagation delays are obligatory. Delays vary with the lengths and conduction velocities of the axons carrying the information, causing phase dispersion. In order to determine how synchrony is achieved despite dispersion, we recorded EEG signals from multiple electrode arrays on five cortical areas in cats and rabbits that had been trained to discriminate visual or auditory conditioned stimuli. Analysis by time-lagged correlation, multiple correlation, and PCA showed that maximal correlation was at zero lag and averaged 0.7, indicating that 50% of the power in the gamma range among the five areas was at zero lag irrespective of phase or frequency. There were no stimulus-related episodes of transiently increased phase locking among the areas, nor EEG "bursts" of transiently increased amplitude above the sustained level of synchrony. Three operations were identified to account for the sustained correlation: cortices broadcast their outputs over divergent–convergent axonal pathways that performed spatial ensemble averaging; synaptic interactions between excitatory and inhibitory neurons in cortex operated as band-pass filters for gamma; and signal coarse-graining by pulse frequency modulation at trigger zones enhanced correlation. The conclusion is that these three operations enable continuous linkage of multiple cortical areas by activity in the gamma range, providing the basis for coordinated cortical output to other parts of the brain despite varying axonal conduction delays, somewhat like the backplane of a mainframe computer.
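The time-lagged correlation analysis described above can be sketched as follows: correlate two signals over a range of lags and find the lag of maximal correlation, which should sit near zero when the areas share a common gamma-band drive. The code below uses synthetic signals and assumed parameters, not the recorded EEG.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def peak_lag_correlation(x, y, fs, max_lag_s=0.05):
    """Cross-correlation between two signals; returns the lag (s) at which the
    correlation peaks and the peak value. A simplified sketch of the
    time-lagged correlation analysis, on synthetic rather than recorded data."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag < 0:
            r.append(np.corrcoef(x[:lag], y[-lag:])[0, 1])
        elif lag > 0:
            r.append(np.corrcoef(x[lag:], y[:-lag])[0, 1])
        else:
            r.append(np.corrcoef(x, y)[0, 1])
    r = np.array(r)
    return lags[np.argmax(r)] / fs, r.max()

# Two "areas" sharing a common gamma-band drive plus independent noise
fs = 500
rng = np.random.default_rng(4)
t = np.arange(0, 5, 1 / fs)
b, a = butter(4, [30 / (fs / 2), 80 / (fs / 2)], btype="band")
drive = 2 * filtfilt(b, a, rng.standard_normal(t.size))   # shared gamma-band signal
area1 = drive + rng.standard_normal(t.size)
area2 = drive + rng.standard_normal(t.size)
print(peak_lag_correlation(area1, area2, fs))  # expected peak near zero lag
```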


2020 ◽  
Author(s):  
Vesa Putkinen ◽  
Sanaz Nazari-Farsani ◽  
Kerttu Seppälä ◽  
Tomi Karjalainen ◽  
Lihua Sun ◽  
...  

Abstract Music can induce strong subjective experience of emotions, but it is debated whether these responses engage the same neural circuits as emotions elicited by biologically significant events. We examined the functional neural basis of music-induced emotions in a large sample (n = 102) of subjects who listened to emotionally engaging (happy, sad, fearful, and tender) pieces of instrumental music while their hemodynamic brain activity was measured with functional magnetic resonance imaging (fMRI). Ratings of the four categorical emotions and liking were used to predict hemodynamic responses in general linear model (GLM) analysis of the fMRI data. Multivariate pattern analysis (MVPA) was used to reveal discrete neural signatures of the four categories of music-induced emotions. To map neural circuits governing non-musical emotions, the subjects were scanned while viewing short emotionally evocative film clips. The GLM revealed that most emotions were associated with activity in the auditory, somatosensory, and motor cortices, cingulate gyrus, insula, and precuneus. Fear and liking also engaged the amygdala. In contrast, the film clips strongly activated limbic and cortical regions implicated in emotional processing. MVPA revealed that activity in the auditory cortex and primary motor cortices reliably discriminated the emotion categories. Our results indicate that different music-induced basic emotions have distinct representations in regions supporting auditory processing, motor control, and interoception but do not strongly rely on limbic and medial prefrontal regions critical for emotions with survival value.
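The MVPA result can be illustrated with a simple decoding sketch: train a cross-validated classifier to discriminate the four emotion categories from multivoxel patterns. The example below runs on simulated data with an assumed ROI size and trial count; it is not the study's analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical MVPA sketch: decode four emotion categories from voxel patterns
# within an auditory-cortex ROI. Data are simulated; the study used fMRI patterns
# and its own cross-validation scheme.
rng = np.random.default_rng(5)
n_trials_per_class, n_voxels = 40, 200
labels = np.repeat(np.arange(4), n_trials_per_class)      # happy / sad / fearful / tender
class_means = rng.standard_normal((4, n_voxels)) * 0.5    # class-specific patterns
patterns = class_means[labels] + rng.standard_normal((labels.size, n_voxels))

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```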


Author(s):  
Erika Atucha ◽  
Celia Fuerst ◽  
Magdalena Sauvage

Studies on patient H.M. inspired many experiments on the role of the hippocampus and the neocortex in retrieving recent and remote memories. Cortical regions become increasingly engaged for memory retrieval over time, while conflicting results emerge regarding the engagement of the hippocampus, suggested to be ongoing by some or restricted to the retrieval of recent memories by others. In Lux et al. (2016), we tested whether this discrepancy might stem from failing to dissociate CA1's from CA3's contribution to memory retrieval over time, as CA3 is known to support computations more sensitive to time than CA1. We also reported that parahippocampal cortical areas with close anatomical connections to the hippocampus were increasingly engaged over time (Lux et al., eLife, 2016). This study used a fear conditioning paradigm, as emotionally arousing experiences are better remembered than memories devoid of fear content. Here we address whether the differential contribution of brain regions is a general mechanism that also subserves memory retrieval devoid of fear content. We developed an object-in-place task to investigate remote memory retrieval at delays of up to 6 months and assessed the contribution of CA1, CA3, parahippocampal, and prefrontal cortical areas to the retrieval of recent versus very remote memories using a high-resolution molecular imaging technique based on the detection of the immediate-early gene (IEG) Arc RNA. Preliminary results show that the disengagement of CA3 and the persistent engagement of CA1 seem to be a general mechanism supporting the retrieval of remote memories for events.

