Functional Topography of Auditory Areas Derived From the Combination of Electrophysiological Recordings and Cortical Electrical Stimulation

2021 ◽  
Vol 15 ◽  
Author(s):  
Agnès Trébuchon ◽  
F.-Xavier Alario ◽  
Catherine Liégeois-Chauvel

The posterior part of the superior temporal gyrus (STG) has long been known to be a crucial hub for auditory and language processing, at the crossroads of the functionally defined ventral and dorsal pathways. Anatomical studies have shown that this “auditory cortex” is composed of several cytoarchitectonic areas whose limits do not consistently match macro-anatomic landmarks like gyral and sulcal borders. The only method to record and accurately distinguish neuronal activity from the different auditory sub-fields of primary auditory cortex, located in the tip of Heschl's gyrus and deeply buried in the Sylvian fissure, is to use stereotaxically implanted depth electrodes (stereo-EEG) for the pre-surgical evaluation of patients with epilepsy. In this perspective, we focus on how anatomo-functional delineation in Heschl's gyrus (HG), the Planum Temporale (PT), the posterior part of the STG anterior to HG, the posterior superior temporal sulcus (STS), and the region at the parietal-temporal boundary commonly labeled “Spt” can be achieved using data from electrical cortical stimulation combined with electrophysiological recordings during listening to pure tones and syllables. We show the differences in functional roles between the primary and non-primary auditory areas, in the left and the right hemispheres. We discuss how these findings help in understanding the auditory semiology of certain epileptic seizures and, more generally, the neural substrate of hemispheric specialization for language.
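The time-frequency characterization of responses to pure tones mentioned here can be illustrated with a sliding-window FFT over a recorded field potential. This is a generic, hypothetical sketch in NumPy, not the authors' pipeline; the sampling rate, window length, and high-gamma band limits below are arbitrary illustration choices.

```python
import numpy as np

def band_power_timecourse(x, fs, win_s=0.1, band=(60.0, 140.0)):
    """Sliding-window FFT power in a frequency band (e.g. high-gamma).

    Returns window-center times and mean band power per window; a crude
    stand-in for the time-frequency maps described in the text.
    """
    n_win = int(win_s * fs)
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    times, powers = [], []
    for s in range(0, len(x) - n_win + 1, n_win):
        seg = x[s:s + n_win]
        spec = np.abs(np.fft.rfft(seg * np.hanning(n_win))) ** 2
        times.append((s + n_win / 2) / fs)
        powers.append(spec[in_band].mean())
    return np.array(times), np.array(powers)

# Synthetic "LFP": background noise plus an 80 Hz burst between 0.4 and 0.6 s,
# mimicking an induced high-gamma response after tone onset.
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
lfp = 0.1 * rng.standard_normal(t.size)
burst = (t >= 0.4) & (t < 0.6)
lfp[burst] += np.sin(2 * np.pi * 80.0 * t[burst])

times, powers = band_power_timecourse(lfp, fs)
peak_time = times[np.argmax(powers)]
```

The band power time course peaks in the windows covering the burst, which is the kind of evidence used to localize auditory responsiveness to a given contact.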

2021 ◽  
Author(s):  
Agnès Trébuchon ◽  
F.-Xavier Alario ◽  
Catherine Liégeois-Chauvel

The posterior part of the superior temporal gyrus (STG) has long been known to be a crucial hub for auditory and language processing, at the crossroads of the functionally defined ventral and dorsal pathways. Anatomical studies have shown that this “auditory cortex” is composed of several cytoarchitectonic areas whose limits do not consistently match macro-anatomic landmarks like gyral and sulcal borders. The functional characterization of these areas derived from brain imaging studies has some limitations, even when high-field functional magnetic resonance imaging (fMRI) is used, because of the variability observed in the extension of these areas between hemispheres and individuals. In patients implanted with depth electrodes, in vivo recordings and direct electrical stimulations of the different sub-parts of the posterior STG allow the delineation of different auditory sub-fields in Heschl's gyrus (HG), the Planum Temporale (PT), the posterior part of the superior temporal gyrus anterior to HG, the posterior superior temporal sulcus (STS), and the region at the parietal-temporal boundary commonly labelled “Spt”. We describe how this delineation can be achieved using data from electrical cortical stimulation combined with local field potentials and time-frequency analysis recorded as responses to pure tones and syllables. We show the differences in functional roles between the primary and non-primary auditory areas, in the left and the right hemispheres. We discuss how these findings help in understanding the auditory semiology of certain epileptic seizures and, more generally, the neural substrate of hemispheric specialization for language.


Author(s):  
Vidhusha Srinivasan ◽  
N. Udayakumar ◽  
Kavitha Anandan

Background: The autism spectrum encompasses High Functioning Autism (HFA) and Low Functioning Autism (LFA). Brain mapping studies have revealed that individuals with autism show overlapping brain and behavioural characteristics. Generally, high-functioning individuals are known to exhibit higher intelligence and better language processing abilities. However, the specific mechanisms associated with their functional capabilities are still under research. Objective: This work addresses the overlapping phenomenon present in the autism spectrum through functional connectivity patterns along with brain connectivity parameters, and distinguishes the classes using deep belief networks. Methods: Task-based functional Magnetic Resonance Images (fMRI) of both high- and low-functioning autistic groups were acquired from the ABIDE database, for 58 low-functioning and 43 high-functioning individuals engaged in a defined language processing task. The language processing regions of the brain, along with the Default Mode Network (DMN), were considered for the analysis. Functional connectivity maps were plotted through graph-theoretic procedures. Brain connectivity parameters such as Granger Causality (GC) and Phase Slope Index (PSI) were calculated for the individual groups. These parameters were fed to Deep Belief Networks (DBN) to classify the subjects under consideration as either LFA or HFA. Results: Results showed increased functional connectivity in high-functioning subjects. It was found that the additional interaction of the Primary Auditory Cortex, lying in the temporal lobe, with other regions of interest complemented their enhanced connectivity. Results were validated using the DBN, which yielded classification accuracies of 85.85% for the high-functioning and 81.71% for the low-functioning group.
Conclusion: Since autism is known to involve enhanced but imbalanced components of intelligence, the advantage of the high-functioning group in language processing, and the region responsible for its enhanced connectivity, have been identified. This work, which suggests a role for the Primary Auditory Cortex in the dominance of language processing in high-functioning young adults, is therefore significant for discriminating between groups on the autism spectrum.
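The Phase Slope Index used as a connectivity parameter above estimates the direction of an interaction from the slope of the cross-spectral phase across frequencies. A minimal NumPy sketch follows (trial-averaged cross-spectra; the sampling rate, band limit, and toy data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def phase_slope_index(x_trials, y_trials, fs, fmax=45.0):
    """Phase Slope Index between two signals, estimated over trials.

    Positive values suggest x leads (drives) y; by construction the
    measure is antisymmetric: psi(x, y) == -psi(y, x).
    """
    n = x_trials.shape[1]
    X = np.fft.rfft(x_trials, axis=1)
    Y = np.fft.rfft(y_trials, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)          # cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    C = Sxy / np.sqrt(Sxx * Syy)                 # complex coherency
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    Cb = C[freqs <= fmax]
    # Imaginary part of the product of coherencies at neighboring bins.
    return float(np.imag(np.sum(np.conj(Cb[:-1]) * Cb[1:])))

# Toy example: y is a delayed, noisy copy of x, so x should lead y.
rng = np.random.default_rng(1)
fs, n_trials, n_samp, lag = 200.0, 50, 256, 5
x = rng.standard_normal((n_trials, n_samp + lag))
y = x[:, :-lag] + 0.5 * rng.standard_normal((n_trials, n_samp))
x = x[:, lag:]

psi_xy = phase_slope_index(x, y, fs)
psi_yx = phase_slope_index(y, x, fs)
```

With the delayed copy, `psi_xy` comes out positive (x drives y) and `psi_yx` is its exact negative, which is the property that makes the PSI usable as a directed-connectivity feature.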


2006 ◽  
Vol 18 (11) ◽  
pp. 1789-1798 ◽  
Author(s):  
Angela Bartolo ◽  
Francesca Benuzzi ◽  
Luca Nocetti ◽  
Patrizia Baraldi ◽  
Paolo Nichelli

Humor is a unique ability in human beings. Suls [A two-stage model for the appreciation of jokes and cartoons. In J. H. Goldstein & P. E. McGhee (Eds.), The psychology of humor: Theoretical perspectives and empirical issues. New York: Academic Press, 1972, pp. 81–100] proposed a two-stage model of humor: detection and resolution of incongruity. Incongruity is generated when a prediction is not confirmed in the final part of a story. To comprehend humor, it is necessary to revisit the story, transforming an incongruous situation into a funny, congruous one. Patient and neuroimaging studies carried out so far have led to differing outcomes. In particular, patient studies found that patients with right-hemisphere lesions have difficulties in humor comprehension, whereas neuroimaging studies suggested a major involvement of the left hemisphere in both humor detection and comprehension. To prevent activation of the left hemisphere due to language processing, we devised a nonverbal task comprising cartoon pairs. Our findings demonstrate activation of both the left and the right hemispheres when comparing funny versus nonfunny cartoons. In particular, we found activation of the right inferior frontal gyrus (BA 47), the left superior temporal gyrus (BA 38), the left middle temporal gyrus (BA 21), and the left cerebellum. These areas were also activated in a nonverbal task exploring the attribution of intention [Brunet, E., Sarfati, Y., Hardy-Baylé, M. C., & Decety, J. A PET investigation of the attribution of intentions with a nonverbal task. Neuroimage, 11, 157–166, 2000]. We hypothesize that the resolution of incongruity might occur through a process of intention attribution. We also asked subjects to rate the funniness of each cartoon pair. A parametric analysis showed that the left amygdala was activated in relation to subjective amusement. We hypothesize that the amygdala plays a key role in giving humor an emotional dimension.


2002 ◽  
Vol 88 (1) ◽  
pp. 540-543 ◽  
Author(s):  
John J. Foxe ◽  
Glenn R. Wylie ◽  
Antigona Martinez ◽  
Charles E. Schroeder ◽  
Daniel C. Javitt ◽  
...  

Using high-field (3 Tesla) functional magnetic resonance imaging (fMRI), we demonstrate that auditory and somatosensory inputs converge in a subregion of human auditory cortex along the superior temporal gyrus. Further, simultaneous stimulation in both sensory modalities resulted in activity exceeding that predicted by summing the responses to the unisensory inputs, thereby showing multisensory integration in this convergence region. Recently, intracranial recordings in macaque monkeys have shown similar auditory-somatosensory convergence in a subregion of auditory cortex directly caudomedial to primary auditory cortex (area CM). The multisensory region identified in the present investigation may be the human homologue of CM. Our finding of auditory-somatosensory convergence in early auditory cortices contributes to mounting evidence for multisensory integration early in the cortical processing hierarchy, in brain regions that were previously assumed to be unisensory.
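The integration criterion used in this study, bimodal activity exceeding the sum of the unisensory responses, can be written down directly. A hypothetical sketch with toy percent-signal-change values (illustrative numbers, not data from the study):

```python
import numpy as np

def superadditivity_index(resp_bimodal, resp_a, resp_b):
    """Fractional amount by which the mean bimodal response exceeds the
    sum of the two mean unisensory responses. Values > 0 indicate
    superadditive multisensory integration under this criterion.
    """
    unisensory_sum = np.mean(resp_a) + np.mean(resp_b)
    return (np.mean(resp_bimodal) - unisensory_sum) / unisensory_sum

# Toy percent-signal-change values for one region of interest.
auditory      = [0.30, 0.34, 0.28, 0.33]
somatosensory = [0.20, 0.22, 0.18, 0.21]
bimodal       = [0.70, 0.74, 0.68, 0.72]

idx = superadditivity_index(bimodal, auditory, somatosensory)
```

Here the bimodal mean (0.71) exceeds the unisensory sum (0.515), giving a positive index, the signature of superadditive integration described in the abstract.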


1993 ◽  
Vol 5 (2) ◽  
pp. 235-253 ◽  
Author(s):  
Helen J. Neville ◽  
Sharon A. Coffey ◽  
Phillip J. Holcomb ◽  
Paula Tallal

Clinical, behavioral, and neurophysiological studies of developmental language impairment (LI), including reading disability (RD), have variously emphasized different factors that may contribute to this disorder. These include abnormal sensory processing within both the auditory and visual modalities and deficits in linguistic skills and in general cognitive abilities. In this study we employed the event-related brain potential (ERP) technique in a series of studies to probe and compare different aspects of functioning within the same sample of LI/RD children. Within the group, multiple aspects of processing were affected, but heterogeneously across the sample. ERP components linked to processing within the superior temporal gyrus were abnormal in a subset of children that displayed abnormal performance on an auditory temporal discrimination task. An early component of the visual ERP was reduced in amplitude in the group as a whole. The relevance of this effect to current conceptions of substreams within the visual system is discussed. During a sentence processing task, abnormal hemispheric specialization was observed in a subset of children who scored poorly on tests of grammar. By contrast, the group as a whole displayed abnormally large responses to words requiring contextual integration. The results imply that multiple factors can contribute to the profile of language impairment and that different and specific deficits occur heterogeneously across populations of LI/RD children.


2009 ◽  
Vol 24 (S1) ◽  
pp. 1-1
Author(s):  
P. Ferreira ◽  
S. Simões ◽  
J. Cerqueira ◽  
J. Soares-Fernandes ◽  
Á. Machado

Introduction: Although probably underreported, musical hallucinosis is very rare and usually bilateral. It refers to complex auditory hallucinations, for which the patient has full insight, and includes melodies, tunes, rhythms and timbres. Clinical case: A 71-year-old woman was seen for a history of hearing music in the right ear. She had mild hypertension and atrial fibrillation, and was chronically medicated with aspirin, bisoprolol and hydrochlorothiazide. Three months previously she had started hearing popular Portuguese folk songs in the right ear. She could identify the lyrics and sing the songs she heard. Weeks later fado and classical music were added to the repertoire, and later on she started hearing less well-formed sounds like “dlam... dlam” or “uhh... uhh”. There were no other auditory or visual hallucinations. She was seen by an otorhinolaryngologist, and an audiogram showed bilateral, right-predominant, pre-cochlear deafness with normal brainstem auditory evoked potentials. An MRI showed small deep subcortical lacunar lesions. The EEG was normal. A PET scan showed left temporal hypometabolism. On benzodiazepines she had slight improvement. Conclusion: Musical hallucinosis has been found mainly in deaf patients, and a mechanism similar to that of Charles Bonnet syndrome has been proposed: sensory deprivation of the primary auditory cortex would “release” the secondary auditory cortex to produce complex auditory hallucinations with full insight. In our patient we were able to demonstrate the integrity of the brainstem pathway, supporting a direct link between diminished right-ear sound transmission and diminished left temporal lobe activation as ascertained by the PET scan.


1987 ◽  
Vol 57 (6) ◽  
pp. 1746-1766 ◽  
Author(s):  
G. L. Kavanagh ◽  
J. B. Kelly

Ferrets were tested in a semicircular apparatus to determine the effects of auditory cortical lesions on their ability to localize sounds in space. They were trained to initiate trials while facing forward in the apparatus, and sounds were presented from one of two loudspeakers located in the horizontal plane. Minimum audible angles were obtained for three different positions, viz., the left hemifield, with loudspeakers centered around -60 degrees azimuth; the right hemifield, with loudspeakers centered around +60 degrees azimuth; and the midline with loudspeakers centered around 0 degrees azimuth. Animals with large bilateral lesions had severe impairments in localizing a single click in the midline test. Following complete destruction of the auditory cortex performance was only marginally above the level expected by chance even at large angles of speaker separation. Severe impairments were also found in localization of single clicks in both left and right lateral fields. In contrast, bilateral lesions restricted to the primary auditory cortex resulted in minimal impairments in midline localization. The same lesions, however, produced severe impairments in localization of single clicks in both left and right lateral fields. Large unilateral lesions that destroyed auditory cortex in one hemisphere resulted in an inability to localize single clicks in the contralateral hemifield. In contrast, no impairments were found in the midline test or in the ipsilateral hemifield. Unilateral lesions of the primary auditory cortex resulted in severe contralateral field deficits equivalent to those seen following complete unilateral destruction of auditory cortex. No deficits were seen in either the midline or the ipsilateral tests.
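A minimum audible angle of the kind measured here is typically read off the psychometric function as the speaker separation at which performance reaches a threshold criterion. A generic interpolation sketch (the 75% criterion and the psychometric values below are hypothetical illustration choices, not the ferret data):

```python
import numpy as np

def minimum_audible_angle(separations_deg, prop_correct, criterion=0.75):
    """Threshold separation at the criterion performance level of a
    two-alternative localization task, by linear interpolation on the
    psychometric function. Assumes prop_correct rises with separation.
    """
    return float(np.interp(criterion, prop_correct, separations_deg))

# Hypothetical psychometric data: proportion correct vs. speaker
# separation, rising from near chance (0.5) toward ceiling.
seps = np.array([2.0, 4.0, 8.0, 16.0, 32.0])   # degrees
pc   = np.array([0.52, 0.60, 0.75, 0.90, 0.98])

maa = minimum_audible_angle(seps, pc)
```

With these numbers the function crosses the 75%-correct criterion at an 8-degree separation; lesioned animals whose performance stays near chance at all separations, as described above, yield no measurable threshold at all.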


2019 ◽  
Author(s):  
Agnès Job ◽  
Anne Kavounoudias ◽  
Chloé Jaroszynski ◽  
Assia Jaillard ◽  
Chantal Delon-Martin

Tinnitus mechanisms remain poorly understood. Our previous functional MRI (fMRI) studies demonstrated abnormal hyperactivity in the right parietal operculum 3 (OP3) in acoustic trauma tinnitus and during provoked phantom sound perceptions without hearing loss, which led us to propose a new model of tinnitus. This model is not directly linked with hearing loss and primary auditory cortex abnormalities, but with a proprioceptive disturbance related to the middle-ear muscles. In the present study, a seed-based resting-state functional MRI method was used to explore the potentially abnormal connectivity of this opercular region between an acoustic trauma tinnitus group presenting slight to mild tinnitus and a control group. Primary auditory cortex seeds were also explored because they are thought to be directly involved in tinnitus in most current models. In such a model, hearing loss and tinnitus handicap are confounding factors and were therefore regressed out in our analysis. Between-group comparisons showed significant specific connectivity between the right OP3 seeds and the potential human homologue of the premotor ear-eye field (H-PEEF) bilaterally, and the inferior parietal lobule (IPL), in the tinnitus group. Our findings suggest the existence of a simultaneous premotor ear-eye disturbance in tinnitus that could lift the veil on the unexplained subclinical abnormalities found in oculomotor tests of tinnitus patients with normal vestibular responses. The present work confirms the involvement of the OP3 subregion in acoustic trauma tinnitus and provides some new clues to explain its putative mechanisms.
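Seed-based resting-state analysis with nuisance regression, as described here, amounts to correlating each voxel's time series with the seed after projecting out confound regressors. A NumPy sketch with synthetic data (a hypothetical illustration; actual analyses use dedicated neuroimaging packages and richer confound models):

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts, confounds):
    """Pearson correlation between seed and voxel time series after
    regressing the confound time series (plus an intercept) out of both.
    """
    n = seed_ts.size
    design = np.column_stack([np.ones(n), confounds])

    def residualize(y):
        # Residuals after least-squares projection onto the confounds.
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        return y - design @ beta

    s, v = residualize(seed_ts), residualize(voxel_ts)
    return float(np.corrcoef(s, v)[0, 1])

# Synthetic example: the voxel shares the seed's signal but is heavily
# contaminated by a confound (e.g. a hearing-loss covariate or motion).
rng = np.random.default_rng(2)
n = 200
seed = rng.standard_normal(n)
confound = rng.standard_normal(n)
voxel = seed + 5.0 * confound + 0.2 * rng.standard_normal(n)

r_raw = float(np.corrcoef(seed, voxel)[0, 1])
r_clean = seed_connectivity(seed, voxel, confound)
```

The raw correlation is diluted by the confound, while the residualized correlation recovers the shared signal; this is the sense in which hearing loss and tinnitus handicap were "regressed" in the analysis.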


2018 ◽  
Author(s):  
Anna Dora Manca ◽  
Francesco Di Russo ◽  
Francesco Sigona ◽  
Mirko Grimaldi

How the brain encodes the speech acoustic signal into phonological representations (distinctive features) is a fundamental question for the neurobiology of language. Whether this process is characterized by tonotopic maps in primary or secondary auditory areas, with bilateral or leftward activity, remains a long-standing challenge. Magnetoencephalographic and ECoG studies have previously failed to show hierarchical and asymmetric signatures of speech processing. We employed high-density electroencephalography to map the Salento Italian vowel system onto cortical sources using the N1 auditory evoked component. We found evidence that the N1 is characterized by hierarchical and asymmetric indexes structuring vowel representation. We identified these with two N1 subcomponents: the typical N1 (N1a), peaking at 125-135 ms and localized in the primary auditory cortex bilaterally with a tangential distribution, and a late phase of the N1 (N1b), peaking at 145-155 ms and localized in the left superior temporal gyrus with a radial distribution. Notably, we showed that the processing of distinctive feature representations begins early in the primary auditory cortex and carries on in the superior temporal gyrus along lateral-medial, anterior-posterior and inferior-superior gradients. It is the dynamic interface of both auditory cortices, and the interaction effects between different distinctive features, that generate the categorical representations of vowels.
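Peak latency and amplitude of components such as the N1a and N1b are commonly quantified as the extremum within a predefined latency window. A generic sketch over a synthetic waveform (the windows follow the 125-135 ms and 145-155 ms ranges quoted above; the waveform itself is invented for illustration):

```python
import numpy as np

def negative_peak(times_ms, erp_uv, window_ms):
    """Latency and amplitude of the most negative deflection inside a
    latency window — a standard read-out for N1-type components.
    """
    lo, hi = window_ms
    mask = (times_ms >= lo) & (times_ms <= hi)
    idx = np.argmin(erp_uv[mask])
    return float(times_ms[mask][idx]), float(erp_uv[mask][idx])

# Synthetic ERP: two overlapping negative Gaussian deflections centered
# near 130 ms ("N1a") and 150 ms ("N1b"), sampled at 1 kHz.
t = np.arange(0.0, 300.0, 1.0)  # ms
erp = (-3.0 * np.exp(-((t - 130.0) ** 2) / (2 * 6.0 ** 2))
       - 2.0 * np.exp(-((t - 150.0) ** 2) / (2 * 6.0 ** 2)))

n1a_lat, n1a_amp = negative_peak(t, erp, (125.0, 135.0))
n1b_lat, n1b_amp = negative_peak(t, erp, (145.0, 155.0))
```

Windowed peak picking like this is what allows the two temporally overlapping subcomponents to be separated before source localization.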


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Anja Pflug ◽  
Florian Gompf ◽  
Muthuraman Muthuraman ◽  
Sergiu Groppa ◽  
Christian Alexander Kell

Rhythmic actions benefit from synchronization with external events. Auditory-paced finger-tapping studies indicate that the two cerebral hemispheres preferentially control different rhythms. It is unclear whether left-lateralized processing of faster rhythms and right-lateralized processing of slower rhythms rest upon hemispheric timing differences that arise in the motor or sensory system, or whether the asymmetry results from lateralized sensorimotor interactions. We measured fMRI and MEG during symmetric finger tapping, in which fast tapping was defined as auditory-motor synchronization at 2.5 Hz and slow tapping corresponded to tapping to every fourth auditory beat (0.625 Hz). We demonstrate that the left auditory cortex preferentially represents the relatively fast rhythm in an amplitude modulation of low beta oscillations, while the right auditory cortex additionally represents the internally generated slower rhythm. We show that coupling of auditory-motor beta oscillations supports building a metric structure. Our findings reveal a strong contribution of sensory cortices to hemispheric specialization in action control.
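An amplitude modulation of beta oscillations at the tapping rate, of the kind reported here, can be checked by band-passing the signal, taking the analytic-signal envelope, and inspecting the envelope's spectrum at the tap frequency. A self-contained NumPy sketch on a synthetic trace (the 20 Hz carrier, beta band edges, and sampling rate are illustrative assumptions; real MEG analyses use dedicated toolboxes):

```python
import numpy as np

def envelope_peak_freq(x, fs, band=(13.0, 30.0)):
    """Frequency at which the amplitude envelope of the band-limited
    signal fluctuates most strongly (DC excluded).
    """
    n = x.size
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spec = np.fft.fft(x)
    spec[(np.abs(freqs) < band[0]) | (np.abs(freqs) > band[1])] = 0.0
    banded = np.real(np.fft.ifft(spec))
    # Analytic signal via one-sided spectrum (NumPy-only Hilbert transform).
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    envelope = np.abs(np.fft.ifft(np.fft.fft(banded) * h))
    env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    rfreqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return float(rfreqs[np.argmax(env_spec)])

# Synthetic trace: a 20 Hz beta carrier whose amplitude is modulated at
# the 2.5 Hz tapping rate used in the study.
fs = 500.0
t = np.arange(0.0, 8.0, 1.0 / fs)
beta = (1.0 + 0.5 * np.cos(2 * np.pi * 2.5 * t)) * np.sin(2 * np.pi * 20.0 * t)

mod_freq = envelope_peak_freq(beta, fs)
```

Recovering a 2.5 Hz peak in the beta envelope spectrum is the signature of the rhythm representation described in the abstract; applied per hemisphere, the same read-out would expose the reported left/right asymmetry.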

