Auditory-Somatosensory Multisensory Processing in Auditory Association Cortex: An fMRI Study

2002 · Vol 88 (1) · pp. 540-543
Author(s): John J. Foxe, Glenn R. Wylie, Antigona Martinez, Charles E. Schroeder, Daniel C. Javitt, et al.

Using high-field (3 Tesla) functional magnetic resonance imaging (fMRI), we demonstrate that auditory and somatosensory inputs converge in a subregion of human auditory cortex along the superior temporal gyrus. Further, simultaneous stimulation in both sensory modalities resulted in activity exceeding that predicted by summing the responses to the unisensory inputs, thereby showing multisensory integration in this convergence region. Recently, intracranial recordings in macaque monkeys have shown similar auditory-somatosensory convergence in a subregion of auditory cortex directly caudomedial to primary auditory cortex (area CM). The multisensory region identified in the present investigation may be the human homologue of CM. Our finding of auditory-somatosensory convergence in early auditory cortices contributes to mounting evidence for multisensory integration early in the cortical processing hierarchy, in brain regions that were previously assumed to be unisensory.
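A minimal sketch of the superadditivity criterion described above, assuming per-voxel response estimates (e.g., GLM betas); all values here are simulated for illustration:

```python
# Superadditive criterion for multisensory integration: a voxel is flagged
# when the response to combined auditory-somatosensory stimulation exceeds
# the sum of the unisensory responses. Betas are simulated here; in practice
# they would come from a fitted GLM.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000

beta_aud = rng.normal(1.0, 0.3, n_voxels)    # auditory-only response
beta_som = rng.normal(0.8, 0.3, n_voxels)    # somatosensory-only response
beta_multi = rng.normal(2.1, 0.4, n_voxels)  # simultaneous stimulation

# Superadditivity: multisensory > auditory + somatosensory
superadditive = beta_multi > (beta_aud + beta_som)
print(f"{superadditive.mean():.1%} of voxels exceed the additive prediction")
```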

2018
Author(s): Anna Dora Manca, Francesco Di Russo, Francesco Sigona, Mirko Grimaldi

How the brain encodes the speech acoustic signal into phonological representations (distinctive features) is a fundamental question for the neurobiology of language. Whether this process is characterized by tonotopic maps in primary or secondary auditory areas, with bilateral or leftward activity, remains a long-standing challenge. Magnetoencephalographic and ECoG studies have so far failed to reveal hierarchical and asymmetric signatures of speech processing. We employed high-density electroencephalography to map the Salento Italian vowel system onto cortical sources using the N1 auditory evoked component. We found evidence that the N1 is characterized by hierarchical and asymmetric indexes structuring vowel representations. We identified two N1 subcomponents: the typical N1 (N1a), peaking at 125-135 ms and localized bilaterally in the primary auditory cortex with a tangential distribution, and a late phase of the N1 (N1b), peaking at 145-155 ms and localized in the left superior temporal gyrus with a radial distribution. Notably, we showed that the processing of distinctive feature representations begins early in the primary auditory cortex and continues in the superior temporal gyrus along lateral-medial, anterior-posterior, and inferior-superior gradients. It is the dynamic interplay of the two auditory cortices, together with interaction effects between different distinctive features, that generates the categorical representations of vowels.
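As an illustration of the latency-window analysis implied by the two subcomponents, here is a minimal sketch that extracts peak latencies from a simulated evoked waveform (the window bounds follow the abstract; everything else is assumed):

```python
# Extract the two N1 subcomponents by their latency windows
# (N1a: 125-135 ms; N1b: 145-155 ms) as the most negative sample
# within each window. The waveform is a toy simulation.
import numpy as np

fs = 1000                              # sampling rate (Hz)
t = np.arange(0, 0.4, 1 / fs)          # 0-400 ms epoch
# Toy evoked response: two negative deflections near 130 ms and 150 ms
erp = (-2.0 * np.exp(-((t - 0.130) / 0.010) ** 2)
       - 1.2 * np.exp(-((t - 0.150) / 0.012) ** 2))

def peak_latency(signal, times, t_min, t_max):
    """Latency of the most negative sample within [t_min, t_max]."""
    mask = (times >= t_min) & (times <= t_max)
    idx = np.argmin(signal[mask])
    return times[mask][idx]

print(f"N1a peak: {peak_latency(erp, t, 0.125, 0.135) * 1000:.0f} ms")
print(f"N1b peak: {peak_latency(erp, t, 0.145, 0.155) * 1000:.0f} ms")
```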


2020
Author(s): L Feigin, G Tasaka, I Maor, A Mizrahi

The mouse auditory cortex comprises several auditory fields spanning the dorso-ventral axis of the temporal lobe. The ventral-most auditory field is the temporal association cortex (TeA), which remains largely unstudied. Using Neuropixels probes, we simultaneously recorded from the primary auditory cortex (AUDp), the secondary auditory cortex (AUDv), and TeA, characterizing neuronal responses to pure tones and frequency-modulated (FM) sweeps in awake, head-restrained mice. Compared to the primary and secondary auditory cortices, single-unit responses to pure tones in TeA were sparser, delayed, and prolonged. Responses to FMs were also sparser. Population analysis showed that the sparser responses in TeA render it less sensitive to pure tones, yet more sensitive to FMs. Under anesthesia, the distinct response signature of TeA changed considerably relative to the awake state, implying that responses in TeA are strongly modulated by non-feedforward connections. Together with the known connectivity profile of TeA, these findings suggest that sparse representation of sounds in TeA supports selectivity to higher-order features of sounds and more complex auditory computations.
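The abstract does not specify how sparseness was quantified; a common choice is the Vinje-Gallant lifetime sparseness index, sketched here on simulated firing rates:

```python
# Lifetime sparseness across pure-tone frequencies (Vinje & Gallant index):
# 1 = maximally sparse (responds to a single stimulus), 0 = uniform response.
# The rates below are illustrative, not data from the paper.
import numpy as np

def lifetime_sparseness(rates):
    """S = (1 - (mean r)^2 / mean(r^2)) / (1 - 1/n)."""
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    numerator = (rates.sum() / n) ** 2
    denominator = (rates ** 2).sum() / n
    return (1 - numerator / denominator) / (1 - 1 / n)

broad = [10, 9, 11, 10, 8, 12]   # AUDp-like: responds to many tones
sparse = [0, 0, 14, 1, 0, 0]     # TeA-like: responds to few tones
print(f"broadly tuned: {lifetime_sparseness(broad):.2f}")
print(f"sparsely tuned: {lifetime_sparseness(sparse):.2f}")
```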


1998 · Vol 10 (2) · pp. 167-177
Author(s): Linda L. Chao, Robert T. Knight

Neurological patients with focal lesions in the dorsolateral prefrontal cortex and age-matched control subjects were tested on an auditory version of the delayed-match-to-sample task employing environmental sounds. Subjects had to indicate whether a cue (S1) and a subsequent target sound (S2) were identical. On some trials, S1 and S2 were separated by a silent period of 5 sec. On other trials, the 5-sec delay between S1 and S2 was filled with irrelevant tone pips that served as distractors. Behaviorally, frontal patients were impaired by the presence of distractors. Electrophysiologically, patients generated enhanced primary auditory cortex-evoked responses to the tone pips, supporting a failure in inhibitory control of sensory processing after prefrontal damage. Intrahemispheric reductions of neural activity generated in the auditory association cortex and additional intrahemispheric reductions of attention-related frontal activity were also observed in the prefrontal patients. Together, these findings suggest that the dorsolateral prefrontal cortex is crucial for gating distracting information as well as maintaining distributed intrahemispheric neural activity during auditory working memory.
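A minimal sketch of the group comparison implied here, using simulated per-subject evoked amplitudes to the distractor tone pips (group sizes and values are assumptions):

```python
# Compare evoked-response amplitude to distractor tone pips between
# frontal patients and age-matched controls. A larger (enhanced) response
# in patients would indicate reduced prefrontal inhibitory gating.
# All amplitudes are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
controls = rng.normal(2.0, 0.5, 12)   # µV, per-subject mean amplitude
patients = rng.normal(2.8, 0.5, 12)   # enhanced response after PFC damage

t, p = stats.ttest_ind(patients, controls)
print(f"patients vs. controls: t = {t:.2f}, p = {p:.3g}")
```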


2006 · Vol 111 (5) · pp. 459-464
Author(s): Steven A. Chance, Manuel F. Casanova, Andy E. Switala, Timothy J. Crow, Margaret M. Esiri

2021 · Vol 15
Author(s): Agnès Trébuchon, F.-Xavier Alario, Catherine Liégeois-Chauvel

The posterior part of the superior temporal gyrus (STG) has long been known to be a crucial hub for auditory and language processing, at the crossroads of the functionally defined ventral and dorsal pathways. Anatomical studies have shown that this “auditory cortex” is composed of several cytoarchitectonic areas whose limits do not consistently match macro-anatomical landmarks such as gyral and sulcal borders. The only method to record and accurately distinguish neuronal activity from the different auditory sub-fields of the primary auditory cortex, located at the tip of Heschl’s gyrus and buried deep in the Sylvian fissure, is to use stereotaxically implanted depth electrodes (Stereo-EEG) during the pre-surgical evaluation of patients with epilepsy. In this perspective, we focus on how anatomo-functional delineation of Heschl’s gyrus (HG), the Planum Temporale (PT), the posterior part of the STG anterior to HG, the posterior superior temporal sulcus (STS), and the region at the parietal-temporal boundary commonly labeled “SPT” can be achieved using data from electrical cortical stimulation combined with electrophysiological recordings made while patients listen to pure tones and syllables. We show the differences in functional roles between the primary and non-primary auditory areas, in the left and the right hemispheres. We discuss how these findings help in understanding the auditory semiology of certain epileptic seizures and, more generally, the neural substrate of hemispheric specialization for language.


eLife · 2014 · Vol 3
Author(s): Rajnish P Rao, Falk Mielke, Evgeny Bobrov, Michael Brecht

Social interactions involve multi-modal signaling. Here, we study interacting rats to investigate audio-haptic coordination and multisensory integration in the auditory cortex. We find that facial touch is associated with an increased rate of ultrasonic vocalizations, which are emitted at the whisking rate (∼8 Hz) and preferentially initiated in the retraction phase of whisking. In a small subset of auditory cortex regular-spiking neurons, we observed excitatory and heterogeneous responses to ultrasonic vocalizations. Most fast-spiking neurons showed a stronger response to calls. Interestingly, facial touch-induced inhibition in the primary auditory cortex and off-responses after termination of touch were twofold stronger than responses to vocalizations. Further, touch modulated the responsiveness of auditory cortex neurons to ultrasonic vocalizations. In summary, facial touch during social interactions involves precisely orchestrated calling-whisking patterns. While ultrasonic vocalizations elicited a rather weak population response from the regular spikers, the modulation of neuronal responses by facial touch was remarkably strong.
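A minimal sketch of one way to quantify the phase preference reported here, using circular statistics on simulated vocalization-onset phases (the mapping of retraction onto phases near π is an illustrative convention, not the paper's definition):

```python
# Test whether vocalization onsets cluster at a particular phase of the
# ~8 Hz whisk cycle, via the circular mean (preferred phase) and resultant
# vector length (concentration). Phases are simulated.
import numpy as np

rng = np.random.default_rng(2)
# Onset phases in radians, concentrated near pi (retraction, by assumption)
phases = rng.vonmises(mu=np.pi, kappa=2.0, size=200)

resultant = np.mean(np.exp(1j * phases))
preferred_phase = np.angle(resultant)   # circular mean of onset phases
concentration = np.abs(resultant)       # 0 = uniform, 1 = perfectly locked

print(f"preferred phase: {preferred_phase:.2f} rad, "
      f"vector strength: {concentration:.2f}")
```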


2021
Author(s): Galit Agmon, Paz Har-Shai Yahav, Michal Ben-Shachar, Elana Zion Golumbic

Daily life is full of situations where many people converse at the same time. Under these noisy circumstances, individuals can employ different listening strategies to deal with the abundance of sounds around them. In this fMRI study we investigated how applying two different listening strategies, Selective vs. Distributed attention, affects the pattern of neural activity. Specifically, in a simulated ‘cocktail party’ paradigm, we compared brain activation patterns when listeners attend selectively to only one speaker and ignore all others, versus when they distribute their attention and attempt to follow two or four speakers at the same time. Results indicate that the two attention types activate a highly overlapping, bilateral fronto-temporal-parietal network of functionally connected regions. This network includes auditory association cortex (bilateral STG/STS) and higher-level regions related to speech processing and attention (bilateral IFG/insula, right MFG, left IPS). Within this network, responses in specific areas were modulated by the type of attention required. Specifically, auditory and speech-processing regions exhibited higher activity during Distributed attention, whereas fronto-parietal regions were activated more strongly during Selective attention. This pattern suggests that a common perceptual-attentional network is engaged when dealing with competing speech inputs, regardless of the specific task at hand. At the same time, local activity within nodes of this network varies when implementing different listening strategies, reflecting the different cognitive demands they impose. These results demonstrate the system’s flexibility to adapt its internal computations to accommodate different task requirements and listener goals.

Significance Statement: Hearing many people talk simultaneously poses substantial challenges for the human perceptual and cognitive systems. We compared neural activity when listeners applied two different listening strategies to deal with these competing inputs: attending selectively to one speaker vs. distributing attention among all speakers. A network of functionally connected brain regions, involved in auditory processing, language processing, and attentional control, was activated when applying both attention types. However, activity within this network was modulated by the type of attention required and the number of competing speakers. These results suggest a common ‘attention to speech’ network, providing the computational infrastructure to deal effectively with multi-speaker input, but with sufficient flexibility to implement different prioritization strategies and to adapt to different listener goals.
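A minimal sketch of the region-wise condition contrast described above, on simulated per-subject activation estimates (the region list follows the abstract; effect sizes and subject count are assumptions):

```python
# Paired contrast of activation between Distributed and Selective attention
# within nodes of the shared network. Betas are simulated: positive shifts
# mimic regions more active under Distributed attention, negative shifts
# regions more active under Selective attention.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
regions = ["STG/STS", "IFG/insula", "IPS"]
n_subjects = 20

for region, shift in zip(regions, [0.5, -0.4, -0.3]):
    selective = rng.normal(1.0, 0.4, n_subjects)
    distributed = selective + rng.normal(shift, 0.3, n_subjects)
    t, p = stats.ttest_rel(distributed, selective)
    direction = "Distributed > Selective" if t > 0 else "Selective > Distributed"
    print(f"{region}: {direction} (t = {t:.2f}, p = {p:.3g})")
```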


2001 · Vol 86 (5) · pp. 2616-2620
Author(s): Xiaoqin Wang, Siddhartha C. Kadia

A number of studies in various species have demonstrated that natural vocalizations generally produce stronger neural responses than do their time-reversed versions. The majority of neurons in the primary auditory cortex (A1) of marmoset monkeys responds more strongly to natural marmoset vocalizations than to the time-reversed vocalizations. However, it was unclear whether such differences in neural responses were simply due to the difference between the acoustic structures of natural and time-reversed vocalizations or whether they also resulted from the difference in behavioral relevance of both types of the stimuli. To address this issue, we have compared neural responses to natural and time-reversed marmoset twitter calls in A1 of cats with those obtained from A1 of marmosets using identical stimuli. It was found that the preference for natural marmoset twitter calls demonstrated in marmoset A1 was absent in cat A1. While both cortices responded approximately equally to time-reversed twitter calls, marmoset A1 responded much more strongly to natural twitter calls than did cat A1. This differential representation of marmoset vocalizations in two cortices suggests that experience-dependent and possibly species-specific mechanisms are involved in cortical processing of communication sounds.
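A minimal sketch of the stimulus manipulation and a preference measure implied here: time-reversing a waveform (which preserves its long-term spectrum) and computing a natural-vs-reversed preference index from simulated spike counts (this index definition is a common convention, not necessarily the paper's):

```python
# Time-reverse a call and quantify a neuron's preference for the natural
# version. Waveform and spike counts are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(4)
fs = 44100
call = rng.normal(0, 1, fs)            # stand-in for a twitter call
reversed_call = call[::-1]             # time-reversed version

# Time reversal leaves the long-term amplitude spectrum unchanged
assert np.allclose(np.abs(np.fft.rfft(call)),
                   np.abs(np.fft.rfft(reversed_call)))

# Spike counts per trial for one hypothetical A1 neuron
natural_counts = rng.poisson(12, 20)   # stronger response to natural call
reversed_counts = rng.poisson(7, 20)

pref_index = (natural_counts.mean() - reversed_counts.mean()) / (
    natural_counts.mean() + reversed_counts.mean())
print(f"preference index (natural vs. reversed): {pref_index:.2f}")
```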


1973 · Vol 38 (3) · pp. 320-325
Author(s): Ronald R. Tasker, L. W. Organ

Auditory hallucinations were produced by electrical stimulation of the human upper brain stem during stereotaxic operations. The responses were confined to stimulation of the inferior colliculus, the brachium of the inferior colliculus, the medial geniculate body, and the auditory radiations. Anatomical confirmation of an auditory site was obtained in one patient. The hallucination produced was a low-pitched, nonspecific auditory “paresthesia,” independent of the structure stimulated, the conditions of stimulation, or sonotopic factors. The effect was identical to that reported from stimulating the primary auditory cortex, and virtually all responses were contralateral. These observations have led to the following generalizations concerning electrical stimulation of the somesthetic, auditory, vestibular, and visual pathways within the human brain stem: the hallucination induced in each is the response to comparable conditions of stimulation, is nonspecific, independent of stimulation site, confined to the primary pathway concerned, chiefly contralateral, and identical to that induced by stimulating the corresponding primary sensory cortex. No sensory responses are found in the brain stem corresponding to those from the sensory association cortex.

