Tactile Temporal Processing in the Auditory Cortex

2010 ◽  
Vol 22 (6) ◽  
pp. 1201-1211 ◽  
Author(s):  
Nadia Bolognini ◽  
Costanza Papagno ◽  
Daniela Moroni ◽  
Angelo Maravita

Perception of the outside world results from integration of information simultaneously derived via multiple senses. Increasing evidence suggests that the neural underpinnings of multisensory integration extend into the early stages of sensory processing. In the present study, we investigated whether the superior temporal gyrus (STG), an auditory modality-specific area, is critical for processing tactile events. Transcranial magnetic stimulation (TMS) was applied over the left STG and the left primary somatosensory cortex (SI) at different time intervals (60, 120, and 180 msec) during a tactile temporal discrimination task (Experiment 1) and a tactile spatial discrimination task (Experiment 2). Tactile temporal processing was disrupted when TMS was applied to SI at 60 msec after tactile presentation, confirming the modality specificity of this region. Crucially, TMS over STG also affected tactile temporal processing, but at a delay of 180 msec. In both cases, the impairment was limited to contralateral touches and was due to reduced perceptual sensitivity. In contrast, tactile spatial processing was impaired only by TMS over SI at 60–120 msec. These findings demonstrate the causal involvement of auditory areas in processing the duration of somatosensory events, suggesting that STG might play a supramodal role in temporal perception. Furthermore, the involvement of auditory cortex in somatosensory processing supports the view that multisensory integration occurs at an early stage of cortical processing.
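The reported "perceptual sensitivity" is the standard signal-detection quantity; a minimal sketch of how such a sensitivity index (d′) is computed is shown below. The yes/no framing, function names, and example rates are illustrative assumptions, not the authors' analysis.

```python
# Minimal sketch (not the authors' analysis code): perceptual sensitivity is
# commonly indexed by d' = z(hit rate) - z(false-alarm rate).
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d'; rates should be kept away from exactly 0 and 1."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical illustration: a lower hit rate at an unchanged false-alarm rate
# (e.g., under TMS) reduces d' without implying a criterion shift.
print(d_prime(0.85, 0.20))  # ~1.88 (baseline)
print(d_prime(0.70, 0.20))  # ~1.37 (reduced sensitivity)
```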

2002 ◽  
Vol 88 (1) ◽  
pp. 540-543 ◽  
Author(s):  
John J. Foxe ◽  
Glenn R. Wylie ◽  
Antigona Martinez ◽  
Charles E. Schroeder ◽  
Daniel C. Javitt ◽  
...  

Using high-field (3 Tesla) functional magnetic resonance imaging (fMRI), we demonstrate that auditory and somatosensory inputs converge in a subregion of human auditory cortex along the superior temporal gyrus. Further, simultaneous stimulation in both sensory modalities resulted in activity exceeding that predicted by summing the responses to the unisensory inputs, thereby showing multisensory integration in this convergence region. Recently, intracranial recordings in macaque monkeys have shown similar auditory-somatosensory convergence in a subregion of auditory cortex directly caudomedial to primary auditory cortex (area CM). The multisensory region identified in the present investigation may be the human homologue of CM. Our finding of auditory-somatosensory convergence in early auditory cortices contributes to mounting evidence for multisensory integration early in the cortical processing hierarchy, in brain regions that were previously assumed to be unisensory.
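The integration criterion described here, a bimodal response exceeding the sum of the two unisensory responses, can be stated compactly. The sketch below is a generic illustration of that superadditivity test, not the authors' fMRI pipeline; the per-voxel framing, names, and simulated values are assumptions.

```python
# Generic superadditivity check: flag voxels where the auditory-somatosensory
# (bimodal) response exceeds the sum of the unisensory responses.
import numpy as np

def superadditive(bimodal: np.ndarray, auditory: np.ndarray, somatosensory: np.ndarray) -> np.ndarray:
    """Boolean mask of voxels satisfying AS > A + S."""
    return bimodal > (auditory + somatosensory)

rng = np.random.default_rng(0)
a = rng.normal(1.0, 0.2, 100)                 # simulated auditory responses
s = rng.normal(0.8, 0.2, 100)                 # simulated somatosensory responses
combined = a + s + rng.normal(0.1, 0.2, 100)  # simulated bimodal responses
print(superadditive(combined, a, s).mean())   # fraction of "integrative" voxels
```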


2013 ◽  
Vol 25 (9) ◽  
pp. 1553-1562 ◽  
Author(s):  
Merav Sabri ◽  
Colin Humphries ◽  
Matthew Verber ◽  
Jain Mangalathu ◽  
Anjali Desai ◽  
...  

In the visual modality, perceptual demand on a goal-directed task has been shown to modulate the extent to which irrelevant information can be disregarded at a sensory-perceptual stage of processing. In the auditory modality, the effect of perceptual demand on neural representations of task-irrelevant sounds is unclear. We compared simultaneous ERPs and fMRI responses associated with task-irrelevant sounds across parametrically modulated perceptual task demands in a dichotic-listening paradigm. Participants performed a signal detection task in one ear (Attend ear) while ignoring task-irrelevant syllable sounds in the other ear (Ignore ear). Results revealed modulation of syllable processing by auditory perceptual demand in a region of interest in the middle left superior temporal gyrus and in negative ERP activity 130–230 msec post-stimulus onset. Increasing the perceptual demand in the Attend ear was associated with a reduced neural response to task-irrelevant sounds in both fMRI and ERP measures. These findings support a selection model whereby ongoing perceptual demands modulate task-irrelevant sound processing in auditory cortex.


2018 ◽  
Vol 30 (12) ◽  
pp. 1858-1869 ◽  
Author(s):  
Guannan Shen ◽  
Nathan J. Smyk ◽  
Andrew N. Meltzoff ◽  
Peter J. Marshall

The focus of the current study is on a particular aspect of tactile perception: categorical segmentation of the body surface into discrete body parts. The mismatch negativity (MMN) has been shown to be sensitive to categorical boundaries and language experience in the auditory modality. Here we recorded the somatosensory MMN (sMMN) using two tactile oddball protocols and compared sMMN amplitudes elicited by within- and across-boundary oddball pairs. Both protocols employed the identity MMN method, which controls for responsivity at each body location. In the first protocol, we investigated the categorical segmentation of tactile space at the wrist by presenting pairs of tactile oddball stimuli across equal spatial distances, either across the wrist or within the forearm. The amplitude of the sMMN elicited by stimuli presented across the wrist boundary was significantly greater than for stimuli presented within the forearm, suggesting a categorical effect at an early stage of somatosensory processing. The second protocol was designed to investigate the generality of this MMN effect and involved three digits on one hand. The amplitude of the sMMN elicited by a contrast between the third digit and the thumb was significantly larger than that for a contrast between the third and fifth digits, suggesting a functional boundary effect that may derive from the way objects are typically grasped. These findings demonstrate that the sMMN is a useful index of somatosensory spatial discrimination that can be used to study body part categories.
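For readers unfamiliar with the identity-MMN approach mentioned above, the sketch below illustrates its basic logic: the response to a given body site when it is the rare (deviant) stimulus is compared with the response to the same site when it is the frequent (standard) stimulus in the complementary block, so site-specific responsivity cancels in the difference wave. Array shapes, names, and simulated data are illustrative assumptions, not the authors' analysis.

```python
# Identity-MMN difference wave: same physical stimulus compared across its
# roles as deviant vs. standard, controlling for responsivity at that site.
import numpy as np

def identity_mmn(erp_as_deviant: np.ndarray, erp_as_standard: np.ndarray) -> np.ndarray:
    """Difference wave (deviant minus standard) for one stimulation site; inputs are trials x time."""
    return erp_as_deviant.mean(axis=0) - erp_as_standard.mean(axis=0)

rng = np.random.default_rng(1)
deviant = rng.normal(0.0, 1.0, (80, 300))    # 80 deviant trials x 300 time samples
standard = rng.normal(0.0, 1.0, (400, 300))  # same site as the frequent standard
mmn_wave = identity_mmn(deviant, standard)   # compare its amplitude across-wrist vs. within-forearm
```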


2014 ◽  
Vol 26 (1) ◽  
pp. 334-345 ◽  
Author(s):  
Xiaoqing Zhu ◽  
Xia Liu ◽  
Fanfan Wei ◽  
Fang Wang ◽  
Michael M. Merzenich ◽  
...  

Low-level lead exposure is a risk factor for cognitive and learning disabilities in children and has been specifically associated with deficits in auditory temporal processing that impair aural language and reading abilities. Here, we show that rats exposed to low levels of lead in early life display a significant behavioral impairment in an auditory temporal rate discrimination task. Lead exposure also results in a degradation of the neuronal repetition-rate following capacity and response synchronization in primary auditory cortex. A modified go/no-go repetition-rate discrimination task applied in adult animals for ∼50 days nearly restores these lead-induced deficits in cortical temporal fidelity to normal. Cortical expression of parvalbumin, brain-derived neurotrophic factor, and the NMDA receptor subunits NR2a and NR2b, which is down-regulated in lead-exposed animals, is also partially restored with training. These studies in an animal model identify the primary auditory cortex as a novel target for low-level lead exposure and demonstrate that perceptual training can ameliorate lead-induced deficits in cortical discrimination between sound sequences.
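The abstract does not state which synchronization metric was used; a common choice for quantifying how well cortical responses follow a repeated stimulus is vector strength, sketched below purely as an assumed illustration.

```python
# Vector strength: how tightly spikes phase-lock to the stimulus repetition
# period (1 = perfect locking, 0 = no locking). Assumed metric for illustration.
import numpy as np

def vector_strength(spike_times: np.ndarray, period: float) -> float:
    """Vector strength in [0, 1] for spike times (s) relative to a repetition period (s)."""
    phases = 2 * np.pi * (spike_times % period) / period
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)

# Example: spikes locked to a 12.5 pulses-per-second train (period = 80 ms).
print(vector_strength(np.array([0.081, 0.162, 0.239, 0.321]), 0.080))  # close to 1
```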


2020 ◽  
Vol 124 (6) ◽  
pp. 1798-1814
Author(s):  
Ralph E. Beitel ◽  
Christoph E. Schreiner ◽  
Maike Vollmer

Monkeys’ ability to generalize amplitude-modulation discrimination to nontrained carriers was limited to one octave below and 0.6 octave above the trained carrier frequency. Asymmetric generalization was paralleled by sharpening in cortical spectral tuning and enhanced firing-rate contrast between rewarded and nonrewarded sinusoidally amplitude-modulated stimuli at carriers near the trained frequency. The spectral content of the training stimulus specified spectral and temporal plasticity that may provide a neural substrate for limitations in generalization of temporal discrimination learning.
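As a worked example of the octave metric in this summary, the distance between a probe carrier f and the trained carrier f0 is log2(f / f0); the trained frequency used below is an arbitrary illustration, not a value from the study.

```python
# Octave distance between a probe carrier and the trained carrier frequency.
import math

def octave_distance(f_probe_hz: float, f_trained_hz: float) -> float:
    """Signed distance in octaves; negative = below the trained carrier."""
    return math.log2(f_probe_hz / f_trained_hz)

f0 = 8000.0                            # hypothetical trained carrier (Hz)
print(octave_distance(4000.0, f0))     # -1.0  -> at the reported lower generalization limit
print(octave_distance(12126.0, f0))    # ~+0.6 -> at the reported upper generalization limit
```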


2019 ◽  
Vol 63 (3) ◽  
pp. 635-659
Author(s):  
Jingxin Luo ◽  
Vivian Guo Li ◽  
Peggy Pik Ki Mok

The study investigates the perception of vowel length contrasts in Cantonese by native Mandarin speakers with varying degrees of experience in Cantonese: naïve listeners (no exposure), inexperienced learners (~1 year), and experienced learners (~5 years). While vowel length contrasts do not exist in Mandarin, they are, to some extent, exploited in English, the second language (L2) of all the participants. Using an AXB discrimination task, we investigate how native and L2 phonological knowledge affects the acquisition of vowel length contrasts in a third language (L3). The results revealed that all participant groups could discriminate three contrastive vowel pairs (/aː/–/ɐ/, /ɛː/–/e/, /ɔː/–/o/), but their performance was influenced by the degree of Cantonese exposure, particularly for learners in the early stage of acquisition. In addition to vowel quality differences, durational differences were proposed to explain the perceptual patterns. Furthermore, L2 English perception of the participants was found to modulate the perception of L3 Cantonese vowel length contrasts. Our findings demonstrate the bi-directional interaction between languages acquired at different stages, and provide concrete data to evaluate some speech acquisition models.
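For clarity, the AXB paradigm referred to above presents three tokens per trial (A, X, B), and the listener judges whether the middle token X matches A or B. The sketch below scores such a trial generically; the names and example categories are assumptions, not the authors' materials.

```python
# Generic AXB trial scoring (an assumed illustration, not the study's materials).
def axb_correct(a_cat: str, x_cat: str, b_cat: str, response: str) -> bool:
    """Return True if the listener's "A"/"B" choice matches the category of X."""
    target = "A" if x_cat == a_cat else "B"
    return response == target

# Example trial: A is a long vowel, B is its short counterpart, X is short.
print(axb_correct("long", "short", "short", "B"))  # True
```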


1999 ◽  
Vol 82 (5) ◽  
pp. 2346-2357 ◽  
Author(s):  
Mitchell Steinschneider ◽  
Igor O. Volkov ◽  
M. Daniel Noh ◽  
P. Charles Garell ◽  
Matthew A. Howard

Voice onset time (VOT) is an important parameter of speech that denotes the time interval between consonant onset and the onset of low-frequency periodicity generated by rhythmic vocal cord vibration. Voiced stop consonants (/b/, /g/, and /d/) in syllable-initial position are characterized by short VOTs, whereas unvoiced stop consonants (/p/, /k/, and /t/) contain prolonged VOTs. As the VOT is increased in incremental steps, perception rapidly changes from a voiced stop consonant to an unvoiced consonant at an interval of 20–40 ms. This abrupt change in consonant identification is an example of categorical speech perception and is a central feature of phonetic discrimination. This study tested the hypothesis that VOT is represented within auditory cortex by transient responses time-locked to consonant and voicing onset. Auditory evoked potentials (AEPs) elicited by stop consonant-vowel (CV) syllables were recorded directly from Heschl's gyrus, the planum temporale, and the superior temporal gyrus in three patients undergoing evaluation for surgical remediation of medically intractable epilepsy. Voiced CV syllables elicited a triphasic sequence of field potentials within Heschl's gyrus. AEPs evoked by unvoiced CV syllables contained additional response components time-locked to voicing onset. Syllables with a VOT of 40, 60, or 80 ms evoked components time-locked to consonant release and voicing onset. In contrast, the syllable with a VOT of 20 ms evoked a markedly diminished response to voicing onset and elicited an AEP very similar in morphology to that evoked by the syllable with a 0-ms VOT. Similar response features were observed in the AEPs evoked by click trains. In this case, there was a marked decrease in amplitude of the transient response to the second click in trains with interpulse intervals of 20–25 ms. Speech-evoked AEPs recorded from the posterior superior temporal gyrus lateral to Heschl's gyrus displayed comparable response features, whereas field potentials recorded from three locations in the planum temporale did not contain components time-locked to voicing onset. This study demonstrates that VOT is at least partially represented in primary and specific secondary auditory cortical fields by synchronized activity time-locked to consonant release and voicing onset. Furthermore, AEPs exhibit features that may facilitate categorical perception of stop consonants, and these response patterns appear to be based on temporal processing limitations within auditory cortex. Demonstrations of similar speech-evoked response patterns in animals support a role for these experimental models in clarifying selected features of speech encoding.
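The categorical boundary described in this abstract can be caricatured with a simple threshold rule; the single 30-ms boundary below is a simplifying assumption chosen within the reported 20–40 ms range, purely for illustration.

```python
# Toy illustration of categorical perception along the VOT continuum:
# below the boundary the stop is heard as voiced, above it as unvoiced.
def perceived_category(vot_ms: float, boundary_ms: float = 30.0) -> str:
    """Assumed single-boundary rule; real category boundaries vary by listener and place of articulation."""
    return "voiced (/b/, /d/, /g/)" if vot_ms < boundary_ms else "unvoiced (/p/, /t/, /k/)"

for vot in (0, 20, 40, 60, 80):
    print(vot, "ms ->", perceived_category(vot))
```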


2017 ◽  
Vol 20 (6) ◽  
pp. 1129-1136 ◽  
Author(s):  
Carlos Pinto ◽  
Inês Fortes ◽  
Armando Machado

2008 ◽  
Vol 20 (12) ◽  
pp. 2185-2197 ◽  
Author(s):  
Jennifer T. Coull ◽  
Bruno Nazarian ◽  
Franck Vidal

The temporal discrimination paradigm requires subjects to compare the duration of a probe stimulus to that of a sample previously stored in working or long-term memory, thus providing an index of timing that is independent of a motor response. However, the estimation process itself comprises several component cognitive processes, including timing, storage, retrieval, and comparison of durations. Previous imaging studies have attempted to disentangle these components by simply measuring brain activity during early versus late scanning epochs. We aim to improve the temporal resolution and precision of this approach by using rapid event-related functional magnetic resonance imaging to time-lock the hemodynamic response to presentation of the sample and probe stimuli themselves. Compared to a control (color-estimation) task, which was matched in terms of difficulty, sustained attention, and motor preparation requirements, we found selective activation of the left putamen for the storage (“encoding”) of stimulus duration into working memory (WM). Moreover, increased putamen activity was linked to enhanced timing performance, suggesting that the level of putamen activity may modulate the depth of temporal encoding. Retrieval and comparison of stimulus duration in WM selectively activated the right superior temporal gyrus. Finally, the supplementary motor area was equally active during both sample and probe stages of the task, suggesting a fundamental role in timing the duration of a stimulus that is currently unfolding in time.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Taishi Hosaka ◽  
Marino Kimura ◽  
Yuko Yotsumoto

We have a keen sensitivity when it comes to the perception of our own voices. We can detect not only the differences between ourselves and others, but also slight modifications of our own voices. Here, we examined the neural correlates underlying such sensitive perception of one's own voice. In the experiments, we modified the subjects' own voices using five types of filters. The subjects rated the similarity of the presented voices to their own. We compared BOLD (blood oxygen level dependent) signals between the voices that subjects rated as least similar to their own voice and those they rated as most similar. The contrast revealed that the bilateral superior temporal gyrus exhibited greater activation while subjects listened to the voice least similar to their own and lesser activation while they listened to the voice most similar to their own. Our results suggest that the superior temporal gyrus is involved in neural sharpening for one's own voice. The lesser activation observed for voices similar to the subject's own indicates that these areas respond not only to the differences between self and others, but also to the finer details of one's own voice.

