Perceptual Demand Modulates Activation of Human Auditory Cortex in Response to Task-irrelevant Sounds

2013 ◽  
Vol 25 (9) ◽  
pp. 1553-1562 ◽  
Author(s):  
Merav Sabri ◽  
Colin Humphries ◽  
Matthew Verber ◽  
Jain Mangalathu ◽  
Anjali Desai ◽  
...  

In the visual modality, perceptual demand on a goal-directed task has been shown to modulate the extent to which irrelevant information can be disregarded at a sensory-perceptual stage of processing. In the auditory modality, the effect of perceptual demand on neural representations of task-irrelevant sounds is unclear. We compared simultaneous ERPs and fMRI responses associated with task-irrelevant sounds across parametrically modulated perceptual task demands in a dichotic-listening paradigm. Participants performed a signal detection task in one ear (Attend ear) while ignoring task-irrelevant syllable sounds in the other ear (Ignore ear). Results revealed modulation of syllable processing by auditory perceptual demand in an ROI in middle left superior temporal gyrus and in negative ERP activity 130–230 msec post-stimulus onset. Increasing the perceptual demand in the Attend ear was associated with a reduced neural response in both fMRI and ERP to task-irrelevant sounds. These findings support a selection model in which ongoing perceptual demands modulate task-irrelevant sound processing in auditory cortex.
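Performance in a signal detection task of this kind is typically summarized as the sensitivity index d'. As a minimal illustration (not the authors' analysis code), here is a Python sketch computing d' per demand level from hypothetical hit and false-alarm counts; all values are invented:

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' with a log-linear correction for 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for three perceptual demand levels in the Attend ear
for demand, (h, m, fa, cr) in {"low": (46, 4, 5, 45),
                               "medium": (40, 10, 8, 42),
                               "high": (33, 17, 12, 38)}.items():
    print(f"{demand:>6}: d' = {d_prime(h, m, fa, cr):.2f}")
```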

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Taishi Hosaka ◽  
Marino Kimura ◽  
Yuko Yotsumoto

Abstract We have a keen sensitivity when it comes to the perception of our own voices. We can detect not only the differences between ourselves and others, but also slight modifications of our own voices. Here, we examined the neural correlates underlying such sensitive perception of one's own voice. In the experiments, we modified the subjects' own voices by using five types of filters. The subjects rated the similarity of the presented voices to their own. We compared BOLD (Blood Oxygen Level Dependent) signals between the voices that subjects rated as least similar to their own and those they rated as most similar. The contrast revealed that the bilateral superior temporal gyrus exhibited greater activation while listening to the voices rated least similar to the subject's own and less activation while listening to those rated most similar. Our results suggest that the superior temporal gyrus is involved in neural sharpening for one's own voice. The weaker activation elicited by voices similar to the subject's own indicates that these areas respond not only to differences between self and other, but also to the finer details of one's own voice.
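The key analysis is a within-subject contrast of BOLD responses between the least- and most-similar voice conditions. A minimal sketch of such a contrast on per-subject ROI beta estimates; the sample size and values are simulated for illustration, not taken from the study:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects = 16  # hypothetical sample size

# Simulated per-subject ROI beta estimates (e.g., bilateral STG)
beta_least_similar = rng.normal(0.8, 0.3, n_subjects)  # least self-like voice
beta_most_similar = rng.normal(0.4, 0.3, n_subjects)   # most self-like voice

t, p = ttest_rel(beta_least_similar, beta_most_similar)
print(f"least > most similar: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```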


2006 ◽  
Vol 18 (5) ◽  
pp. 689-700 ◽  
Author(s):  
M. Sabri ◽  
E. Liebenthal ◽  
E. J. Waldron ◽  
D. A. Medler ◽  
J. R. Binder

Little is known about the neural mechanisms that control attentional modulation of deviance detection in the auditory modality. In this study, we manipulated the difficulty of a primary task to test the relation between task difficulty and the detection of infrequent, task-irrelevant deviant (D) tones (1300 Hz) presented among repetitive standard (S) tones (1000 Hz). Functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) were recorded simultaneously from 21 subjects performing a two-alternative forced-choice duration discrimination task (short and long tones of equal probability). The duration of the short tone was always 50 msec. The duration of the long tone was 100 msec in the easy task and 60 msec in the difficult task. As expected, response accuracy decreased and response time (RT) increased in the difficult compared with the easy task. Performance was also poorer for D than for S tones, indicating distraction by task-irrelevant frequency information on trials involving D tones. In the difficult task, an amplitude increase was observed in the difference waves for N1 and P3a, ERP components associated with increased attention to deviant sounds. The mismatch negativity (MMN) response, associated with passive deviant detection, was larger in the easy task, demonstrating the susceptibility of this component to attentional manipulations. The fMRI contrast D > S in the difficult task revealed activation in the right superior temporal gyrus (STG), extending ventrally into the superior temporal sulcus, suggesting this region's involvement in involuntary attention shifting toward unattended, infrequent sounds. Conversely, passive deviance detection, as reflected by the MMN, was associated with more dorsal activation in the STG. These results are consistent with the view that the dorsal STG region is responsive to mismatches between the memory trace of the standard and the incoming deviant sound, whereas the ventral STG region is activated by involuntary shifts of attention to task-irrelevant auditory features.
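The MMN is conventionally read off the deviant-minus-standard difference wave. A toy sketch, assuming baseline-corrected single-channel epochs (all values simulated; the sampling rate and window are assumptions, not the study's parameters):

```python
import numpy as np

fs = 500                      # sampling rate (Hz), assumed
times = np.arange(-0.1, 0.5, 1 / fs)

# Simulated epochs: trials x samples, baseline-corrected, one channel (e.g., Fz)
rng = np.random.default_rng(1)
standard_epochs = rng.normal(0, 1, (200, times.size))
deviant_epochs = rng.normal(0, 1, (40, times.size))

# MMN is measured on the deviant-minus-standard difference wave
difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Mean amplitude in a typical MMN window (100-250 msec post-onset)
window = (times >= 0.10) & (times <= 0.25)
print(f"MMN mean amplitude: {difference_wave[window].mean():.2f} (a.u.)")
```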


2010 ◽  
Vol 22 (6) ◽  
pp. 1201-1211 ◽  
Author(s):  
Nadia Bolognini ◽  
Costanza Papagno ◽  
Daniela Moroni ◽  
Angelo Maravita

Perception of the outside world results from integration of information simultaneously derived via multiple senses. Increasing evidence suggests that the neural underpinnings of multisensory integration extend into the early stages of sensory processing. In the present study, we investigated whether the superior temporal gyrus (STG), an auditory modality-specific area, is critical for processing tactile events. Transcranial magnetic stimulation (TMS) was applied over the left STG and the left primary somatosensory cortex (SI) at different time intervals (60, 120, and 180 msec) during a tactile temporal discrimination task (Experiment 1) and a tactile spatial discrimination task (Experiment 2). Tactile temporal processing was disrupted when TMS was applied to SI at 60 msec after tactile presentation, confirming the modality specificity of this region. Crucially, TMS over STG also affected tactile temporal processing, but at a 180-msec delay. In both cases, the impairment was limited to contralateral touches and was due to reduced perceptual sensitivity. In contrast, tactile spatial processing was impaired only by TMS over SI at 60–120 msec. These findings demonstrate the causal involvement of auditory areas in processing the duration of somatosensory events, suggesting that STG might play a supramodal role in temporal perception. Furthermore, the involvement of auditory cortex in somatosensory processing supports the view that multisensory integration occurs at an early stage of cortical processing.


2001 ◽  
Vol 15 (4) ◽  
pp. 221-240 ◽  
Author(s):  
Kent A. Kiehl ◽  
Kristin R. Laurens ◽  
Timothy L. Duty ◽  
Bruce B. Forster ◽  
Peter F. Liddle

Abstract Whole brain event-related functional magnetic resonance imaging (fMRI) techniques were employed to elucidate the cerebral sites involved in processing rare target and novel visual stimuli during an oddball discrimination task. The analyses of the hemodynamic response to the visual target stimuli revealed a distributed network of neural sources in anterior and posterior cingulate, inferior and middle frontal gyrus, bilateral parietal lobules, anterior superior temporal gyrus, amygdala, and thalamus. The analyses of the hemodynamic response for the visual novel stimuli revealed an extensive network of neural activations in occipital lobes and posterior temporal lobes, bilateral parietal lobules, and lateral frontal cortex. The hemodynamic responses associated with processing target and novel stimuli in the visual modality were also compared with data from an analogous study in the auditory modality (Kiehl et al., 2001). Similar patterns of activation were observed for target and novel stimuli in both modalities, but there were some significant differences. The results support the hypothesis that target detection and novelty processing are associated with activation in widespread neural areas, suggesting that the brain adopts a strategy of activating many potentially useful brain regions despite the low probability that these brain regions are necessary for task performance.
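In event-related fMRI of this kind, the response to rare events is typically modeled by convolving a stimulus onset series with a canonical hemodynamic response function. A hedged sketch of that step; the acquisition parameters, onsets, and SPM-style double-gamma HRF parameters below are illustrative assumptions, not the study's specification:

```python
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 200                       # assumed acquisition parameters
frame_times = np.arange(n_scans) * tr

def canonical_hrf(t):
    """Double-gamma canonical HRF (SPM-style shape parameters)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Hypothetical onsets (s) of rare target events in the oddball stream
target_onsets = np.array([12.0, 47.5, 83.0, 121.5, 160.0])

dt = 0.1                                     # high-resolution grid for convolution
hi_res_t = np.arange(0, n_scans * tr, dt)
stick = np.zeros_like(hi_res_t)
stick[np.searchsorted(hi_res_t, target_onsets)] = 1.0

# Convolve onset sticks with the HRF, then downsample to scan times
regressor = np.convolve(stick, canonical_hrf(np.arange(0, 32, dt)))[: hi_res_t.size]
target_regressor = np.interp(frame_times, hi_res_t, regressor)
print(target_regressor[:10].round(3))
```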


2001 ◽  
Vol 15 (4) ◽  
pp. 256-274 ◽  
Author(s):  
Caterina Pesce ◽  
Rainer Bösel

Abstract In the present study we explored the focusing of visuospatial attention in subjects practicing and not practicing activities with high attentional demands. Similar to the studies of Castiello and Umiltà (e.g., 1990), our experimental procedure was a variation of Posner's (1980) basic paradigm for exploring covert orienting of visuospatial attention. In a simple RT task, a peripheral cue of varying size was presented unilaterally or bilaterally from a central fixation point and followed by a target at different stimulus onset asynchronies (SOAs). The target could occur validly inside the cue or invalidly outside the cue, with varying spatial relation to its boundary. Event-related brain potentials (ERPs) and reaction times (RTs) were recorded to target stimuli under the different task conditions. RT and ERP findings showed converging aspects as well as dissociations. Electrophysiological results revealed an amplitude modulation of the ERPs in the early and late Nd time interval at both anterior and posterior scalp sites, which seems to be related to the effects of peripheral informative cues as well as to attentional expertise. The results were as follows: (1) Shorter-latency effects confirm the positive-going amplitude enhancement elicited by unilateral peripheral cues and strengthen the criticism against the neutrality of spatially nonpredictive peripheral cueing of all possible target locations, which is often presumed in behavioral studies. (2) Longer-latency effects show that subjects with attentional expertise modulate the distribution of attentional resources in visual space differently than nonexperienced subjects. Skilled practice may minimize attentional costs by automatizing the use of an attentional span adapted to the most frequent task demands and by endogenously increasing the allocation of resources to cope with less usual attending conditions.
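The behavioral measure in Posner-style cueing is the validity effect, the RT cost of invalid relative to valid cues at each SOA. An illustrative computation on made-up trial data (SOAs, labels, and RTs are all invented):

```python
import pandas as pd

# Hypothetical trial-level data: SOA (msec), cue validity, and RT (msec)
trials = pd.DataFrame({
    "soa": [100, 100, 100, 100, 500, 500, 500, 500],
    "validity": ["valid", "invalid"] * 4,
    "rt": [312, 348, 305, 351, 298, 322, 301, 319],
})

# Mean RT per SOA x validity cell; validity effect = invalid minus valid
mean_rt = trials.groupby(["soa", "validity"])["rt"].mean().unstack()
mean_rt["validity_effect"] = mean_rt["invalid"] - mean_rt["valid"]
print(mean_rt)
```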


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Candice Frances ◽  
Eugenia Navarra-Barindelli ◽  
Clara D. Martin

Abstract Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality as well as the interplay between type of similarity and modality remain largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. Results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into current processing models.
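Because the design fully crosses orthographic and phonological similarity, cell means per factor combination are the natural first summary. A toy sketch with invented accuracy values (the column names and numbers are illustrative only):

```python
import pandas as pd

# Hypothetical per-condition lexical decision accuracy, auditory modality
trials = pd.DataFrame({
    "orthographic": ["high", "high", "low", "low"] * 2,
    "phonological": ["high", "low", "high", "low"] * 2,
    "accuracy": [0.94, 0.90, 0.95, 0.88, 0.92, 0.89, 0.96, 0.87],
})

# Mean accuracy per cell of the fully crossed 2 x 2 similarity design
cell_means = trials.pivot_table(index="orthographic",
                                columns="phonological",
                                values="accuracy")
print(cell_means)
```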


2019 ◽  
Vol 30 (4) ◽  
pp. 2542-2554 ◽  
Author(s):  
Maryam Ghaleh ◽  
Elizabeth H Lacey ◽  
Mackenzie E Fama ◽  
Zainab Anbari ◽  
Andrew T DeMarco ◽  
...  

Abstract Two maintenance mechanisms with separate neural systems have been suggested for verbal working memory: articulatory rehearsal and non-articulatory maintenance. Although lesion data would be key to understanding the essential neural substrates of these systems, there is little evidence from lesion studies that the two proposed mechanisms crucially rely on different neuroanatomical substrates. We examined 39 healthy adults and 71 individuals with chronic left-hemisphere stroke to determine if verbal working memory tasks with varying demands would rely on dissociable brain structures. Multivariate lesion–symptom mapping was used to identify the brain regions involved in each task, controlling for spatial working memory scores. Maintenance of verbal information relied on distinct brain regions depending on task demands: sensorimotor cortex under higher demands and superior temporal gyrus (STG) under lower demands. Inferior parietal cortex and posterior STG were involved under both low and high demands. These results suggest that maintenance of auditory information preferentially relies on auditory-phonological storage in the STG via a non-articulatory maintenance mechanism when demands are low. Under higher demands, sensorimotor regions are crucial for the articulatory rehearsal process, which reduces the reliance on STG for maintenance. Lesions to either of these regions impair maintenance of verbal information preferentially under the appropriate task conditions.
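The study used multivariate lesion–symptom mapping; as a simplified stand-in, the mass-univariate variant below illustrates the basic logic of relating voxelwise lesion status to behavioral scores. All data are simulated, and the minimum-overlap threshold is an arbitrary choice for the sketch:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n_patients, n_voxels = 71, 1000            # 71 stroke patients, toy voxel count

lesions = rng.random((n_patients, n_voxels)) < 0.15  # simulated binary lesion maps
scores = rng.normal(0, 1, n_patients)                # task score per patient

# Mass-univariate pass: compare scores of lesioned vs. spared patients per voxel
t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned = scores[lesions[:, v]]
    spared = scores[~lesions[:, v]]
    if lesioned.size >= 5:                           # require minimum lesion overlap
        t_map[v] = ttest_ind(lesioned, spared).statistic

print(f"voxels tested: {np.isfinite(t_map).sum()}, min t: {np.nanmin(t_map):.2f}")
```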


1999 ◽  
Vol 82 (5) ◽  
pp. 2346-2357 ◽  
Author(s):  
Mitchell Steinschneider ◽  
Igor O. Volkov ◽  
M. Daniel Noh ◽  
P. Charles Garell ◽  
Matthew A. Howard

Voice onset time (VOT) is an important parameter of speech that denotes the time interval between consonant onset and the onset of low-frequency periodicity generated by rhythmic vocal cord vibration. Voiced stop consonants (/b/, /g/, and /d/) in syllable initial position are characterized by short VOTs, whereas unvoiced stop consonants (/p/, /k/, and /t/) contain prolonged VOTs. As the VOT is increased in incremental steps, perception rapidly changes from a voiced stop consonant to an unvoiced consonant at an interval of 20–40 ms. This abrupt change in consonant identification is an example of categorical speech perception and is a central feature of phonetic discrimination. This study tested the hypothesis that VOT is represented within auditory cortex by transient responses time-locked to consonant and voicing onset. Auditory evoked potentials (AEPs) elicited by stop consonant-vowel (CV) syllables were recorded directly from Heschl's gyrus, the planum temporale, and the superior temporal gyrus in three patients undergoing evaluation for surgical remediation of medically intractable epilepsy. Voiced CV syllables elicited a triphasic sequence of field potentials within Heschl's gyrus. AEPs evoked by unvoiced CV syllables contained additional response components time-locked to voicing onset. Syllables with a VOT of 40, 60, or 80 ms evoked components time-locked to consonant release and voicing onset. In contrast, the syllable with a VOT of 20 ms evoked a markedly diminished response to voicing onset and elicited an AEP very similar in morphology to that evoked by the syllable with a 0-ms VOT. Similar response features were observed in the AEPs evoked by click trains. In this case, there was a marked decrease in amplitude of the transient response to the second click in trains with interpulse intervals of 20–25 ms. Speech-evoked AEPs recorded from the posterior superior temporal gyrus lateral to Heschl's gyrus displayed comparable response features, whereas field potentials recorded from three locations in the planum temporale did not contain components time-locked to voicing onset. This study demonstrates that VOT is at least partially represented in primary and specific secondary auditory cortical fields by synchronized activity time-locked to consonant release and voicing onset. Furthermore, AEPs exhibit features that may facilitate categorical perception of stop consonants, and these response patterns appear to be based on temporal processing limitations within auditory cortex. Demonstrations of similar speech-evoked response patterns in animals support a role for these experimental models in clarifying selected features of speech encoding.
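The stimulus manipulation is the interval between consonant release and voicing onset. A toy Python construction of CV-like stimuli with parametric VOT makes the timing structure concrete; these are schematic signals (noise burst plus pulse train), not the study's actual synthesized syllables, and the sampling rate, duration, and f0 are assumptions:

```python
import numpy as np

def cv_like_stimulus(vot_ms, fs=16000, dur_ms=300, f0=100):
    """Schematic CV-like stimulus: a brief noise burst at consonant release,
    then f0-periodic pulses starting at voicing onset (VOT later)."""
    n = int(fs * dur_ms / 1000)
    x = np.zeros(n)
    burst = np.random.default_rng(3).normal(0, 0.3, int(0.005 * fs))
    x[: burst.size] += burst                 # consonant release transient at t = 0
    voicing_start = int(fs * vot_ms / 1000)  # voicing onset delayed by the VOT
    period = int(fs / f0)
    x[voicing_start::period] += 1.0          # periodic glottal pulses
    return x

# Stimuli spanning the voiced/unvoiced boundary (~20-40 ms)
for vot in (0, 20, 40, 60, 80):
    s = cv_like_stimulus(vot)
    print(f"VOT {vot:2d} ms -> voicing begins at sample {int(16000 * vot / 1000)}")
```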


Author(s):  
Aaron Crowson ◽  
Zachary H. Pugh ◽  
Michael Wilkinson ◽  
Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has resulted in an increasing need to represent the physical world while immersed in the virtual. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters his/her area of use. In addition, this study investigates how the use of an orientation-type notification aids in perception of alerts that manifest outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2017 ◽  
Vol 28 (03) ◽  
pp. 222-231 ◽  
Author(s):  
Riki Taitelbaum-Swead ◽  
Michal Icht ◽  
Yaniv Mama

Abstract In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and in their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The results support the construct that young adults with CIs will benefit more from learning via the visual modality (reading) rather than the auditory modality (listening). Importantly, vocal production can largely improve auditory word memory, especially for the CI group.
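The PE size is the within-participant difference between recall of aloud and silent words, evaluated with paired t tests. An illustrative computation on hypothetical recall proportions (the values below are invented, not the study's data):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-participant recall proportions in one learning condition
aloud = np.array([0.52, 0.48, 0.61, 0.55, 0.46, 0.58, 0.50, 0.49,
                  0.57, 0.44, 0.53, 0.60])
silent = np.array([0.38, 0.35, 0.49, 0.41, 0.30, 0.47, 0.36, 0.33,
                   0.45, 0.31, 0.40, 0.48])

pe_size = aloud - silent                 # production effect per participant
t, p = ttest_rel(aloud, silent)
print(f"mean PE = {pe_size.mean():.2f}, t({aloud.size - 1}) = {t:.2f}, p = {p:.4f}")
```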

