The effect of visual stimuli on auditory detection in an auditory attention task

Author(s): Jingjing Yang, Xiujun Li, Qi Li, Xinwei Xiao, Qiong Wu, ...
1994 · Vol 78 (3_suppl) · pp. 1153-1154
Author(s): Claire F. Taub, Elaine Fine, Rochelle S. Cherry

Data from 3 boys indicate that a selective auditory attention task may be useful in identifying prereading children who are at risk for learning disabilities.


2014 · Vol 25 (3) · pp. 143-152
Author(s): Thomas Günther, Kerstin Konrad, Joachim Häusler, Hafida Saghraoui, Klaus Willmes, ...

The purpose of this cross-sectional study was to compare performance on visual and auditory attention tasks along with the developmental trajectories of these systems. Participants between 7 and 77 years of age were examined: 490 subjects (229 males and 261 females) completed the visual and auditory part of a focused-attention task, and 688 subjects (320 males and 368 females) were tested with an alertness task in the two different modalities. Shorter reaction times were observed in the visual condition compared to the auditory condition. This difference was particularly large for children and for the more complex, focused-attention task. However, the gap between the two modalities decreased with age, resulting in significant interaction effects between age and modality for both attention tasks. Attentional performance increased with age, and maximum performance was achieved in early adulthood. For nearly all performance variables, no decrease could be detected with increasing age. In addition, the results of a principal components analysis suggest that, independent of modality, all alertness variables load on one component, whereas the performance variables of the visual and the auditory focused-attention task load on two separate components. Thus, our data suggest that visual and auditory attention rely on distinct attentional systems within the selectivity domain of attention and have distinct developmental trajectories.
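The three-component structure reported above comes from a principal components analysis of the task variables. The following is a minimal sketch of such an analysis on simulated data, assuming (purely for illustration; the variables, loadings, and sample are invented, not the study's data) that two alertness measures share one latent factor while the visual and auditory focused-attention measures each form their own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical latent factors mirroring the reported structure: one shared
# alertness factor plus separate visual and auditory focused-attention factors.
alert = rng.normal(size=n)
vis = rng.normal(size=n)
aud = rng.normal(size=n)
noise = lambda: 0.3 * rng.normal(size=n)

data = np.column_stack([
    alert + noise(), alert + noise(),  # visual / auditory alertness variables
    vis + noise(), vis + noise(),      # visual focused-attention variables
    aud + noise(), aud + noise(),      # auditory focused-attention variables
])

# PCA on the correlation matrix: standardize, then eigendecompose.
z = (data - data.mean(axis=0)) / data.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(z.T @ z / n)
eigvals = eigvals[::-1]  # np.linalg.eigh returns ascending order; reverse it

# Kaiser criterion: retain components with eigenvalue > 1.
n_components = int(np.sum(eigvals > 1.0))
print("eigenvalues:", np.round(eigvals, 2))
print("components retained:", n_components)
```

With this simulated loading pattern the alertness variables collapse onto a single component while the two focused-attention modalities separate, which is the qualitative pattern the study reports.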


Author(s): Anna Soveri, Jussi Tallus, Matti Laine, Lars Nyberg, Lars Bäckman, ...

We studied the effects of training on auditory attention in healthy adults with a speech perception task involving dichotically presented syllables. Training involved bottom-up manipulation (facilitating responses from the harder-to-report left ear through a decrease of right-ear stimulus intensity), top-down manipulation (focusing attention on the left-ear stimuli through instruction), or their combination. The results showed significant training-related effects for top-down training. These effects were evident as higher overall accuracy rates in the forced-left dichotic listening (DL) condition that sets demands on attentional control, as well as a response shift toward left-sided reports in the standard DL task. Moreover, a transfer effect was observed in an untrained auditory-spatial attention task involving bilateral stimulation where top-down training led to a relatively stronger focus on left-sided stimuli. Our results indicate that training of attentional control can modulate the allocation of attention in the auditory space in adults. Malleability of auditory attention in healthy adults raises the issue of potential training gains in individuals with attentional deficits.
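The leftward response shift described above is conventionally quantified with the standard dichotic-listening laterality index, LI = 100 × (R − L) / (R + L), where R and L are correct right- and left-ear reports. A minimal sketch with invented report counts (not the study's data):

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Dichotic-listening laterality index in percent:
    positive values indicate a right-ear advantage."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical pre- vs. post-training report counts for one listener.
pre = laterality_index(right_correct=42, left_correct=28)
post = laterality_index(right_correct=35, left_correct=35)
print(pre, post)
```

Positive LI values reflect the typical right-ear advantage; a drop toward zero or below corresponds to the kind of shift toward left-sided reports that the training produced.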


1994 · Vol 78 (2) · pp. 563-570
Author(s): Lena Linde

In an auditory attention task, subjects were required to reproduce spatial relationships between letters from auditorily presented verbal information containing the prepositions “before” or “after.” It was assumed that propositions containing “after” induce a conflict between the temporal order and the semantically implied spatial order of the letters. Data from 36 subjects are presented showing that propositions with “after” are more difficult to process. A significant general training effect appeared. 200 mg of caffeine had some beneficial effect on the performance of 18 subjects who had been awake for about 22 hours and were tested at 6 a.m.; however, this benefit was not related to the amount of conflict but applied to items both with and without conflict. In contrast, the effect of caffeine on 18 subjects tested at 4 p.m. after normal sleep was slightly negative.


1998 · Vol 10 (2) · pp. 231-247
Author(s): Nobuyuki Nishitani, Takashi Nagamine, Naohito Fujiwara, Shogo Yazawa, Hiroshi Shibasaki

We recorded magnetic and electrical responses simultaneously in an auditory detection task to elucidate the brain areas involved in auditory processing. Target stimuli evoked magnetic fields peaking at approximately the same latency of about 400 msec (M400) over the anterior temporal, superior temporal, and parietal regions of each hemisphere. Equivalent current dipoles (ECDs) were analyzed with a time-varying multidipole model and superimposed on each subject's magnetic resonance image (MRI). Multiple independent dipoles located in the superior temporal plane, inferior parietal lobe, and mesial temporal region best accounted for the recorded M400 fields. These findings suggest that distributed activity in multiple structures, including the mesial temporal, superior temporal, and inferior parietal regions of both hemispheres, is engaged during auditory attention and memory updating.


2000 · Vol 90 (2) · pp. 631-639
Author(s): Gordon W. Blood, Ingrid M. Blood, Glen Tellis

This study examined the differences among scores on four tests of auditory processing for 6 children who clutter and 6 control subjects matched for age, sex, and grade. Scores on a consonant-vowel dichotic listening task indicated that directing attention to the attended ear improved the percentage of correct responses for both groups of children. Children who clutter, however, showed a greater percentage of change during the directed right- and left-ear conditions, and they performed more poorly on the right and left competing conditions of the Staggered Spondaic Word Test. No differences were found between groups on the auditory attention task or the time-compressed speech task. Implications for the processing of dichotic stimuli and the diagnosis of children who clutter are discussed.


2020
Author(s): Christina Hanenberg, Michael-Christian Schlüter, Stephan Getzmann, Jörg Lewald

Audiovisual cross-modal training has been proposed as a tool to improve human spatial hearing. Here, we investigated training-induced modulations of auditory-evoked event-related potential (ERP) components that have been associated with processes of auditory selective spatial attention when a speaker of interest has to be localized in a multiple speaker (“cocktail-party”) scenario. Forty-five healthy subjects were tested, including younger (19-29 yrs; n = 21) and older (66-76 yrs; n = 24) age groups. Three conditions of short-term training (duration 15 minutes) were compared, requiring localization of non-speech targets under “cocktail-party” conditions with either (1) synchronous presentation of co-localized auditory-target and visual stimuli (audiovisual-congruency training), (2) immediate visual feedback on correct or incorrect localization responses (visual-feedback training), or (3) presentation of spatially incongruent auditory-target and visual stimuli presented at random positions with synchronous onset (control condition). Prior to and after training, subjects were tested in an auditory spatial attention task (15 minutes), requiring localization of a predefined spoken word out of three distractor words, which were presented with synchronous stimulus onset from different positions. Peaks of ERP components were analyzed with a specific focus on the N2, which is known to be a correlate of auditory selective spatial attention. N2 amplitudes were significantly larger after audiovisual-congruency training compared with the remaining training conditions for younger, but not older, subjects. Also, at the time of the N2, electrical imaging revealed an enhancement of electrical activity induced by audiovisual-congruency training in dorsolateral prefrontal cortex (Brodmann area 9) for the younger group. These findings suggest that cross-modal processes induced by audiovisual-congruency training under “cocktail-party” conditions at a short time scale resulted in an enhancement of correlates of auditory selective spatial attention.


2018 · Vol 115 (14) · pp. E3286-E3295
Author(s): Lengshi Dai, Virginia Best, Barbara G. Shinn-Cunningham

Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.


2009 · Vol 454 (3) · pp. 171-175
Author(s): Heidi van Wageningen, Hugo A. Jørgensen, Karsten Specht, Kenneth Hugdahl
