Neurofeedback Training of Auditory Selective Attention Enhances Speech-In-Noise Perception

2021 · Vol 15 · Author(s): Subong Kim, Caroline Emory, Inyong Choi

Selective attention enhances cortical responses to attended sensory inputs while suppressing others, which can be an effective strategy for speech-in-noise (SiN) understanding. Emerging evidence shows large variance in attentional control during SiN tasks, even among normal-hearing listeners. Yet whether training can enhance the efficacy of attentional control and, if so, whether the training effects transfer to performance on a SiN task has not been explicitly studied. Here, we introduce a neurofeedback training paradigm designed to reinforce the attentional modulation of auditory evoked responses. Young normal-hearing adults attended one of two competing speech streams: five repetitions of the word "up" in a regular rhythm spoken by a female speaker and four repetitions of the word "down" spoken by a male speaker. Our electroencephalography-based attention decoder classified each single trial using a template-matching method based on pre-defined patterns of cortical auditory responses elicited by either the "up" or the "down" stream, and the decoding result was presented on the screen as online feedback. After four sessions of this neurofeedback training over 4 weeks, the subjects exhibited improved attentional modulation of evoked responses to the training stimuli as well as enhanced cortical responses to target speech and better performance on a post-training SiN task. Such training effects were not found in the placebo group, which underwent similar attention training except that feedback was based only on behavioral accuracy. These results indicate that the neurofeedback training may reinforce the strength of attentional modulation, which likely improves SiN understanding. Our finding suggests a potential rehabilitation strategy for SiN deficits.
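As a concrete illustration of the template-matching decoder described above, the Python sketch below correlates a single trial's evoked-response time course with pre-defined "up" and "down" templates and labels the trial by the better match. The function name, the use of Pearson correlation, and the synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def decode_attended_stream(trial, template_up, template_down):
    """Classify one trial by template matching.

    All inputs are 1-D arrays of equal length, e.g., an evoked-response
    time course from a fronto-central EEG channel. Returns "up" or "down"
    depending on which pre-defined template correlates better.
    """
    r_up = np.corrcoef(trial, template_up)[0, 1]
    r_down = np.corrcoef(trial, template_down)[0, 1]
    return "up" if r_up >= r_down else "down"

# Synthetic check: a trial built to resemble the "up" template decodes as "up".
rng = np.random.default_rng(0)
template_up = rng.standard_normal(500)
template_down = rng.standard_normal(500)
trial = template_up + 0.5 * rng.standard_normal(500)
print(decode_attended_stream(trial, template_up, template_down))  # -> "up"
```

In the training paradigm, the output of such a per-trial classifier is what would drive the on-screen feedback.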

2020 · Author(s): Subong Kim, Caroline Emory, Inyong Choi

Selective attention enhances cortical responses to attended sensory inputs while suppressing others, which can be an effective strategy for speech-in-noise (SiN) understanding. Here, we introduce a training paradigm designed to reinforce attentional modulation of auditory evoked responses. Subjects attended one of two speech streams while our EEG-based attention decoder provided online feedback. After four weeks of this neurofeedback training, subjects exhibited enhanced cortical responses to target speech and improved performance during a SiN task. Such training effects were not found in the placebo group, which underwent attention training without neurofeedback. These results suggest an effective rehabilitation strategy for SiN deficits.


2018 · Vol 115 (14) · pp. E3286-E3295 · Author(s): Lengshi Dai, Virginia Best, Barbara G. Shinn-Cunningham

Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.
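One simple way to quantify "attentional modulation of cortical responses" of the kind measured here is a normalized contrast between the evoked amplitude to a stream when it is attended versus when it is ignored. The sketch below is a hedged illustration of such an index, not the paper's exact metric, and the amplitude values are hypothetical.

```python
def attentional_modulation_index(attended_amp, ignored_amp):
    """Normalized contrast in [-1, 1]; positive values mean the evoked
    response to a stream is larger when attended than when ignored."""
    return (attended_amp - ignored_amp) / (attended_amp + ignored_amp)

# Hypothetical example: an evoked amplitude of 2.4 uV when a stream is
# attended and 1.6 uV when it is ignored yields an index of 0.2. Weaker
# modulation, as reported for the HI listeners, pushes this value toward 0.
print(attentional_modulation_index(2.4, 1.6))  # 0.2
```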


2019 · Vol 9 (1) · Author(s): Marina Saiz-Alía, Antonio Elia Forte, Tobias Reichenbach

People with normal hearing thresholds can nonetheless have difficulty understanding speech in noisy backgrounds. The origins of such supra-threshold hearing deficits remain largely unclear. We previously showed that the auditory brainstem response to running speech is modulated by selective attention, evidencing a subcortical mechanism that contributes to speech-in-noise comprehension. We observed, however, significant variation in the magnitude of the brainstem's attentional modulation between different volunteers. Here we show that this variability relates to the subjects' ability to understand speech in background noise. In particular, we assessed 43 young human volunteers with normal hearing thresholds for their speech-in-noise comprehension. We also recorded their auditory brainstem responses to running speech while they selectively attended to one of two competing voices. To control for potential peripheral hearing deficits, and in particular for cochlear synaptopathy, we further assessed noise exposure, the temporal sensitivity threshold, the middle-ear muscle reflex, and the auditory brainstem response to clicks at various levels of background noise. These tests showed no evidence of cochlear synaptopathy amongst the volunteers. Furthermore, we found that only the attentional modulation of the brainstem response to speech was significantly related to speech-in-noise comprehension. Our results therefore demonstrate an impact of top-down modulation of brainstem activity on the variability in speech-in-noise comprehension amongst the subjects.
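For context, a brainstem response to running speech can be measured by cross-correlating the scalp recording with a waveform at the speech's fundamental frequency and reading off the peak within a plausible brainstem latency window. The sketch below is a minimal illustration of that idea; the latency window, the normalization, and the function name are assumptions rather than the authors' published pipeline.

```python
import numpy as np

def speech_fr_amplitude(eeg, f0_waveform, fs, min_ms=5.0, max_ms=12.0):
    """Peak normalized cross-correlation between a scalp recording and the
    fundamental waveform of the speech, within a brainstem-like delay window.
    eeg and f0_waveform are equal-length 1-D arrays sampled at fs (Hz)."""
    lags = np.arange(int(min_ms * fs / 1000), int(max_ms * fs / 1000) + 1)
    eeg = (eeg - eeg.mean()) / eeg.std()
    f0 = (f0_waveform - f0_waveform.mean()) / f0_waveform.std()
    xcorr = [np.mean(f0[:-lag] * eeg[lag:]) for lag in lags]
    return float(np.max(np.abs(xcorr)))

# The attentional modulation studied here could then be summarized as the
# difference (or ratio) of this amplitude when the same voice is attended
# versus ignored.
```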


2017 · Author(s): Alessandro Presacco, Jonathan Z. Simon, Samira Anderson

Objective: To understand the effect of peripheral hearing loss on the representation of speech in noise in the aging midbrain and cortex.
Methods: Subjects comprised 17 normal-hearing younger adults, 15 normal-hearing older adults, and 14 hearing-impaired older adults. The midbrain response, measured with frequency-following responses (FFRs), and the cortical response, measured with magnetoencephalography (MEG), were recorded from subjects listening to speech in quiet and in noise at varying signal-to-noise ratios (SNRs).
Results: Both groups of older listeners showed weaker midbrain response amplitudes and overrepresentation of cortical responses compared with younger listeners. However, significant differences between the older groups were found in both midbrain-cortex relationships and cortical processing durations, suggesting that hearing loss may alter reciprocal connections between lower and higher levels of the auditory pathway.
Conclusions: The paucity of differences in midbrain or cortical responses between the two older groups suggests that age-related temporal processing deficits may contribute to older adults' communication difficulties beyond what might be predicted from peripheral hearing loss alone.
Significance: Clinical devices, such as hearing aids, should not ignore age-related temporal processing deficits in the design of algorithms to maximize user benefit.
Highlights:
- Mild sensorineural hearing loss does not appear to significantly exacerbate already appreciable age-related deficits in midbrain speech-in-noise encoding.
- Mild sensorineural hearing loss also does not appear to significantly exacerbate already appreciable age-related deficits in most measures of cortical speech-in-noise encoding.
- Central processing deficits caused by peripheral hearing loss in older adults are seen only in more subtle measures, including altered relationships between midbrain and cortex.
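As background for the midbrain measure, an FFR to a periodic stimulus is commonly quantified as the spectral magnitude of the averaged response at the stimulus fundamental. The sketch below shows that standard computation; the 100-Hz default fundamental is an illustrative assumption, not the study's stimulus.

```python
import numpy as np

def ffr_amplitude(avg_response, fs, f0=100.0):
    """Spectral magnitude of an averaged FFR at the fundamental f0 (Hz).
    avg_response is a 1-D array sampled at fs (Hz)."""
    spectrum = np.abs(np.fft.rfft(avg_response)) / len(avg_response)
    freqs = np.fft.rfftfreq(len(avg_response), d=1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - f0))])

# Computing this at each signal-to-noise ratio yields the kind of
# amplitude-versus-SNR profile on which the group comparisons rest.
```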


1969 · Vol 12 (2) · pp. 394-401 · Author(s): Paul Skinner, Frank Antinoro

Averaged evoked responses (AERs) to auditory stimuli presented to young children and adults were compared between awake and induced-sleep conditions. Eight adults and twenty preschool children with normal hearing were tested before and during sedation at two suprathreshold levels with tone pips centered at 510, 1020, and 2040 Hz. Responses obtained during sedation assumed a distinctly different wave complex from those obtained in the awake condition: the P2 peak, which is most prominent in AERs from awake subjects, was diminished considerably under sedation, and P3 became the prominent peak. Moreover, the P3 peaks in AERs obtained under sedation were of considerably greater amplitude than the P2 peaks obtained in the awake condition. In all cases where responses were obtained from awake subjects, responses of greater amplitude were obtained during sedation. The use of sedation with the preschool children proved most important in obtaining more detectable responses, permitting evoked-potential audiometry with otherwise unmanageable children.


2021 · Author(s): Satyabrata Parida, Michael G. Heinz

Listeners with sensorineural hearing loss (SNHL) struggle to understand speech, especially in noise, despite audibility compensation. These real-world suprathreshold deficits are hypothesized to arise from degraded frequency tuning and reduced temporal-coding precision; however, peripheral neurophysiological studies testing these hypotheses have been largely limited to in-quiet artificial vowels. Here, we measured single auditory-nerve-fiber responses to a natural speech sentence in noise from anesthetized chinchillas with normal hearing (NH) or noise-induced hearing loss (NIHL). Our results demonstrate that temporal precision was not degraded and that broader tuning was not the major factor affecting peripheral coding of natural speech in noise. Rather, the loss of cochlear tonotopy, a hallmark of normal hearing, had the most significant effects (on both vowels and consonants). Because distorted tonotopy varies in degree across etiologies (e.g., noise exposure, age), these results have important implications for understanding and treating individual differences in speech perception for people suffering from SNHL.


2017 · Vol 39 (5) · pp. 430-441 · Author(s): Julia M. Stephen, Dina E. Hill, Amanda Peters, Lucinda Flynn, Tongsheng Zhang, et al.

The cortical responses to auditory stimuli undergo rapid and dramatic changes during the first 3 years of life in normally developing (ND) children, with decreases in latency and changes in amplitude of the primary peaks. However, most previous studies have focused on children >3 years of age. The analysis of data from the early stages of development is challenging because the temporal pattern of the evoked responses changes with age (e.g., additional peaks emerge with increasing age) and peak latency decreases with age. This study used the topography of the auditory evoked magnetic field (AEF) to identify the auditory components in ND children between 6 and 68 months of age (n = 48). The latencies of the peaks in the AEF produced by a tone burst (ISI 2 ± 0.2 s) during sleep decreased with age, consistent with previous reports in awake children. We then compared the peak latencies of the AEFs in ND children and in children with autism spectrum disorder (ASD). Previous studies indicate that the latencies of the initial components of the auditory evoked potential (AEP) and the AEF are delayed in children with ASD compared with age-matched ND children >4 years of age. We therefore asked whether the AEF latencies decrease with age in children diagnosed with ASD as they do in ND children, but with uniformly longer latencies before the age of about 4 years. Contrary to this hypothesis, the peak latencies did not decrease with age in the ASD group (24-62 months, n = 16) during sleep, unlike in the age-matched controls, although the mean latencies were longer in the ASD group, as in previous studies. These results are consistent with previous reports of delayed auditory latencies, and they indicate a different maturational pattern in children with ASD than in ND children. Longitudinal studies are needed to confirm whether the AEF latencies of these two groups diverge with age, starting at around 3 years.
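The core developmental analysis here, peak latency as a function of age, amounts to fitting a line and checking the sign of the slope. The sketch below illustrates this with synthetic numbers; the ages and latencies are invented for illustration, not data from the study.

```python
import numpy as np

# Hypothetical ages (months) and AEF peak latencies (ms), illustration only.
age_months = np.array([8.0, 14.0, 22.0, 30.0, 41.0, 55.0, 66.0])
latency_ms = np.array([290.0, 268.0, 248.0, 231.0, 216.0, 202.0, 192.0])

# A negative slope reflects the maturational decrease seen in ND children;
# the ASD group in this study showed no such decrease during sleep.
slope, intercept = np.polyfit(age_months, latency_ms, 1)
print(f"latency change: {slope:.2f} ms per month of age")
```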

