On the (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds

2016 ◽  
Vol 07 ◽  
Author(s):  
Christian Füllgrabe ◽  
Stuart Rosen
2017 ◽  
Vol 60 (11) ◽  
pp. 3342-3364 ◽  
Author(s):  
Susan Nittrouer ◽  
Amanda Caldwell-Tarr ◽  
Keri E. Low ◽  
Joanna H. Lowenstein

Purpose Verbal working memory in children with cochlear implants and children with normal hearing was examined. Participants Ninety-three fourth graders (47 with normal hearing, 46 with cochlear implants) participated, all of whom were in a longitudinal study and had working memory assessed 2 years earlier. Method A dual-component model of working memory was adopted, and a serial recall task measured storage and processing. Potential predictor variables were phonological awareness, vocabulary knowledge, nonverbal IQ, and several treatment variables. Potential dependent functions were literacy, expressive language, and speech-in-noise recognition. Results Children with cochlear implants showed deficits in storage and processing, similar in size to those at second grade. Predictors of verbal working memory differed across groups: Phonological awareness explained the most variance in children with normal hearing; vocabulary explained the most variance in children with cochlear implants. Treatment variables explained little of the variance. Where potentially dependent functions were concerned, verbal working memory accounted for little variance once the variance explained by other predictors was removed. Conclusions The verbal working memory deficits of children with cochlear implants arise due to signal degradation, which limits their abilities to acquire phonological awareness. That hinders their abilities to store items using a phonological code.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Emma Holmes ◽  
Timothy D. Griffiths

Abstract Understanding speech when background noise is present is a critical everyday task that varies widely among people. A key challenge is to understand why some people struggle with speech-in-noise perception, despite having clinically normal hearing. Here, we developed new figure-ground tests that require participants to extract a coherent tone pattern from a stochastic background of tones. These tests dissociated variability in speech-in-noise perception related to mechanisms for detecting static (same-frequency) patterns and those for tracking patterns that change frequency over time. In addition, elevated hearing thresholds that are widely considered to be ‘normal’ explained significant variance in speech-in-noise perception, independent of figure-ground perception. Overall, our results demonstrate that successful speech-in-noise perception is related to audiometric thresholds, fundamental grouping of static acoustic patterns, and tracking of acoustic sources that change in frequency. Crucially, speech-in-noise deficits are better assessed by measuring central (grouping) processes alongside audiometric thresholds.


2021 ◽  
Author(s):  
Joel I. Berger ◽  
Phillip E. Gander ◽  
Subong Kim ◽  
Adam T. Schwalje ◽  
Jihwan Woo ◽  
...  

Abstract Objectives Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This variability cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of speech-in-noise ability in a large cohort of cochlear-implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics. Design We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of speech-in-noise performance. Results In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds.
In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. Conclusions These data indicate a neurophysiological correlate of speech-in-noise performance that can be captured relatively easily in the clinic, thereby revealing a richer profile of an individual’s hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underpinned by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users’ performance may be explained by a different weighting of neural processes than in NH listeners.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Marina Saiz-Alía ◽  
Antonio Elia Forte ◽  
Tobias Reichenbach

Abstract People with normal hearing thresholds can nonetheless have difficulty with understanding speech in noisy backgrounds. The origins of such supra-threshold hearing deficits remain largely unclear. Previously we showed that the auditory brainstem response to running speech is modulated by selective attention, evidencing a subcortical mechanism that contributes to speech-in-noise comprehension. We observed, however, significant variation in the magnitude of the brainstem’s attentional modulation between the different volunteers. Here we show that this variability relates to the ability of the subjects to understand speech in background noise. In particular, we assessed 43 young human volunteers with normal hearing thresholds for their speech-in-noise comprehension. We also recorded their auditory brainstem responses to running speech when selectively attending to one of two competing voices. To control for potential peripheral hearing deficits, and in particular for cochlear synaptopathy, we further assessed noise exposure, the temporal sensitivity threshold, the middle-ear muscle reflex, and the auditory-brainstem response to clicks in various levels of background noise. These tests did not show evidence for cochlear synaptopathy amongst the volunteers. Furthermore, we found that only the attentional modulation of the brainstem response to speech was significantly related to speech-in-noise comprehension. Our results therefore evidence an impact of top-down modulation of brainstem activity on the variability in speech-in-noise comprehension amongst the subjects.


2019 ◽  
Author(s):  
Emma Holmes ◽  
Timothy D. Griffiths

Abstract Understanding speech when background noise is present is a critical everyday task that varies widely among people. A key challenge is to understand why some people struggle with speech-in-noise perception, despite having clinically normal hearing. Here, we developed new figure-ground tests that require participants to extract a coherent tone pattern from a stochastic background of tones. These tests dissociated variability in speech-in-noise perception related to mechanisms for detecting static (same-frequency) patterns and those for tracking patterns that change frequency over time. In addition, elevated hearing thresholds that are widely considered to be ‘normal’ explained significant variance in speech-in-noise perception, independent of figure-ground perception. Overall, our results demonstrate that successful speech-in-noise perception is related to audiometric thresholds, fundamental grouping of static acoustic patterns, and tracking of acoustic sources that change in frequency. Crucially, measuring both peripheral (audiometric thresholds) and central (grouping) processes is required to adequately assess speech-in-noise deficits.


2011 ◽  
Vol 7 (1) ◽  
pp. 8-14
Author(s):  
Robert Moore ◽  
Susan Gordon-Hickey

The purpose of this article is to propose 4 dimensions for consideration in hearing aid fittings and 4 tests to evaluate those dimensions. The 4 dimensions and tests are (a) working memory, evaluated by the Revised Speech Perception in Noise test (Bilger, Nuetzel, & Rabinowitz, 1984); (b) performance in noise, evaluated by the Quick Speech in Noise test (QSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004); (c) acceptance of noise, evaluated by the Acceptable Noise Level test (ANL; Nabelek, Tucker, & Letowski, 1991); and (d) performance versus perception, evaluated by the Perceptual–Performance test (PPT; Saunders & Cienkowski, 2002). The authors discuss the 4 dimensions and tests in the context of improving the quality of hearing aid fittings.


2021 ◽  
Author(s):  
Satyabrata Parida ◽  
Michael G. Heinz

Summary Listeners with sensorineural hearing loss (SNHL) struggle to understand speech, especially in noise, despite audibility compensation. These real-world suprathreshold deficits are hypothesized to arise from degraded frequency tuning and reduced temporal-coding precision; however, peripheral neurophysiological studies testing these hypotheses have been largely limited to in-quiet artificial vowels. Here, we measured single auditory-nerve-fiber responses to a natural speech sentence in noise from anesthetized chinchillas with normal hearing (NH) or noise-induced hearing loss (NIHL). Our results demonstrate that temporal precision was not degraded, and broader tuning was not the major factor affecting peripheral coding of natural speech in noise. Rather, the loss of cochlear tonotopy, a hallmark of normal hearing, had the most significant effects (on both vowels and consonants). Because distorted tonotopy varies in degree across etiologies (e.g., noise exposure, age), these results have important implications for understanding and treating individual differences in speech perception for people suffering from SNHL.


2021 ◽  
Vol 29 (2) ◽  
pp. 119-126
Author(s):  
Banu Müjdeci ◽ 
Şule Kaya ◽ 
Meltem Tulğar ◽ 
Kürşad Karakoç ◽ 
Mustafa Karabulut ◽ 
...  
