‘Normal’ hearing thresholds and fundamental auditory grouping processes predict difficulties with speech-in-noise perception

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Emma Holmes ◽  
Timothy D. Griffiths

Abstract
Understanding speech when background noise is present is a critical everyday task that varies widely among people. A key challenge is to understand why some people struggle with speech-in-noise perception, despite having clinically normal hearing. Here, we developed new figure-ground tests that require participants to extract a coherent tone pattern from a stochastic background of tones. These tests dissociated variability in speech-in-noise perception related to mechanisms for detecting static (same-frequency) patterns and those for tracking patterns that change frequency over time. In addition, elevated hearing thresholds that are widely considered to be ‘normal’ explained significant variance in speech-in-noise perception, independent of figure-ground perception. Overall, our results demonstrate that successful speech-in-noise perception is related to audiometric thresholds, fundamental grouping of static acoustic patterns, and tracking of acoustic sources that change in frequency. Crucially, speech-in-noise deficits are better assessed by measuring central (grouping) processes alongside audiometric thresholds.
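The figure-ground stimulus described here (a coherent tone pattern hidden in a stochastic background of tones) can be sketched as a sequence of random-frequency chords that all share a fixed set of "figure" components. The parameters below (chord count, frequency range, coherence level) are illustrative assumptions, not the study's actual stimulus values.

```python
import numpy as np

def figure_ground_chords(n_chords=40, tones_per_chord=10, coherence=4,
                         fmin=180.0, fmax=7000.0, n_freqs=120, seed=0):
    """Build a figure-ground sequence as a list of chords (sets of tone frequencies).

    Each chord holds `tones_per_chord` random background frequencies; in
    addition, `coherence` fixed "figure" frequencies repeat in every chord,
    forming the static pattern a listener must group out of the background.
    All parameter values are illustrative, not the study's.
    """
    rng = np.random.default_rng(seed)
    grid = np.geomspace(fmin, fmax, n_freqs)  # log-spaced frequency grid
    figure = rng.choice(grid, size=coherence, replace=False)
    chords = [np.union1d(rng.choice(grid, size=tones_per_chord, replace=False),
                         figure)
              for _ in range(n_chords)]
    return chords, figure

chords, figure = figure_ground_chords()
# Every chord contains the full figure; the background varies chord to chord
assert all(np.isin(figure, chord).all() for chord in chords)
```

Raising `coherence` makes the repeated pattern easier to group out of the background, which is how task difficulty is typically manipulated in stimuli of this kind.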


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Marina Saiz-Alía ◽  
Antonio Elia Forte ◽  
Tobias Reichenbach

Abstract People with normal hearing thresholds can nonetheless have difficulty with understanding speech in noisy backgrounds. The origins of such supra-threshold hearing deficits remain largely unclear. Previously we showed that the auditory brainstem response to running speech is modulated by selective attention, evidencing a subcortical mechanism that contributes to speech-in-noise comprehension. We observed, however, significant variation in the magnitude of the brainstem’s attentional modulation between the different volunteers. Here we show that this variability relates to the ability of the subjects to understand speech in background noise. In particular, we assessed 43 young human volunteers with normal hearing thresholds for their speech-in-noise comprehension. We also recorded their auditory brainstem responses to running speech when selectively attending to one of two competing voices. To control for potential peripheral hearing deficits, and in particular for cochlear synaptopathy, we further assessed noise exposure, the temporal sensitivity threshold, the middle-ear muscle reflex, and the auditory-brainstem response to clicks in various levels of background noise. These tests did not show evidence for cochlear synaptopathy amongst the volunteers. Furthermore, we found that only the attentional modulation of the brainstem response to speech was significantly related to speech-in-noise comprehension. Our results therefore evidence an impact of top-down modulation of brainstem activity on the variability in speech-in-noise comprehension amongst the subjects.


2021 ◽  
Author(s):  
Joel I. Berger ◽  
Phillip E. Gander ◽  
Subong Kim ◽  
Adam T. Schwalje ◽  
Jihwan Woo ◽  
...  

Abstract
Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of speech-in-noise ability in a large cohort of cochlear-implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics.
Design: We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of speech-in-noise performance.
Results: In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise.
Conclusions: These data indicate a neurophysiological correlate of speech-in-noise performance that can be relatively easily captured within the clinic, thereby revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners in the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
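The multiple linear regression described above can be sketched with synthetic data. The predictor names follow the abstract, but every value, weight, and noise level below is invented for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 114  # cohort size reported in the abstract

# Synthetic standardized predictors; names follow the abstract, values are invented
names = ["N1_amplitude", "P2_amplitude", "age", "duration_of_use", "LF_threshold"]
X = rng.standard_normal((n, len(names)))

# Invented ground-truth weights: word-in-noise score driven mainly by the ERP terms
true_beta = np.array([0.6, 0.5, -0.2, 0.3, -0.1])
score = X @ true_beta + 0.1 * rng.standard_normal(n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, score, rcond=None)
for name, b in zip(names, beta[1:]):
    print(f"{name}: {b:+.2f}")
```

With enough subjects relative to predictors, the fitted coefficients recover the generating weights, which is the logic behind asking whether ERP amplitudes carry predictive weight beyond the clinical covariates.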


2005 ◽  
Vol 16 (08) ◽  
pp. 574-584 ◽  
Author(s):  
Therese C. Walden ◽  
Brian E. Walden

This study compared unilateral and bilateral aided speech recognition in background noise in 28 patients being fitted with amplification. Aided QuickSIN (Quick Speech-in-Noise test) scores were obtained for bilateral amplification and for unilateral amplification in each ear. In addition, right-ear directed and left-ear directed recall on the Dichotic Digits Test (DDT) was obtained from each participant. Results revealed that the vast majority of patients obtained better speech recognition in background noise on the QuickSIN with unilateral amplification than with bilateral amplification. The deleterious effect of bilateral amplification tended to be greater among older patients. Most frequently, better aided QuickSIN performance was obtained in the right ear, despite similar hearing thresholds in both ears. Finally, patients tended to perform better on the DDT in the ear that showed less SNR loss on the QuickSIN. Results suggest that bilateral amplification may not be beneficial in every daily listening environment in which background noise is present, and that it may be advisable for patients wearing bilateral amplification to remove one hearing aid when they encounter difficulty understanding speech in background noise.
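The unilateral-versus-bilateral comparison rests on QuickSIN SNR loss, scored per six-sentence list. A minimal sketch of that scoring, using the test's commonly cited rule (SNR loss = 25.5 minus total key words correct) and a hypothetical patient's scores:

```python
def quicksin_snr_loss(key_words_correct):
    """Score one QuickSIN list: 6 sentences (25 down to 0 dB SNR), 5 key words each.

    Commonly cited scoring rule: SNR loss = 25.5 - total key words correct;
    roughly 0-2 dB is considered normal.
    """
    if not 0 <= key_words_correct <= 30:
        raise ValueError("a QuickSIN list has 30 key words")
    return 25.5 - key_words_correct

# Hypothetical patient: the unilateral right-ear fitting outperforms bilateral
unilateral_right = quicksin_snr_loss(24)  # 1.5 dB SNR loss
bilateral = quicksin_snr_loss(20)         # 5.5 dB SNR loss
assert unilateral_right < bilateral       # lower SNR loss = better speech-in-noise
```

Note that lower SNR loss means better performance, so "less SNR loss on the QuickSIN" in the abstract corresponds to the ear with the smaller returned value.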


2019 ◽  
Vol 122 (6) ◽  
pp. 2372-2387 ◽  
Author(s):  
Peng Zan ◽  
Alessandro Presacco ◽  
Samira Anderson ◽  
Jonathan Z. Simon

Younger adults with normal hearing can typically understand speech in the presence of a competing speaker without much effort, but this ability to understand speech in challenging conditions deteriorates with age. Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Earlier auditory studies using the frequency-following response (FFR), primarily believed to be generated by the midbrain, demonstrated age-related neural deficits when analyzed with traditional measures. Here we use a mutual information paradigm to analyze the FFR to speech (masked by a competing speech signal) by estimating the amount of stimulus information contained in the FFR. Our results show, first, a broadband informational loss associated with aging for both FFR amplitude and phase. Second, this age-related loss of information is more severe in higher-frequency FFR bands (several hundred hertz). Third, the mutual information between the FFR and the stimulus decreases as noise level increases for both age groups. Fourth, older adults benefit neurally, i.e., show a reduction in loss of information, when the speech masker is changed from meaningful (talker speaking a language that they can comprehend, such as English) to meaningless (talker speaking a language that they cannot comprehend, such as Dutch). This benefit is not seen in younger listeners, which suggests that age-related informational loss may be more severe when the speech masker is meaningful than when it is meaningless. In summary, as a method, mutual information analysis can unveil new results that traditional measures may not have enough statistical power to assess.

NEW & NOTEWORTHY
Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Auditory studies using the frequency-following response (FFR) have demonstrated age-related neural deficits with traditional methods.
Here we use a mutual information paradigm to analyze the FFR to speech masked by competing speech. Results confirm those from traditional analysis but additionally show that older adults benefit neurally when the masker changes from a language that they comprehend to a language they cannot.
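The mutual information estimation at the heart of this paradigm can be sketched with a simple plug-in (histogram) estimator. The signals, bin count, and noise levels below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in mutual information estimate (in bits) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(20000)
faithful = stimulus + 0.3 * rng.standard_normal(20000)  # response tracking the stimulus well
degraded = stimulus + 3.0 * rng.standard_normal(20000)  # noisier response carries less information
assert mutual_information(stimulus, faithful) > mutual_information(stimulus, degraded)
```

A response that tracks the stimulus faithfully yields higher mutual information than a noise-degraded one, which is the sense in which an aging or noise-corrupted FFR "contains less stimulus information". Plug-in estimates like this are upward-biased for finite samples, so published analyses typically apply bias correction.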


2017 ◽  
Vol 28 (08) ◽  
pp. 679-684 ◽  
Author(s):  
Jay R. Lucker

Abstract
Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at suprathreshold levels, and therefore may be even less likely to require a soundproof booth. The present study compared test results obtained in a soundproof booth with those obtained in a quiet room, to determine whether auditory processing tests can be administered outside the soundproof test suite. The outcomes would indicate whether audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their schools. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones, using the same equipment in both locations.
Twenty participants with normal hearing were included in this study: ten with no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent the battery of tests both inside the test booth and outside the booth in a quiet room; order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that presented the test stimuli to the participants and also allowed monitor headphones to be worn by the evaluator.
No differences were found on any auditory processing measure as a function of test setting (booth versus room) or order of testing. Results from the present study indicate that one can obtain the same results on auditory processing tests regardless of whether testing is completed in a soundproof booth or in a quiet test environment. Therefore, audiologists should not be required to test for auditory processing in a soundproof booth; testing can be conducted in a quiet room so long as the background noise is sufficiently controlled.


2019 ◽  
Author(s):  
Emma Holmes ◽  
Peter Zeidman ◽  
Karl J. Friston ◽  
Timothy D. Griffiths

Abstract
In our everyday lives, we are often required to follow a conversation when background noise is present (“speech-in-noise” perception). Speech-in-noise perception varies widely—and people who are worse at speech-in-noise perception are also worse at fundamental auditory grouping, as assessed by figure-ground tasks. Here, we examined the cortical processes that link difficulties with speech-in-noise perception to difficulties with figure-ground perception using functional magnetic resonance imaging (fMRI). We found strong evidence that the earliest stages of the auditory cortical hierarchy (left core and belt areas) are similarly disinhibited when speech-in-noise and figure-ground tasks are more difficult (i.e., at target-to-masker ratios corresponding to 60% rather than 90% thresholds)—consistent with increased cortical gain at lower levels of the auditory hierarchy. Overall, our results reveal a common neural substrate for these basic (figure-ground) and naturally relevant (speech-in-noise) tasks—which provides a common computational basis for the link between speech-in-noise perception and fundamental auditory grouping.


2010 ◽  
Vol 104 (6) ◽  
pp. 3361-3370 ◽  
Author(s):  
Jianwen Wendy Gu ◽  
Christopher F. Halpin ◽  
Eui-Cheol Nam ◽  
Robert A. Levine ◽  
Jennifer R. Melcher

Phantom sensations and sensory hypersensitivity are disordered perceptions that characterize a variety of intractable conditions involving the somatosensory, visual, and auditory modalities. We report physiological correlates of two perceptual abnormalities in the auditory domain: tinnitus, the phantom perception of sound, and hyperacusis, a decreased tolerance of sound based on loudness. Here, subjects with and without tinnitus, all with clinically normal hearing thresholds, underwent 1) behavioral testing to assess sound-level tolerance and 2) functional MRI to measure sound-evoked activation of central auditory centers. Despite receiving identical sound stimulation levels, subjects with diminished sound-level tolerance (i.e., hyperacusis) showed elevated activation in the auditory midbrain, thalamus, and primary auditory cortex compared with subjects with normal tolerance. Primary auditory cortex, but not subcortical centers, showed elevated activation specifically related to tinnitus. The results directly link hyperacusis and tinnitus to hyperactivity within the central auditory system. We hypothesize that the tinnitus-related elevations in cortical activation may reflect undue attention drawn to the auditory domain, an interpretation consistent with the lack of tinnitus-related effects subcortically where activation is less potently modulated by attentional state. The data strengthen, at a mechanistic level, analogies drawn previously between tinnitus/hyperacusis and other, nonauditory disordered perceptions thought to arise from neural hyperactivity such as chronic neuropathic pain and photophobia.


2020 ◽  
Author(s):  
Emma Holmes ◽  
Peter Zeidman ◽  
Karl J Friston ◽  
Timothy D Griffiths

Abstract In our everyday lives, we are often required to follow a conversation when background noise is present (“speech-in-noise” [SPIN] perception). SPIN perception varies widely—and people who are worse at SPIN perception are also worse at fundamental auditory grouping, as assessed by figure-ground tasks. Here, we examined the cortical processes that link difficulties with SPIN perception to difficulties with figure-ground perception using functional magnetic resonance imaging. We found strong evidence that the earliest stages of the auditory cortical hierarchy (left core and belt areas) are similarly disinhibited when SPIN and figure-ground tasks are more difficult (i.e., at target-to-masker ratios corresponding to 60% rather than 90% performance)—consistent with increased cortical gain at lower levels of the auditory hierarchy. Overall, our results reveal a common neural substrate for these basic (figure-ground) and naturally relevant (SPIN) tasks—which provides a common computational basis for the link between SPIN perception and fundamental auditory grouping.

