Cross-frequency weights in normal and impaired hearing: Stimulus factors, stimulus dimensions, and associations with speech recognition

2021 ◽  
Vol 150 (4) ◽  
pp. 2327-2349
Author(s):  
Elin Roverud ◽  
Judy R. Dubno ◽  
Virginia M. Richards ◽  
Gerald Kidd


2002 ◽  
Vol 45 (6) ◽  
pp. 1262-1275 ◽  
Author(s):  
Eric W. Healy ◽  
Sid P. Bacon

Listeners with normal hearing (NH) and with sensorineural hearing impairment (HI) were tested on a speech-recognition task requiring across-frequency integration of temporal speech information. Listeners with NH correctly identified a majority of key words in everyday sentences when presented with a synchronous pair of speech-modulated tones at 750 and 3000 Hz. They could tolerate small amounts (12.5 ms) of across-frequency asynchrony, but performance fell as the delay between bands was increased to 100 ms. Listeners with HI performed more poorly than those with NH when presented with synchronous across-frequency information. Further, performance of listeners with HI fell as a function of asynchrony more steeply than that of their NH counterparts. These results suggest that listeners with HI have particular difficulty comparing and effectively processing temporal speech information at different frequencies. The increased influence of asynchrony indicates that these listeners are especially hindered by slight disruptions in across-frequency information, which implies a less robust comparison mechanism. The results could not be attributed to differences in signal or sensation level, or in listener age, but instead appear to be related to the degree of hearing loss. This across-frequency deficit is unlikely to be attributed to known processing difficulties and may exist in addition to other known disruptions.


2005 ◽  
Vol 16 (08) ◽  
pp. 574-584 ◽  
Author(s):  
Therese C. Walden ◽  
Brian E. Walden

This study compared unilateral and bilateral aided speech recognition in background noise in 28 patients being fitted with amplification. Aided QuickSIN (Quick Speech-in-Noise test) scores were obtained for bilateral amplification and for unilateral amplification in each ear. In addition, right-ear directed and left-ear directed recall on the Dichotic Digits Test (DDT) was obtained from each participant. Results revealed that the vast majority of patients obtained better speech recognition in background noise on the QuickSIN from unilateral amplification than from bilateral amplification. There was a greater tendency for bilateral amplification to have a deleterious effect among older patients. Most frequently, better aided QuickSIN performance was obtained in the right ear of participants, despite similar hearing thresholds in both ears. Finally, patients tended to perform better on the DDT in the ear that provided less SNR loss on the QuickSIN. Results suggest that bilateral amplification may not always be beneficial in every daily listening environment when background noise is present, and it may be advisable for patients wearing bilateral amplification to remove one hearing aid when difficulty is encountered understanding speech in background noise.


2020 ◽  
Vol 41 (04) ◽  
pp. 291-301
Author(s):  
Stephanie Tittle ◽  
Linda M. Thibodeau ◽  
Issa Panahi ◽  
Serkan Tokgoz ◽  
Nikhil Shankar ◽  
...  

As part of a National Institutes of Health–National Institute on Deafness and Other Communication Disorders (NIH-NIDCD)–supported project to develop open-source, smartphone-based research apps for enhancing speech recognition in noise, an app called Smartphone Hearing Aid Research Project Version 2 (SHARP-2) was tested with persons with normal and impaired hearing when using three sets of hearing aids (HAs) with wireless connectivity to an iPhone. Participants were asked to type sentences presented from a speaker in front of them while hearing noise from behind in two conditions, HA alone and HA + SHARP-2 app running on the iPhone. The signal was presented at a constant level of 65 dBA and the signal-to-noise ratio varied from −10 to +10 dB, so that the task was difficult when listening through the bilateral HAs alone. This was important to allow for improvement to be measured when the HAs were connected to the SHARP-2 app on the smartphone. Benefit was achieved for most listeners with all three manufacturers' HAs, with the greatest improvements recorded for persons with normal (33.56%) and impaired hearing (22.21%) when using the SHARP-2 app with one manufacturer's made-for-all-phones HAs. These results support the continued development of smartphone-based apps as an economical solution for enhancing speech recognition in noise for persons with normal and with impaired hearing.


1995 ◽  
Vol 38 (1) ◽  
pp. 222-233 ◽  
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell

The effect of amplitude-modulated (AM) noise on speech recognition in listeners with normal and impaired hearing was investigated in two experiments. In the first experiment, nonsense syllables were presented in high-pass steady-state or AM noise to determine whether the release from masking in AM noise relative to steady-state noise was significantly different between normal-hearing and hearing-impaired subjects when the two groups listened under equivalent masker conditions. The normal-hearing subjects were tested in the experimental noise under two conditions: (a) in a spectrally shaped broadband noise that produced pure tone thresholds equivalent to those of the hearing-impaired subjects, and (b) without the spectrally shaped broadband noise. The release from masking in AM noise was significantly greater for the normal-hearing group than for either the hearing-impaired or masked normal-hearing groups. In the second experiment, normal-hearing and hearing-impaired subjects identified nonsense syllables in isolation and target words in sentences in steady-state or AM noise adjusted to approximate the spectral shape and gain of a hearing aid prescription. The release from masking was significantly less for the subjects with impaired hearing. These data suggest that hearing-impaired listeners obtain less release from masking in AM noise than do normal-hearing listeners even when both the speech and noise are presented at levels that are above threshold over much of the speech frequency range.


2020 ◽  
Vol 24 ◽  
pp. 233121652093892
Author(s):  
Marc R. Schädler ◽  
David Hülsmeier ◽  
Anna Warzybok ◽  
Birger Kollmeier

The benefit in speech-recognition performance due to the compensation of a hearing loss can vary between listeners, even if unaided performance and hearing thresholds are similar. To accurately predict the individual performance benefit due to a specific hearing device, a prediction model is proposed which takes into account hearing thresholds and a frequency-dependent suprathreshold component of impaired hearing. To test the model, the German matrix sentence test was performed in unaided and individually aided conditions in quiet and in noise by 18 listeners with different degrees of hearing loss. The outcomes were predicted by an individualized automatic speech-recognition system where the individualization parameter for the suprathreshold component of hearing loss was inferred from tone-in-noise detection thresholds. The suprathreshold component was implemented as a frequency-dependent multiplicative noise (mimicking level uncertainty) in the feature-extraction stage of the automatic speech-recognition system. Its inclusion improved the root-mean-square prediction error of individual speech-recognition thresholds (SRTs) from 6.3 dB to 4.2 dB and of individual benefits in SRT due to common compensation strategies from 5.1 dB to 3.4 dB. The outcome predictions are highly correlated with both the corresponding observed SRTs (R² = .94) and the benefits in SRT (R² = .89) and hence might help to better understand—and eventually mitigate—the perceptual consequences of as yet unexplained hearing problems, also discussed in the context of hidden hearing loss.

