Speech recognition of normal‐hearing and hearing‐impaired listeners in amplitude‐modulated noise

1994 ◽  
Vol 95 (5) ◽  
pp. 2992-2993
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell

2013 ◽  
Vol 24 (04) ◽  
pp. 274-292 ◽  
Author(s):  
Van Summers ◽  
Matthew J. Makashay ◽  
Sarah M. Theodoroff ◽  
Marjorie R. Leek

Background: It is widely believed that suprathreshold distortions in auditory processing contribute to the speech recognition deficits experienced by hearing-impaired (HI) listeners in noise. Damage to outer hair cells and attendant reductions in peripheral compression and frequency selectivity may contribute to these deficits. In addition, reduced access to temporal fine structure (TFS) information in the speech waveform may play a role.

Purpose: To examine how measures of peripheral compression, frequency selectivity, and TFS sensitivity relate to speech recognition performance by HI listeners. To determine whether distortions in processing reflected by these psychoacoustic measures are more closely associated with speech deficits in steady-state or modulated noise.

Research Design: Normal-hearing (NH) and HI listeners were tested on tasks examining frequency selectivity (notched-noise task), peripheral compression (temporal masking curve task), and sensitivity to TFS information (frequency modulation [FM] detection task) in the presence of random amplitude modulation. Performance was tested at 500, 1000, 2000, and 4000 Hz at several presentation levels. The same listeners were tested on sentence recognition in steady-state and modulated noise at several signal-to-noise ratios.

Study Sample: Ten NH and 18 HI listeners were tested. NH listeners ranged in age from 36 to 80 yr (M = 57.6). For HI listeners, ages ranged from 58 to 87 yr (M = 71.8).

Results: Scores on the FM detection task at 1 and 2 kHz were significantly correlated with speech scores in both noise conditions. Frequency selectivity and compression measures were not as clearly associated with speech performance. Speech Intelligibility Index (SII) analyses indicated only small differences in speech audibility across subjects for each signal-to-noise ratio (SNR) condition that would predict differences in speech scores no greater than 10% at a given SNR. Actual speech scores varied by as much as 80% across subjects.

Conclusions: The results suggest that distorted processing of audible speech cues was a primary factor accounting for differences in speech scores across subjects and that reduced ability to use TFS cues may be an important component of this distortion. The influence of TFS cues on speech scores was comparable in steady-state and modulated noise. Speech recognition was not related to audibility, represented by the SII, once high-frequency sensitivity differences across subjects (beginning at 5 kHz) were removed statistically. This might indicate that high-frequency hearing loss is associated with distortions in processing in lower-frequency regions.
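The SII analysis described above weights the audibility of speech in each frequency band by that band's importance to intelligibility. The idea can be sketched minimally as follows; the four-band importance weights here are illustrative assumptions, not the ANSI S3.5 values, and real SII computations use many more bands and corrections.

```python
def band_audibility(snr_db):
    # SII-style mapping: speech in a band counts as fully audible when
    # its level is 15 dB above the noise and inaudible 15 dB below it,
    # with a linear ramp in between.
    return min(max((snr_db + 15.0) / 30.0, 0.0), 1.0)

def sii(band_snrs_db, importance):
    """Sum of per-band audibility weighted by band importance.
    The importance weights must sum to 1."""
    assert abs(sum(importance) - 1.0) < 1e-6
    return sum(w * band_audibility(snr)
               for w, snr in zip(importance, band_snrs_db))

# Hypothetical example: bands centered at 500, 1000, 2000, 4000 Hz.
weights = [0.2, 0.3, 0.3, 0.2]
print(round(sii([10, 5, 0, -20], weights), 3))  # prints 0.517
```

An index near 1 predicts fully audible speech; the finding above is that audibility so computed varied little across subjects while speech scores varied greatly.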


1990 ◽  
Vol 88 (S1) ◽  
pp. S32-S32 ◽  
Author(s):  
Donald D. Dirks ◽  
Judy R. Dubno ◽  
Theodore S. Bell

1990 ◽  
Vol 33 (4) ◽  
pp. 726-735 ◽  
Author(s):  
Larry E. Humes ◽  
Lisa Roberts

The role that sensorineural hearing loss plays in the speech-recognition difficulties of the hearing-impaired elderly is examined. One approach to this issue was to make between-group comparisons of performance for three groups of subjects: (a) young normal-hearing adults; (b) elderly hearing-impaired adults; and (c) young normal-hearing adults with simulated sensorineural hearing loss equivalent to that of the elderly subjects produced by a spectrally shaped masking noise. Another approach to this issue employed correlational analyses to examine the relation between audibility and speech recognition within the group of elderly hearing-impaired subjects. An additional approach was pursued in which an acoustical index incorporating adjustments for threshold elevation was used to examine the role audibility played in the speech-recognition performance of the hearing-impaired elderly. A wide range of listening conditions was sampled in this experiment. The conclusion was that the primary determiner of speech-recognition performance in the elderly hearing-impaired subjects was their threshold elevation.


2020 ◽  
Vol 24 ◽  
pp. 233121652097001
Author(s):  
Jasper Ooster ◽  
Melanie Krueger ◽  
Jörg-Hendrik Bach ◽  
Kirsten C. Wagener ◽  
Birger Kollmeier ◽  
...  

Speech audiometry in noise based on sentence tests is an important diagnostic tool to assess listeners’ speech recognition threshold (SRT), i.e., the signal-to-noise ratio corresponding to 50% intelligibility. The clinical standard measurement procedure requires a professional experimenter to record and evaluate the responses (expert-conducted speech audiometry). The use of automatic speech recognition enables self-conducted measurements with an easy-to-use speech-based interface. This article compares self-conducted SRT measurements using smart speakers with expert-conducted laboratory measurements. With smart speakers, there is no control over the absolute presentation level or the room acoustics, and the automated response logging may introduce errors. We investigate the differences between highly controlled measurements in the laboratory and smart speaker-based tests for young normal-hearing (NH) listeners as well as for elderly NH, mildly and moderately hearing-impaired listeners in low, medium, and highly reverberant room acoustics. For the smart speaker setup, we observe an overall bias in the SRT result that depends on the hearing loss. The bias ranges from +0.7 dB for elderly moderately hearing-impaired listeners to +2.2 dB for young NH listeners. The intrasubject standard deviation is close to the clinical standard deviation (0.57/0.69 dB for the young/elderly NH listeners compared with 0.5 dB observed for clinical tests, and 0.93/1.09 dB for the mildly/moderately hearing-impaired listeners compared with 0.9 dB). For detecting a clinically elevated SRT, the speech-based test achieves an area-under-the-curve value of 0.95 and therefore seems promising for complementing clinical measurements.
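The SRT is the SNR at which half the speech material is understood. One minimal way to read it off a measured psychometric function is linear interpolation between the two points that bracket 50% correct; the data points below are hypothetical, and clinical sentence tests typically estimate the SRT with an adaptive procedure rather than a fixed-SNR sweep.

```python
def srt_from_psychometric(points):
    """Estimate the SRT (SNR in dB at 50% intelligibility) by linear
    interpolation between the two measurements that bracket 50%.
    `points` is a list of (snr_db, proportion_correct) sorted by SNR."""
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the measurements")

# Hypothetical fixed-SNR measurements:
print(srt_from_psychometric([(-10, 0.1), (-5, 0.4), (0, 0.8), (5, 0.95)]))
# prints -3.75
```

The biases reported above (+0.7 to +2.2 dB) mean the smart-speaker estimate of this threshold is shifted toward poorer SNRs relative to the laboratory estimate.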


1995 ◽  
Vol 38 (1) ◽  
pp. 222-233 ◽  
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell

The effect of amplitude-modulated (AM) noise on speech recognition in listeners with normal and impaired hearing was investigated in two experiments. In the first experiment, nonsense syllables were presented in high-pass steady-state or AM noise to determine whether the release from masking in AM noise relative to steady-state noise was significantly different between normal-hearing and hearing-impaired subjects when the two groups listened under equivalent masker conditions. The normal-hearing subjects were tested in the experimental noise under two conditions: (a) in a spectrally shaped broadband noise that produced pure tone thresholds equivalent to those of the hearing-impaired subjects, and (b) without the spectrally shaped broadband noise. The release from masking in AM noise was significantly greater for the normal-hearing group than for either the hearing-impaired or masked normal-hearing groups. In the second experiment, normal-hearing and hearing-impaired subjects identified nonsense syllables in isolation and target words in sentences in steady-state or AM noise adjusted to approximate the spectral shape and gain of a hearing aid prescription. The release from masking was significantly less for the subjects with impaired hearing. These data suggest that hearing-impaired listeners obtain less release from masking in AM noise than do normal-hearing listeners even when both the speech and noise are presented at levels that are above threshold over much of the speech frequency range.
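An AM masker of the kind contrasted with steady-state noise here can be sketched as Gaussian noise multiplied by a sinusoidal envelope. The sample rate, modulation rate, and modulation depth below are illustrative assumptions; the study's actual maskers were additionally spectrally shaped.

```python
import math
import random

def am_noise(n_samples, fs_hz, mod_rate_hz, mod_depth):
    """Gaussian noise with a sinusoidal amplitude envelope.
    mod_depth = 0 gives steady-state noise; depths near 1 give deep
    envelope dips that normal-hearing listeners can 'listen into'."""
    return [
        random.gauss(0.0, 1.0)
        * (1.0 + mod_depth * math.sin(2.0 * math.pi * mod_rate_hz * t / fs_hz))
        for t in range(n_samples)
    ]

# One second of noise at 16 kHz, modulated at 10 Hz with 80% depth.
masker = am_noise(16000, 16000, 10.0, 0.8)
```

Release from masking is then simply the performance difference (in % correct at a fixed SNR, or in dB of SRT) between the steady-state and modulated maskers.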


1998 ◽  
Vol 41 (6) ◽  
pp. 1294-1306 ◽  
Author(s):  
Van Summers ◽  
Marjorie R. Leek

Normal-hearing and hearing-impaired listeners were tested to determine F0 difference limens for synthetic tokens of 5 steady-state vowels. The same stimuli were then used in a concurrent-vowel labeling task with the F0 difference between concurrent vowels ranging between 0 and 4 semitones. Finally, speech recognition was tested for synthetic sentences in the presence of a competing synthetic voice with the same, a higher, or a lower F0. Normal-hearing listeners and hearing-impaired listeners with small F0-discrimination (ΔF0) thresholds showed improvements in vowel labeling when there were differences in F0 between vowels on the concurrent-vowel task. Impaired listeners with high ΔF0 thresholds did not benefit from F0 differences between vowels. At the group level, normal-hearing listeners benefited more than hearing-impaired listeners from F0 differences between competing signals on both the concurrent-vowel and sentence tasks. However, for individual listeners, ΔF0 thresholds and improvements in concurrent-vowel labeling based on F0 differences were only weakly associated with F0-based improvements in performance on the sentence task. For both the concurrent-vowel and sentence tasks, there was evidence that the ability to benefit from F0 differences between competing signals decreases with age.
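The F0 separations in the concurrent-vowel task are expressed in semitones, which map to frequency by a fixed ratio per step. A small sketch of the conversion; the 100 Hz base F0 is an illustrative assumption, not a stimulus parameter from the study.

```python
def semitone_shift_hz(f0_hz, semitones):
    # One semitone is a frequency ratio of 2 ** (1/12), so a shift of
    # n semitones above f0 lands at f0 * 2 ** (n / 12).
    return f0_hz * 2.0 ** (semitones / 12.0)

# The study's largest separation, 4 semitones, above a 100 Hz voice:
delta = semitone_shift_hz(100.0, 4) - 100.0  # about 26 Hz
```

Listeners whose ΔF0 thresholds exceed separations of this size would not be expected to benefit from the F0 cue, consistent with the result above.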

