Frequency resolution and phoneme recognition by hearing‐impaired listeners

1996 ◽  
Vol 100 (4) ◽  
pp. 2631-2631 ◽  
Author(s):  
Dianne J. Van Tasell ◽  
Bart R. Clement ◽  
Anna C. Schroder ◽  
David A. Nelson


1999 ◽  
Vol 42 (4) ◽  
pp. 773-784 ◽  
Author(s):  
Christopher W. Turner ◽  
Siu-Ling Chi ◽  
Sarah Flock

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than that of the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than those of listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. 
One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
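The channel-limited speech described above is commonly produced with noise-vocoder processing: the signal is split into a small number of spectral bands, each band's temporal envelope is extracted, and that envelope modulates band-limited noise. The abstract does not give Turner et al.'s exact processing parameters, so the following is a minimal, generic sketch of the technique (FFT-based band splitting, rectification plus ~10 ms smoothing for the envelope); band edges and smoothing are illustrative assumptions.

```python
import numpy as np

def vocode(signal, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    """Reduce spectral resolution to n_channels, noise-vocoder style:
    split the signal into log-spaced bands, extract each band's temporal
    envelope, and use it to modulate band-limited noise."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    smooth_len = max(1, int(fs * 0.01))               # ~10 ms envelope smoother
    kernel = np.ones(smooth_len) / smooth_len
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)           # band-limited speech
        env = np.convolve(np.abs(band), kernel, mode="same")  # rectify + smooth
        noise_spec = np.fft.rfft(rng.standard_normal(n))
        carrier = np.fft.irfft(noise_spec * mask, n)  # band-limited noise carrier
        out += env * carrier
    return out
```

With `n_channels=1` the output preserves only the broadband temporal envelope, which is the condition in which both listener groups performed equivalently.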


2005 ◽  
Vol 48 (4) ◽  
pp. 910-921 ◽  
Author(s):  
Laura E. Dreisbach ◽  
Marjorie R. Leek ◽  
Jennifer J. Lentz

The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.
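The positive- and negative-Schroeder stimuli above are equal-amplitude harmonic complexes whose component phases follow Schroeder's (1970) low-crest-factor recipe; flipping the sign of the phase term reverses the direction of the frequency sweep within each period. The abstract does not specify which variant of the formula the authors used, so the sketch below uses one common form, phi_n = sign * pi * n(n+1)/N. Note that both signs yield an acoustically flat envelope; the "highly modulated" versus "very flat" envelopes arise only after the cochlea's phase dispersion acts on the signal.

```python
import numpy as np

def schroeder_complex(f0, n_components, fs, dur, sign=+1):
    """Equal-amplitude harmonic complex with Schroeder phases:
    phi_n = sign * pi * n * (n + 1) / N (one common variant).
    sign=+1 -> "positive Schroeder"; sign=-1 -> "negative Schroeder"."""
    t = np.arange(int(fs * dur)) / fs
    sig = np.zeros_like(t)
    for k in range(1, n_components + 1):
        phase = sign * np.pi * k * (k + 1) / n_components
        sig += np.sin(2 * np.pi * k * f0 * t + phase)
    return sig / n_components  # normalize by component count
```

Comparing the crest factor (peak/RMS) of a Schroeder complex against a cosine-phase complex with the same components shows how much flatter the Schroeder waveform is.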


2013 ◽  
Vol 24 (04) ◽  
pp. 258-273 ◽  
Author(s):  
Ken W. Grant ◽  
Therese C. Walden

Background: Traditional audiometric measures, such as pure-tone thresholds or unaided word recognition in quiet, appear to be of marginal use in predicting speech understanding by hearing-impaired (HI) individuals in background noise with or without amplification. Suprathreshold measures of auditory function (tolerance of noise, temporal and frequency resolution) appear to contribute more to success with amplification and may describe more effectively the distortion component of hearing. However, these measures are not typically administered clinically. When combined with measures of audibility, suprathreshold measures of auditory distortion may provide a much more complete understanding of speech deficits in noise by HI individuals. Purpose: The primary goal of this study was to investigate the relationship among measures of speech recognition in noise, frequency selectivity, temporal acuity, modulation masking release, and informational masking in adult and elderly patients with sensorineural hearing loss to determine whether peripheral distortion for suprathreshold sounds contributes to the varied outcomes experienced by patients with sensorineural hearing loss listening to speech in noise. Research Design: A correlational study. Study Sample: Twenty-seven patients with sensorineural hearing loss and four adults with normal hearing were enrolled in the study. Data Collection and Analysis: The data were collected in a sound attenuated test booth. For speech testing, subjects' verbal responses were scored by the experimenter and entered into a custom computer program. For frequency selectivity and temporal acuity measures, subject responses were recorded via a touch screen. Simple correlation, stepwise multiple linear regression analyses, and a repeated-measures analysis of variance were performed. 
Results: Results showed that the signal-to-noise ratio (SNR) loss could only be partially predicted by a listener's thresholds or audibility measures such as the Speech Intelligibility Index (SII). Correlations between SII and SNR loss were higher using the Hearing-in-Noise Test (HINT) than the Quick Speech-in-Noise test (QSIN) with the SII accounting for 71% of the variance in SNR loss for the HINT but only 49% for the QSIN. However, listener age and the addition of suprathreshold measures improved the prediction of SNR loss using the QSIN, accounting for nearly 71% of the variance. Conclusions: Two standard clinical speech-in-noise tests, QSIN and HINT, were used in this study to obtain a measure of SNR loss. When administered clinically, the QSIN appears to be less redundant with hearing thresholds than the HINT and is a better indicator of a patient's suprathreshold deficit and its impact on understanding speech in noise. Additional factors related to aging, spectral resolution, and, to a lesser extent, temporal resolution improved the ability to predict SNR loss measured with the QSIN. For the HINT, a listener's audibility and age were the only two significant factors. For both QSIN and HINT, roughly 25–30% of the variance in individual differences in SNR loss (i.e., the dB difference in SNR between an individual HI listener and a control group of NH listeners at a specified performance level, usually 50% word or sentence recognition) remained unexplained, suggesting the need for additional measures of suprathreshold acuity (e.g., sensitivity to temporal fine structure) or cognitive function (e.g., memory and attention) to further improve the ability to understand individual variability in SNR loss.
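The two quantities the regression analyses relate are the SNR loss, defined in the abstract as the dB difference in SNR between an individual HI listener and the NH control group at a fixed performance level, and the proportion of its variance explained by a predictor such as the SII. A minimal sketch of both, assuming simple one-predictor least-squares regression (function names and the example numbers are illustrative, not the study's data):

```python
import numpy as np

def snr_loss(hi_snr50_db, nh_mean_snr50_db):
    """SNR loss: extra SNR (dB) an individual hearing-impaired listener
    needs, relative to the normal-hearing control mean, to reach the same
    criterion performance (usually 50% word or sentence recognition)."""
    return hi_snr50_db - nh_mean_snr50_db

def variance_explained(predictor, outcome):
    """R^2 of a simple linear fit: the proportion of variance in the
    outcome accounted for by one predictor (e.g., SII -> SNR loss)."""
    x = np.asarray(predictor, dtype=float)
    y = np.asarray(outcome, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)      # least-squares line
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

# Hypothetical illustration: a listener reaching 50% correct at -2 dB SNR
# when the NH group mean is -6 dB SNR has a 4 dB SNR loss.
print(snr_loss(hi_snr50_db=-2.0, nh_mean_snr50_db=-6.0))  # 4.0
```

In the study's terms, an R^2 of 0.71 for SII predicting HINT-based SNR loss leaves about 29% of individual variability unexplained, which is the residual the authors attribute to suprathreshold and cognitive factors.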


1988 ◽  
Vol 31 (2) ◽  
pp. 299-303 ◽  
Author(s):  
Stephanie A. Davidson ◽  
William Melnick

Psychophysical tuning curves were generated by normally hearing and hearing-impaired subjects using two methods: a detailed laboratory method and a Békésy method proposed as suitable for clinical use. The two methods were compared for stability, the amount of masking produced, and the pattern of the masking functions. The two measures of frequency resolution were found to be equally reliable and showed the same range of repeatability as simple pure-tone thresholds. The patterns of the masking functions were similar regardless of the method used. However, the absolute amounts of masking indicated with each method were significantly different, with more masking obtained when the clinical method was used.


1983 ◽  
Vol 74 (4) ◽  
pp. 1190-1199 ◽  
Author(s):  
Richard S. Tyler ◽  
Elizabeth J. Wood ◽  
Mariano Fernandes
