Speech Recognition in Amplitude-Modulated Noise of Listeners With Normal and Listeners With Impaired Hearing

1995 ◽  
Vol 38 (1) ◽  
pp. 222-233 ◽  
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell

The effect of amplitude-modulated (AM) noise on speech recognition in listeners with normal and impaired hearing was investigated in two experiments. In the first experiment, nonsense syllables were presented in high-pass steady-state or AM noise to determine whether the release from masking in AM noise relative to steady-state noise was significantly different between normal-hearing and hearing-impaired subjects when the two groups listened under equivalent masker conditions. The normal-hearing subjects were tested in the experimental noise under two conditions: (a) in a spectrally shaped broadband noise that produced pure tone thresholds equivalent to those of the hearing-impaired subjects, and (b) without the spectrally shaped broadband noise. The release from masking in AM noise was significantly greater for the normal-hearing group than for either the hearing-impaired or masked normal-hearing groups. In the second experiment, normal-hearing and hearing-impaired subjects identified nonsense syllables in isolation and target words in sentences in steady-state or AM noise adjusted to approximate the spectral shape and gain of a hearing aid prescription. The release from masking was significantly less for the subjects with impaired hearing. These data suggest that hearing-impaired listeners obtain less release from masking in AM noise than do normal-hearing listeners even when both the speech and noise are presented at levels that are above threshold over much of the speech frequency range.
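The AM masker contrasted with steady-state noise in this design can be illustrated with a short sketch: Gaussian noise multiplied by a sinusoidal envelope, where zero modulation depth recovers the steady-state case. The sampling rate, modulation rate, and depth below are illustrative choices, not the study's actual stimulus parameters.

```python
import numpy as np

def am_noise(duration_s=1.0, fs=16000, mod_rate_hz=10.0, mod_depth=1.0, seed=0):
    """Gaussian noise sinusoidally amplitude-modulated at mod_rate_hz.

    mod_depth=0 gives steady-state noise; mod_depth=1 gives 100% modulation,
    so the envelope dips to zero -- the dips that can yield masking release.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return envelope * carrier
```

With the same seed, the modulated and steady-state maskers share a carrier and differ only by the imposed envelope.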

2013 ◽  
Vol 24 (04) ◽  
pp. 274-292 ◽  
Author(s):  
Van Summers ◽  
Matthew J. Makashay ◽  
Sarah M. Theodoroff ◽  
Marjorie R. Leek

Background: It is widely believed that suprathreshold distortions in auditory processing contribute to the speech recognition deficits experienced by hearing-impaired (HI) listeners in noise. Damage to outer hair cells and attendant reductions in peripheral compression and frequency selectivity may contribute to these deficits. In addition, reduced access to temporal fine structure (TFS) information in the speech waveform may play a role. Purpose: To examine how measures of peripheral compression, frequency selectivity, and TFS sensitivity relate to speech recognition performance by HI listeners. To determine whether distortions in processing reflected by these psychoacoustic measures are more closely associated with speech deficits in steady-state or modulated noise. Research Design: Normal-hearing (NH) and HI listeners were tested on tasks examining frequency selectivity (notched-noise task), peripheral compression (temporal masking curve task), and sensitivity to TFS information (frequency modulation [FM] detection task) in the presence of random amplitude modulation. Performance was tested at 500, 1000, 2000, and 4000 Hz at several presentation levels. The same listeners were tested on sentence recognition in steady-state and modulated noise at several signal-to-noise ratios. Study Sample: Ten NH and 18 HI listeners were tested. NH listeners ranged in age from 36 to 80 yr (M = 57.6). For HI listeners, ages ranged from 58 to 87 yr (M = 71.8). Results: Scores on the FM detection task at 1 and 2 kHz were significantly correlated with speech scores in both noise conditions. Frequency selectivity and compression measures were not as clearly associated with speech performance. Speech Intelligibility Index (SII) analyses indicated only small differences in speech audibility across subjects for each signal-to-noise ratio (SNR) condition that would predict differences in speech scores no greater than 10% at a given SNR. 
Actual speech scores varied by as much as 80% across subjects. Conclusions: The results suggest that distorted processing of audible speech cues was a primary factor accounting for differences in speech scores across subjects and that reduced ability to use TFS cues may be an important component of this distortion. The influence of TFS cues on speech scores was comparable in steady-state and modulated noise. Speech recognition was not related to audibility, represented by the SII, once high-frequency sensitivity differences across subjects (beginning at 5 kHz) were removed statistically. This might indicate that high-frequency hearing loss is associated with distortions in processing in lower-frequency regions.
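The SII analysis referred to above weights per-band audibility by band importance. The following is a much-simplified, illustrative version of that idea only, not the full ANSI S3.5 procedure (no spread of masking or level-distortion corrections); the 30 dB speech dynamic range and +15 dB offset follow the standard's general scheme, while the example band weights are made up.

```python
def simple_sii(speech_dB, noise_dB, weights):
    """Crude SII-style index: per-band audibility, clipped to [0, 1],
    weighted by band importance and summed. Audibility is taken as the
    fraction of a 30 dB speech dynamic range (peaks ~15 dB above the
    long-term level) that clears the noise floor in that band."""
    index = 0.0
    for s, n, w in zip(speech_dB, noise_dB, weights):
        audibility = min(max((s - n + 15.0) / 30.0, 0.0), 1.0)
        index += w * audibility
    return index
```

At 0 dB SNR in every band this yields 0.5 (half the speech dynamic range audible), which is the sense in which similar SNRs across subjects predict similar audibility even when measured scores diverge widely.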


1998 ◽  
Vol 41 (6) ◽  
pp. 1294-1306 ◽  
Author(s):  
Van Summers ◽  
Marjorie R. Leek

Normal-hearing and hearing-impaired listeners were tested to determine F0 difference limens for synthetic tokens of 5 steady-state vowels. The same stimuli were then used in a concurrent-vowel labeling task with the F0 difference between concurrent vowels ranging between 0 and 4 semitones. Finally, speech recognition was tested for synthetic sentences in the presence of a competing synthetic voice with the same, a higher, or a lower F0. Normal-hearing listeners and hearing-impaired listeners with small F0-discrimination (ΔF0) thresholds showed improvements in vowel labeling when there were differences in F0 between vowels on the concurrent-vowel task. Impaired listeners with high ΔF0 thresholds did not benefit from F0 differences between vowels. At the group level, normal-hearing listeners benefited more than hearing-impaired listeners from F0 differences between competing signals on both the concurrent-vowel and sentence tasks. However, for individual listeners, ΔF0 thresholds and improvements in concurrent-vowel labeling based on F0 differences were only weakly associated with F0-based improvements in performance on the sentence task. For both the concurrent-vowel and sentence tasks, there was evidence that the ability to benefit from F0 differences between competing signals decreases with age.
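The F0 separations in this task are expressed in semitones, i.e. twelfths of an octave on a log-frequency scale. A minimal helper for the conversion (the example frequencies are illustrative, not stimulus values from the study):

```python
import math

def semitone_difference(f0_a, f0_b):
    """F0 separation in semitones: 12 * log2 of the frequency ratio.
    12 semitones equal one octave (a doubling of F0)."""
    return 12.0 * math.log2(f0_b / f0_a)

def shift_semitones(f0, semitones):
    """Frequency reached by moving f0 up (or down) by the given semitones."""
    return f0 * 2.0 ** (semitones / 12.0)
```

So the study's maximum 4-semitone separation corresponds to a frequency ratio of 2^(4/12) ≈ 1.26, e.g. roughly 100 Hz versus 126 Hz.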


2018 ◽  
Author(s):  
Sarah Verhulst ◽  
Anna Warzybok

The degree to which supra-threshold hearing deficits affect speech recognition in noise is poorly understood. To clarify the role of hearing sensitivity in different stimulus frequency ranges, and to test the contribution of low- and high-pass speech information to broadband speech recognition, we collected speech reception thresholds (SRTs) for low-pass (LP < 1.5 kHz), high-pass (HP > 1.6 kHz) and broadband (BB) speech-in-noise stimuli in 34 listeners. Two noise types with similar long-term spectra were considered: stationary (SSN) and temporally modulated noise (ICRA5-250). Irrespective of the tested listener group (i.e., young normal-hearing, older normal- or impaired-hearing), the BB SRT performance was strongly related to the LP SRT. The encoding of LP speech information was different for SSN and ICRA5-250 noise but similar for HP speech, suggesting a single noise-type invariant coding mechanism for HP speech. Masking release was observed for all filtered conditions and related to the ICRA5-250 SRT. Lastly, the contribution of hearing sensitivity to the SRT was studied using the speech intelligibility index (SII), which failed to predict the SRTs for the filtered speech conditions and for the older normal-hearing listeners. This suggests that supra-threshold hearing deficits are important contributors to the SRT of older normal-hearing listeners.
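Masking release in this design is simply the difference between the SRT measured in stationary noise and the SRT measured in modulated noise with the same long-term spectrum. A tiny sketch with made-up SRT values (not data from the study):

```python
def masking_release(srt_db):
    """Masking release in dB: how much lower (better) the SRT is in
    modulated noise (ICRA5-250) than in stationary noise (SSN) of the
    same long-term spectrum. Positive values mean the listener could
    exploit the temporal dips in the modulated masker."""
    return srt_db["SSN"] - srt_db["ICRA5-250"]

# Illustrative numbers only: a listener whose SRT improves by 5.5 dB
# when the masker is modulated.
example_srts = {"SSN": -6.0, "ICRA5-250": -11.5}
```

Reduced or absent masking release in a filtered condition is then a sign that the listener cannot use the momentary favorable SNRs in the dips for that frequency region.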


2002 ◽  
Vol 45 (5) ◽  
pp. 1027-1038 ◽  
Author(s):  
Rosalie M. Uchanski ◽  
Ann E. Geers ◽  
Athanassios Protopapas

Exposure to modified speech has been shown to benefit children with language-learning impairments with respect to their language skills (M. M. Merzenich et al., 1998; P. Tallal et al., 1996). In the study by Tallal and colleagues, the speech modification consisted of both slowing down and amplifying fast, transitional elements of speech. In this study, we examined whether the benefits of modified speech could be extended to provide intelligibility improvements for children with severe-to-profound hearing impairment who wear sensory aids. In addition, the separate effects on intelligibility of slowing down and amplifying speech were evaluated. Two groups of listeners were employed: 8 severe-to-profoundly hearing-impaired children and 5 children with normal hearing. Four speech-processing conditions were tested: (1) natural, unprocessed speech; (2) envelope-amplified speech; (3) slowed speech; and (4) both slowed and envelope-amplified speech. For each condition, three types of speech materials were used: words in sentences, isolated words, and syllable contrasts. To degrade the performance of the normal-hearing children, all testing was completed with a noise background. Results from the hearing-impaired children showed that all varieties of modified speech yielded intelligibility equivalent to or poorer than that of unprocessed speech. For words in sentences and isolated words, the slowing-down of speech had no effect on intelligibility scores whereas envelope amplification, both alone and combined with slowing-down, yielded significantly lower scores. Intelligibility results from normal-hearing children listening in noise were somewhat similar to those from hearing-impaired children. For isolated words, the slowing-down of speech had no effect on intelligibility whereas envelope amplification degraded intelligibility. For both subject groups, speech processing had no statistically significant effect on syllable discrimination.
In summary, without extensive exposure to the speech processing conditions, children with impaired hearing and children with normal hearing listening in noise received no intelligibility advantage from either slowed speech or envelope-amplified speech.
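The "envelope amplification" in this style of processing boosts fast, transitional elements of the speech signal. The sketch below is a crude illustrative stand-in, not the published Tallal/Merzenich algorithm: the frame size, onset criterion, and gain are all assumptions. It simply boosts any short analysis frame whose energy rises sharply relative to the previous frame.

```python
import numpy as np

def amplify_transitions(x, fs, gain_db=20.0, frame_ms=10.0, ratio_thresh=2.0):
    """Boost frames whose energy jumps by more than ratio_thresh over the
    preceding frame -- a rough proxy for emphasizing rapid transitional
    elements such as stop-consonant onsets. Parameters are illustrative."""
    frame = int(fs * frame_ms / 1000)
    n = (len(x) // frame) * frame          # drop the trailing partial frame
    y = x[:n].copy().reshape(-1, frame)
    energy = (y ** 2).mean(axis=1) + 1e-12  # avoid division by zero
    gain = 10.0 ** (gain_db / 20.0)
    for i in range(1, len(energy)):
        if energy[i] / energy[i - 1] > ratio_thresh:  # rapid onset detected
            y[i] *= gain
    return y.reshape(-1)
```

A step from a quiet frame to a loud frame triggers the boost on the loud frame while leaving steady portions untouched.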


1994 ◽  
Vol 95 (5) ◽  
pp. 2992-2993 ◽  
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell

1967 ◽  
Vol 10 (2) ◽  
pp. 289-298 ◽  
Author(s):  
Charles Speaks

The effects of frequency filtering on intelligibility of synthetic sentences were studied on three normal-hearing listeners. Performance-intensity (P-I) functions were defined for several low-pass and high-pass frequency bands. The data were analyzed to determine the interactions of signal level and frequency range on performance. Intelligibility of synthetic sentences was found to be quite dependent upon low-frequency energy. The important frequency for identification of the materials was approximately 725 Hz. These results are compared with previous findings concerning the intelligibility of single words in quiet and in noise.
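The "important frequency" in such filtering studies is the cutoff at which the low-pass and high-pass performance curves cross, i.e. where the two half-bands support equal intelligibility. A sketch of that crossover estimate, using linear interpolation between the bracketing cutoffs; the scores below are hypothetical, chosen only to cross near the reported 725 Hz.

```python
def crossover_frequency(cutoffs, lp_scores, hp_scores):
    """Estimate the cutoff (Hz) where low-pass and high-pass
    intelligibility curves cross, by linearly interpolating the
    score difference between adjacent measured cutoffs."""
    for i in range(len(cutoffs) - 1):
        d0 = lp_scores[i] - hp_scores[i]
        d1 = lp_scores[i + 1] - hp_scores[i + 1]
        if d0 == 0:
            return cutoffs[i]          # exact crossing at a measured point
        if d0 * d1 < 0:                # sign change: crossing lies between
            return cutoffs[i] + (cutoffs[i + 1] - cutoffs[i]) * d0 / (d0 - d1)
    return None                        # curves never cross in this range
```

Below the crossover, the high-pass band yields the higher scores; above it, the low-pass band dominates, which is the sense in which sentence intelligibility here depends heavily on low-frequency energy.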
