Actual and Predicted Word-Recognition Performance of Elderly Hearing-Impaired Listeners

1991 ◽  
Vol 34 (3) ◽  
pp. 636-642 ◽  
Author(s):  
Donald J. Schum ◽  
Lois J. Matthews ◽  
Fu-Shing Lee

Word-recognition scores in quiet and in noise were obtained from both ears of 101 elderly listeners with sensorineural hearing loss. These performance scores were compared to word-recognition scores predicted using Articulation Index analysis procedures. Negative difference scores (actual performance minus predicted performance) would reflect aspects of the hearing impairment and/or the aging process that extend beyond the simple speech-audibility constraints imposed by the hearing loss and masking noise. The distributions of difference scores in quiet, for both the left and right ears, showed the majority of scores grouped near 0. In contrast, both distributions of difference scores in noise were normally distributed around means of approximately –25. These results suggest that the typical elderly hearing-impaired listener should be expected to demonstrate word-recognition performance in quiet similar to that of a normally hearing listener, given the same audibility of the speech material. In noise, however, this typical listener may be expected to show some word-recognition decrement even after the audibility constraints of the hearing loss and noise are accounted for.
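The difference-score analysis described above can be sketched in a few lines; the scores below are hypothetical, and the study's actual predicted values come from Articulation Index transfer functions, which are not reproduced here:

```python
# Sketch of the difference-score computation (hypothetical scores; the
# study's predicted values come from Articulation Index procedures).
from statistics import mean

actual_quiet = [88, 92, 76, 84]      # % correct, measured
predicted_quiet = [90, 90, 80, 82]   # % correct, predicted from audibility

# Negative values indicate performance below the audibility-based prediction.
diff_quiet = [a - p for a, p in zip(actual_quiet, predicted_quiet)]
print(diff_quiet, mean(diff_quiet))
```

A mean near 0 matches the study's quiet condition; a mean near –25 matches its noise condition.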

1991 ◽  
Vol 34 (5) ◽  
pp. 1180-1184 ◽  
Author(s):  
Larry E. Humes ◽  
Kathleen J. Nelson ◽  
David B. Pisoni

The Modified Rhyme Test (MRT), recorded using natural speech and two forms of synthetic speech, DECtalk and Votrax, was used to measure both open-set and closed-set speech-recognition performance. Performance of hearing-impaired elderly listeners was compared to that of two groups of young normal-hearing adults, one listening in quiet and the other listening in a background of spectrally shaped noise designed to simulate the peripheral hearing loss of the elderly. Votrax synthetic speech yielded significant decrements in speech recognition compared to either natural or DECtalk synthetic speech for all three subject groups. There were no differences in performance between natural speech and DECtalk speech for the elderly hearing-impaired listeners or the young listeners with simulated hearing loss. The normal-hearing young adults listening in quiet outperformed both of the other groups, but there were no differences in performance between the young listeners with simulated hearing loss and the elderly hearing-impaired listeners. When the closed-set identification of synthetic speech was compared to its open-set recognition, the hearing-impaired elderly gained as much from the reduction in stimulus/response uncertainty as the two younger groups. Finally, among the elderly hearing-impaired listeners, speech-recognition performance was correlated negatively with hearing sensitivity, but scores were correlated positively among the different talker conditions. Those listeners with the greatest hearing loss had the most difficulty understanding speech, and those having the most trouble understanding natural speech also had the greatest difficulty with synthetic speech.


2012 ◽  
Vol 2012 ◽  
pp. 1-3 ◽  
Author(s):  
Joseph P. Pillion

A case study is presented of a 17-year-old male who sustained an anoxic brain injury and sensorineural hearing loss secondary to carbon monoxide poisoning. Audiological data are presented showing a slightly asymmetrical hearing loss of sensorineural origin and of mild-to-severe degree in both ears. Word-recognition performance was fair to poor bilaterally for speech presented at normal conversational levels in quiet. Management considerations for the hearing loss are discussed.


1992 ◽  
Vol 35 (4) ◽  
pp. 942-949 ◽  
Author(s):  
Christopher W. Turner ◽  
David A. Fabry ◽  
Stephanie Barrett ◽  
Amy R. Horwitz

This study examined the possibility that hearing-impaired listeners, in addition to displaying poorer-than-normal recognition of speech presented in background noise, require a larger signal-to-noise ratio for the detection of the speech sounds. Psychometric functions for the detection and recognition of stop consonants were obtained from both normal-hearing and hearing-impaired listeners. When the speech levels were expressed in terms of their short-term spectra, consonant detection occurred at the same signal-to-noise ratio for both subject groups. In contrast, the hearing-impaired listeners displayed poorer recognition performance than the normal-hearing listeners. These results imply that the higher signal-to-noise ratios required for a given level of recognition by some listeners with hearing loss are due not even in part to a deficit in detecting the signals in the masking noise, but rather exclusively to a deficit in recognition.
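The pattern described above, where the recognition function shifts to a higher SNR while the detection function does not, is often summarized with a logistic psychometric function of SNR. A minimal sketch with hypothetical parameters (the study's exact fitting procedure is not specified here):

```python
import math

def logistic_pc(snr_db, snr50, slope):
    """Proportion correct as a logistic function of SNR (dB).
    snr50 is the 50%-correct point; slope controls steepness.
    Parameter values below are hypothetical illustrations."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - snr50)))

# A recognition deficit without a detection deficit appears as a rightward
# shift of the recognition function only: at the same 0 dB SNR, the
# impaired function (higher snr50) yields a lower proportion correct.
normal_recog = logistic_pc(0.0, snr50=-2.0, slope=1.0)
impaired_recog = logistic_pc(0.0, snr50=3.0, slope=1.0)
print(round(normal_recog, 3), round(impaired_recog, 3))
```

The horizontal distance between the two functions' 50% points (here 5 dB) is the kind of SNR deficit the study attributes to recognition rather than detection.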


2011 ◽  
Vol 22 (07) ◽  
pp. 405-423 ◽  
Author(s):  
Richard H. Wilson

Background: Since the 1940s, measures of pure-tone sensitivity and speech recognition in quiet have been vital components of the audiologic evaluation. Although early investigators urged that speech recognition in noise also should be a component of the audiologic evaluation, only recently has this suggestion started to become a reality. This report focuses on the Words-in-Noise (WIN) Test, which evaluates word recognition in multitalker babble at seven signal-to-noise ratios and uses the 50% correct point (in dB SNR) calculated with the Spearman-Kärber equation as the primary metric. The WIN was developed and validated in a series of 12 laboratory studies. The current study examined the effectiveness of the WIN materials for measuring the word-recognition performance of patients in a typical clinical setting. Purpose: To examine the relations among three audiometric measures including pure-tone thresholds, word-recognition performances in quiet, and word-recognition performances in multitalker babble for veterans seeking remediation for their hearing loss. Research Design: Retrospective, descriptive. Study Sample: The participants were 3430 veterans who for the most part were evaluated consecutively in the Audiology Clinic at the VA Medical Center, Mountain Home, Tennessee. The mean age was 62.3 yr (SD = 12.8 yr). Data Collection and Analysis: The data were collected in the course of a 60 min routine audiologic evaluation. A history, otoscopy, and aural-acoustic immittance measures also were included in the clinic protocol but were not evaluated in this report. Results: Overall, the 1000–8000 Hz thresholds were significantly lower (better) in the right ear (RE) than in the left ear (LE). There was a direct relation between age and the pure-tone thresholds, with greater change across age in the high frequencies than in the low frequencies. 
Notched audiograms at 4000 Hz were observed in at least one ear in 41% of the participants with more unilateral than bilateral notches. Normal pure-tone thresholds (≤20 dB HL) were obtained from 6% of the participants. Maximum performance on the Northwestern University Auditory Test No. 6 (NU-6) in quiet was ≥90% correct by 50% of the participants, with an additional 20% performing at ≥80% correct; the RE performed 1–3% better than the LE. Of the 3291 who completed the WIN on both ears, only 7% exhibited normal performance (50% correct point of ≤6 dB SNR). Overall, WIN performance was significantly better in the RE (mean = 13.3 dB SNR) than in the LE (mean = 13.8 dB SNR). Recognition performance on both the NU-6 and the WIN decreased as a function of both pure-tone hearing loss and age. There was a stronger relation between the high-frequency pure-tone average (1000, 2000, and 4000 Hz) and the WIN than between the pure-tone average (500, 1000, and 2000 Hz) and the WIN. Conclusions: The results on the WIN from both the previous laboratory studies and the current clinical study indicate that the WIN is an appropriate clinic instrument to assess word-recognition performance in background noise. Recognition performance on a speech-in-quiet task does not predict performance on a speech-in-noise task, as the two tasks reflect different domains of auditory function. Experience with the WIN indicates that word-in-noise tasks should be considered the “stress test” for auditory function.
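The WIN's primary metric, the 50% correct point computed with the Spearman-Kärber equation, can be sketched as follows. The seven SNRs, 4-dB step, and ten words per level follow the test paradigm described in the abstract above; treat the function as an illustrative sketch rather than the clinical scoring form:

```python
def spearman_karber_snr50(correct_per_level, highest_snr=24.0, step=4.0,
                          words_per_level=10):
    """Spearman-Kärber estimate of the 50%-correct point (dB SNR).

    correct_per_level: words correct at each SNR, ordered from the
    highest (easiest) SNR down to the lowest. Defaults reflect the
    seven-SNR, 4-dB-step, ten-words-per-level paradigm.
    """
    total_correct = sum(correct_per_level)
    return highest_snr + step / 2.0 - step * total_correct / words_per_level

# Example: perfect performance at 24-16 dB SNR, half correct at 12 dB,
# none below: the 50% point falls at 12 dB SNR.
print(spearman_karber_snr50([10, 10, 10, 5, 0, 0, 0]))  # -> 12.0
```

Under this scoring, a 50% point of ≤6 dB SNR corresponds to the "normal performance" criterion cited in the Results.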


2021 ◽  
Vol 2 ◽  
Author(s):  
Lasse Embøl ◽  
Carl Hutters ◽  
Andreas Junker ◽  
Daniel Reipur ◽  
Ali Adjorlu ◽  
...  

Cochlear implants (CIs) enable hearing in individuals with sensorineural hearing loss, albeit with difficulties in speech perception and sound localization. In noisy environments, these difficulties are disproportionately greater for CI users than for children with no reported hearing loss. Parents of children with CIs are motivated to experience what CIs sound like, but their options for doing so are limited. This study proposes using virtual reality to simulate having CIs in a school setting, with two contrasting environments: a noisy playground and a quiet classroom. To investigate differences between hearing conditions, the evaluation used a between-subjects design with 15 parents (10 female, 5 male; age M = 38.5, SD = 6.6) of children with CIs; the parents themselves had no reported hearing loss. In the virtual environment, word-recognition and sound-localization tests using an open-set speech corpus compared simulated unilateral CI, simulated bilateral CI, and normal-hearing conditions in both settings. Results of both tests indicate that noise influences word recognition more than it influences sound localization, but ultimately affects both. Furthermore, simulated bilateral CIs were equal or significantly superior to a simulated unilateral CI in both tests. A follow-up qualitative evaluation showed that the simulation enabled users to achieve a better understanding of what it means to be a hearing-impaired child.


2017 ◽  
Vol 28 (01) ◽  
pp. 068-079 ◽  
Author(s):  
Richard H. Wilson ◽  
Kadie C. Sharrett

Abstract
Background: Two previous experiments from our laboratory with 70 interrupted monosyllabic words demonstrated that recognition performance was influenced by the temporal location of the interruption pattern. The interruption pattern (10 interruptions/sec, 50% duty cycle) was always the same and was referenced to word onset; the only difference between the patterns was the temporal location of the on- and off-segments of the interruption cycle. In the first study, both young and older listeners obtained better recognition performances when the initial on-segment coincided with word onset than when the initial on-segment was delayed by 50 msec. The second experiment, with 24 young listeners, detailed recognition performance as the interruption pattern was incremented in 10-msec steps through the 0- to 90-msec onset range. Across the onset conditions, 95% of the functions were either flat or U-shaped. Purpose: To define the effects that interruption pattern locations had on word recognition by older listeners with sensorineural hearing loss as the interruption pattern was incremented, re: word onset, from 0 to 90 msec in 10-msec steps. Research Design: A repeated-measures design with ten interruption patterns (onset conditions) and one uninterrupted condition. Study Sample: Twenty-four older males (mean = 69.6 yr) with sensorineural hearing loss participated in two 1-hour sessions. The three-frequency pure-tone average was 24.0 dB HL, and word recognition was ≥80% correct. Data Collection and Analysis: Seventy consonant-vowel nucleus-consonant words formed the corpus of materials, with 25 additional words used for practice. For each participant, the 700 interrupted stimuli (70 words by 10 onset conditions), the 70 uninterrupted words, and two practice lists were randomized and recorded on compact disc in 33 tracks of 25 words each. The data were analyzed at the participant and word levels and compared to the results obtained earlier from 24 young listeners with normal hearing. 
Results: The mean recognition performance on the 70 uninterrupted words was 91.0%, with an overall mean performance on the ten interruption conditions of 63.2% (range: 57.9–69.3%), compared to 80.4% (range: 73.0–87.7%) obtained earlier from the young adults. The best performances were at the extremes of the onset conditions. Standard deviations ranged from 22.1% to 28.1% (24 participants) and from 9.2% to 12.8% (70 words). An arithmetic algorithm categorized the shapes of the psychometric functions across the ten onset conditions. For the older participants in the current study, 40.0% of the functions were flat, 41.4% were U-shaped, and 18.6% were inverted U-shaped, which compared favorably to the function shapes from the young listeners in the earlier study (50.0%, 41.4%, and 8.6%, respectively). There were two words on which the older listeners performed 40% better. Conclusions: Collectively, the data are orderly, but at the individual word or participant level the data are somewhat volatile, which may reflect auditory processing differences between the participant groups. The diversity of recognition performances by the older listeners on the ten interruption conditions with each of the 70 words supports the notion that the term hearing loss encompasses processes well beyond the filtering produced by end-organ sensitivity deficits.


1990 ◽  
Vol 55 (3) ◽  
pp. 417-426 ◽  
Author(s):  
Randall C. Beattie ◽  
Judy A. Zipp

Characteristics of the range of intensities yielding PB Max and of the threshold for monosyllabic words (PBT) were investigated in 110 elderly subjects with mild-to-moderate sensorineural hearing loss. Word recognition functions were generated using the Auditec recordings of the CID W-22 words with 50 words per level. The results indicated that (a) the range of intensities yielding PB Max was approximately 33 dB at a level corresponding to 12% below PB Max, (b) the PB Max range decreased as the magnitude of hearing loss increased, (c) testing at the loudness discomfort level was likely to provide a more accurate estimate of PB Max than testing at most comfortable listening level, (d) word recognition scores should be obtained at a minimum of two intensities in order to estimate PB Max, (e) the PBT in dB SL re the spondaic threshold increased as the steepness of the audiogram increased, and (f) the PBT should not be considered unusual unless it exceeds the predicted value by about 14 dB.


1989 ◽  
Vol 54 (1) ◽  
pp. 20-32 ◽  
Author(s):  
Randall C. Beattie

Word recognition functions for Auditec recordings of the CID W-22 stimuli in multitalker noise were obtained using subjects with normal hearing and with mild-to-moderate sensorineural hearing loss. In the first experiment, word recognition functions were generated by varying the signal-to-noise ratio (S/N), whereas in the second experiment a constant S/N was used and stimulus intensity was varied. The split-half reliability of word recognition scores for the normal-hearing and hearing-impaired groups revealed variability that agreed closely with predictions based on the simple binomial distribution. Therefore, the binomial model appears appropriate for estimating the variability of word recognition scores whether they are obtained in quiet or in a competing background noise. The reliability for threshold (50% point) revealed good stability. The slope of the recognition function was steeper for the normal-hearing listeners than for the hearing-impaired subjects. Word recognition testing in noise can provide insight into the problems imposed by hearing loss, particularly when evaluating patients with mild hearing loss who exhibit no difficulties on conventional tests. Clinicians should employ a sufficient number of stimuli so that the test is adequately sensitive to differences among listening conditions.
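The binomial model of score variability referenced above reduces to a one-line formula: the standard deviation of a score (in % correct) is 100·√(p(1−p)/n) for true proportion correct p and n test words. A brief sketch:

```python
import math

def binomial_sd_percent(p_correct, n_words):
    """SD of a word-recognition score (% correct) under the binomial model."""
    return 100.0 * math.sqrt(p_correct * (1.0 - p_correct) / n_words)

# A 50-word list at a true score of 80% has an SD of about 5.7 percentage
# points; doubling the list length shrinks the SD by a factor of sqrt(2),
# which is why a sufficient number of stimuli matters for sensitivity.
print(round(binomial_sd_percent(0.80, 50), 1))
print(round(binomial_sd_percent(0.80, 100), 1))
```

This is why two scores from 50-word lists must differ by well over 10 percentage points before the difference is unlikely to be measurement noise.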


2003 ◽  
Vol 14 (09) ◽  
pp. 453-470 ◽  
Author(s):  
Richard H. Wilson

A simple word-recognition task in multitalker babble for clinic use was developed in the course of four experiments involving listeners with normal hearing and listeners with hearing loss. In Experiments 1 and 2, psychometric functions for the individual NU No. 6 words from Lists 2, 3, and 4 were obtained with each word in a unique segment of multitalker babble. The test paradigm that emerged involved ten words at each of seven signal-to-babble ratios (S/B) from 0 to 24 dB. Experiment 3 examined the effect that babble presentation level (70, 80, and 90 dB SPL) had on recognition performance in babble, whereas Experiment 4 studied the effect that monaural and binaural listening had on recognition performance. For listeners with normal hearing, the 90th percentile was 6 dB S/B. In comparison to the listeners with normal hearing, the 50% correct points on the functions for listeners with hearing loss were at 5 to 15 dB higher signal-to-babble ratios.


1991 ◽  
Vol 34 (3) ◽  
pp. 686-693 ◽  
Author(s):  
Larry E. Humes ◽  
Laurel Christopherson

This study examined the performance of four subject groups on several temporally based measures of auditory processing and several measures of speech identification. The four subject groups were (a) young normal-hearing adults; (b) hearing-impaired elderly subjects ranging in age from 65 to 75 years; (c) hearing-impaired elderly adults ranging in age from 76 to 86 years; and (d) young normal-hearing listeners with hearing loss simulated with a spectrally shaped masking noise adjusted to match the actual hearing loss of the two elderly groups. In addition to between-group analyses of performance on the auditory processing and speech identification tasks, correlational and regression analyses within the two groups of elderly hearing-impaired listeners were performed. The results revealed that the threshold elevation accompanying sensorineural hearing loss was the primary factor affecting the speech identification performance of the hearing-impaired elderly subjects both as groups and as individuals. However, significant increases in the proportion of speech identification score variance accounted for were obtained in the elderly subjects by including various measures of auditory processing.

