Predicting Audiovisual Consonant Recognition Performance of Hearing-Impaired Adults

1974, Vol 17(2), pp. 270–278
Author(s): Brian E. Walden, Robert A. Prosek, Don W. Worthington

The redundancy between the auditory and visual recognition of consonants was studied in 100 hearing-impaired subjects who demonstrated a wide range of speech-discrimination abilities. Twenty English consonants, recorded in CV combination with the vowel /a/, were presented to the subjects for auditory, visual, and audiovisual identification. There was relatively little variation among subjects in the visual recognition of consonants. A measure of the expected degree of redundancy between an observer’s auditory and visual confusions among consonants was used in an effort to predict audiovisual consonant recognition ability. This redundancy measure was based on an information analysis of an observer’s auditory confusions among consonants and expressed the degree to which his auditory confusions fell within categories of visually homophenous consonants. The measure was found to have moderate predictive value in estimating an observer’s audiovisual consonant recognition score. These results suggest that the degree of redundancy between an observer’s auditory and visual confusions of speech elements is a determinant of the benefit that visual cues offer to that observer.

1978, Vol 43(3), pp. 331–347
Author(s): Elmer Owens

An analysis of consonant errors for hearing-impaired subjects in a multiple-choice format revealed that about 14 consonants caused most of the difficulty in consonant recognition. For a given consonant, error probability was typically lower in the initial position of the stimulus word than in the final position. When errors were made, the substitutions were limited typically to two or three other consonants, with a greater variety occurring for consonants in the final position. Substitutions tended to be the same over a wide range of pure-tone configurations. Place errors were predominant, but manner errors also occurred. In only a few instances did specific relationships occur between particular stimulus consonants and pure-tone configurations. With knowledge of the error consonants and typical substitutions, auditory recognition of consonants can be improved by programmed instruction methods. Shaping can be accomplished by a manipulation of the response foils (choices). Since it has been shown that visual recognition of consonants can also be improved, advantage can be taken of both the visual and auditory modalities in remedial procedures. Frequency of usage in the language should be considered in the ordering of consonants for retraining purposes. Work in consonant recognition should be beneficial to the hearing-impaired patient as part of a total rehabilitation program.


1975, Vol 18(2), pp. 272–280
Author(s): Brian E. Walden, Robert A. Prosek, Don W. Worthington

Auditory and audiovisual consonant recognition were studied in 98 hearing-impaired adults, who demonstrated a wide range of consonant-recognition abilities. Information transfer analysis was used to describe the performance of the subjects on the auditory and audiovisual tasks in terms of a set of articulatory features. Visual cues substantially enhanced the transmission of duration, place-of-articulation, frication, and nasality features, but had considerably less effect on transmission of the liquid-glide and voicing features. The improvement in transmission resulting from visual cues was relatively constant across a wide range of auditory performance levels.
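The abstract above relies on information transfer analysis but does not spell out the computation. As an illustrative sketch only, not the authors' actual analysis, the relative information transfer of an articulatory feature can be computed from a stimulus-by-response confusion matrix by collapsing consonants into feature categories and dividing the transmitted information by the stimulus entropy (the confusion counts and voicing assignments below are invented):

```python
import math

def relative_info_transfer(confusions, feature):
    """Relative information transfer for one feature, given a square
    stimulus-by-response confusion matrix (counts) and a list assigning
    each consonant to a feature category."""
    n = sum(sum(row) for row in confusions)
    cats = sorted(set(feature))
    k = len(cats)
    idx = {c: i for i, c in enumerate(cats)}
    # Collapse consonant confusions into feature-category confusions.
    m = [[0.0] * k for _ in range(k)]
    for s, row in enumerate(confusions):
        for r, count in enumerate(row):
            m[idx[feature[s]]][idx[feature[r]]] += count
    # Mutual information T(x;y) and stimulus entropy H(x), in bits.
    px = [sum(row) / n for row in m]
    py = [sum(m[i][j] for i in range(k)) / n for j in range(k)]
    t = 0.0
    for i in range(k):
        for j in range(k):
            pij = m[i][j] / n
            if pij > 0:
                t += pij * math.log2(pij / (px[i] * py[j]))
    hx = -sum(p * math.log2(p) for p in px if p > 0)
    return t / hx  # 1.0 means the feature is perfectly transmitted

# Hypothetical 4-consonant confusion matrix; voicing: 0 = voiceless, 1 = voiced
conf = [[20, 5, 0, 0],
        [6, 19, 0, 0],
        [0, 0, 22, 3],
        [1, 0, 4, 20]]
voicing = [0, 0, 1, 1]
print(relative_info_transfer(conf, voicing))
```

Because confusions here rarely cross the voicing boundary, the measure comes out close to 1, indicating that the voicing feature is well transmitted even though individual consonants are confused.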


1981, Vol 24(2), pp. 207–216
Author(s): Brian E. Walden, Sue A. Erdman, Allen A. Montgomery, Daniel M. Schwartz, Robert A. Prosek

The purpose of this research was to determine some of the effects of consonant recognition training on the speech recognition performance of hearing-impaired adults. Two groups of ten subjects each received seven hours of either auditory or visual consonant recognition training, in addition to a standard two-week, group-oriented, inpatient aural rehabilitation program. A third group of fifteen subjects received the standard two-week program but no supplementary individual consonant recognition training. An audiovisual sentence recognition test and tests of auditory and visual consonant recognition were administered both before and following training. Subjects in all three groups significantly improved their audiovisual sentence recognition performance, but subjects receiving the individual consonant recognition training improved significantly more than subjects receiving only the standard two-week program. A significant increase in consonant recognition performance was also observed in the two groups receiving the auditory or visual consonant recognition training. The data are discussed from varying statistical and clinical perspectives.


1982, Vol 25(1), pp. 135–141
Author(s): Judy R. Dubno, Donald D. Dirks

The reliability of a closed-set Nonsense-Syllable Test was determined on a group of 38 listeners with mild-to-moderate sensorineural hearing loss. Eight randomizations of the 91-item test (four trials on each of two days) were presented monaurally, under earphones, at 90 dB SPL with a cafeteria background noise set at a +20-dB S/N ratio. Performance under these conditions ranged from 21.4 to 91.2%, reflecting the wide range of syllable-recognition ability of these subjects. Reliability of the eight measurements was determined by analysis of variance and analysis of covariance structure (parallel-test modelling) for the entire test and each of 11 subtests. Overall and individual subject results failed to show any systematic differences in scores over eight trials. Likewise, no significant differences were found in performance on individual syllables, nor were changes in the relative occurrence of specific syllable confusions noted. This test is highly reliable when evaluating hearing-impaired subjects, and thus is appropriate for use in investigations where identical items are administered under multiple experimental conditions.


1977, Vol 20(1), pp. 130–145
Author(s): Brian E. Walden, Robert A. Prosek, Allen A. Montgomery, Charlene K. Scherr, Carla J. Jones

Visual recognition of consonants was studied in 31 hearing-impaired adults before and after 14 hours of concentrated, individualized speechreading training. Confusions were analyzed via a hierarchical clustering technique to derive categories of visual contrast among the consonants. Pretraining and posttraining results were compared to reveal the effects of the training program. Training increased both the number of visemes consistently recognized and the percentage of within-viseme responses. Analysis of the responses revealed that most changes in consonant recognition occurred during the first few hours of training.


1969, Vol 12(2), pp. 423–425
Author(s): Norman P. Erber

Audio-visual observation of spoken spondaic words was found to be superior to recognition via audition alone under a wide range of S/N conditions. Data from five subjects supported the notion that observers rely increasingly on visual cues for speech information as the S/N ratio is degraded. Audition-only performance was less variable among subjects than audio-visual recognition. Increased variability in audio-visual scores at poorer S/N ratios was attributed to differences in lip-reading skill among untrained subjects. Even at speech levels so low that audition-only recognition approximated chance, audio-visual scores nevertheless improved systematically as a function of increasing S/N ratio.


1991, Vol 34(6), pp. 1397–1409
Author(s): Carol Goldschmidt Hustedde, Terry L. Wiley

Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory—Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap between the two groups of hearing-impaired listeners, although the inventory was sensitive to differences in perceived hearing ability between listeners who did and did not have a hearing loss. Experiment 2 evaluated the consonant error patterns that accounted for the observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance consistently demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability for normal-hearing and hearing-impaired listeners.


Author(s): Amin Ebrahimi, Mohammad Ebrahim Mahdavi, Hamid Jalilvand

Background and Aim: Digits are suitable speech materials for evaluating speech-in-noise recognition in clients with a wide range of language abilities. The Farsi Auditory Recognition of Digit-in-Noise (FARDIN) test has been developed and validated in learning-disabled children showing a dichotic listening deficit. This study was conducted to further validate FARDIN and to examine the effect of noise type on recognition performance in individuals with sensorineural hearing impairment. Methods: Persian monosyllabic digits 1−10 were extracted from the audio file of the FARDIN test. Ten lists were compiled using a random ordering of the triplets. The first five lists were mixed with multi-talker babble noise (MTBN) and the second five with speech-spectrum noise (SSN). Signal-to-noise ratio (SNR) varied from +5 to −15 dB in 5 dB steps. Twenty normal-hearing and 19 hearing-impaired individuals participated in the study. Results: Both types of noise differentiated hearing-impaired from normal-hearing listeners. The hearing-impaired group showed weaker digit-recognition performance in both MTBN and SSN, needing a 4−5.6 dB higher SNR for 50% recognition (SNR-50) than the normal-hearing group. MTBN was more challenging than SSN for normal-hearing listeners. Conclusion: FARDIN is a validated test for estimating SNR-50 in clients with hearing loss. SSN appears more appropriate as a background noise for testing auditory recognition of digits in noise. Keywords: Auditory recognition; hearing loss; speech perception in noise; digit recognition in noise
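The abstract reports SNR-50 values (the SNR at which 50% of digits are recognized) without describing the estimation procedure. One common approach, shown here purely as an illustration with invented scores rather than the authors' method, fits a two-parameter logistic psychometric function to proportion-correct data across the test's +5 to −15 dB range; a simple grid search stands in for a proper maximum-likelihood fit:

```python
import math

def logistic(snr, midpoint, slope):
    """Predicted proportion correct at a given SNR (dB)."""
    return 1.0 / (1.0 + math.exp(-slope * (snr - midpoint)))

def fit_snr50(snrs, proportions):
    """Grid-search least-squares fit of the logistic; the fitted
    midpoint is the SNR-50 estimate."""
    best_mid, best_err = None, float("inf")
    for mid10 in range(-150, 51):          # midpoints -15.0 .. +5.0 dB
        for slope10 in range(1, 31):       # slopes 0.1 .. 3.0 per dB
            mid, slope = mid10 / 10.0, slope10 / 10.0
            err = sum((logistic(s, mid, slope) - p) ** 2
                      for s, p in zip(snrs, proportions))
            if err < best_err:
                best_mid, best_err = mid, err
    return best_mid

# Invented scores at the test's SNR steps (+5 to -15 dB in 5 dB steps)
snrs = [5, 0, -5, -10, -15]
props = [0.98, 0.92, 0.70, 0.30, 0.05]
print(fit_snr50(snrs, props))
```

For these invented data the fitted midpoint falls a little below −7 dB; a hearing-impaired listener's curve would be shifted toward higher (easier) SNRs, and the shift between group curves is the 4−5.6 dB difference the abstract reports.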


1991, Vol 34(2), pp. 415–426
Author(s): Richard L. Freyman, G. Patrick Nerbonne, Heather A. Cote

This investigation examined the degree to which modification of the consonant-vowel (C-V) intensity ratio affected consonant recognition under conditions in which listeners were forced to rely more heavily on waveform envelope cues than on spectral cues. The stimuli were 22 vowel-consonant-vowel utterances, which had been mixed at six different signal-to-noise ratios with white noise that had been modulated by the speech waveform envelope. The resulting waveforms preserved the gross speech envelope shape, but spectral cues were limited by the white-noise masking. In a second stimulus set, the consonant portion of each utterance was amplified by 10 dB. Sixteen subjects with normal hearing listened to the unmodified stimuli, and 16 listened to the amplified-consonant stimuli. Recognition performance was reduced in the amplified-consonant condition for some consonants, presumably because waveform envelope cues had been distorted. However, for other consonants, especially the voiced stops, consonant amplification improved recognition. Patterns of errors were altered for several consonant groups, including some that showed only small changes in recognition scores. The results indicate that when spectral cues are compromised, nonlinear amplification can alter waveform envelope cues for consonant recognition.
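Amplifying the consonant portion by 10 dB means multiplying its samples by the linear factor 10^(10/20) ≈ 3.16. A minimal sketch of that arithmetic (the toy waveform and segment boundaries are invented, not taken from the study's stimuli):

```python
def amplify_segment(samples, start, end, gain_db):
    """Scale samples[start:end] by the linear factor for gain_db,
    leaving the rest of the waveform unchanged."""
    factor = 10.0 ** (gain_db / 20.0)
    return [x * factor if start <= i < end else x
            for i, x in enumerate(samples)]

# Toy vowel-consonant-vowel waveform; "consonant" occupies indices 4..8
vcv = [0.5, 0.6, 0.5, 0.4, 0.1, 0.12, 0.09, 0.1, 0.4, 0.5]
boosted = amplify_segment(vcv, 4, 8, 10.0)  # +10 dB on the consonant only
```

Scaling only the consonant segment raises the consonant-vowel intensity ratio by 10 dB while the vowel portions, and hence the gross envelope elsewhere, are untouched, which is why the manipulation can distort waveform envelope cues at the segment boundaries.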

