Coarticulation Effects in Lipreading

1982 ◽  
Vol 25 (4) ◽  
pp. 600-607 ◽  
Author(s):  
Andre-Pierre Benguerel ◽  
Margaret Kathleen Pichora-Fuller

Normal-hearing and hearing-impaired subjects with good lipreading skills lipread videotaped material under visual-only conditions. V₁CV₂ utterances were used, where V could be /i/, /æ/, or /u/ and C could be /p/, /t/, /k/, /tʃ/, /f/, /θ/, /s/, /ʃ/, or /w/. Coarticulatory effects were present in these stimuli. The influence of phonetic context on lipreading scores for each V and C was analyzed in an effort to explain some of the variability in the visual perception of phonemes suggested by the existing literature. Transmission of information for four phonetic features was also analyzed. Lipreading performance was nearly perfect for /p/, /f/, /w/, /θ/, and /u/. Lipreading performance on /t/, /k/, /tʃ/, /ʃ/, /s/, /i/, and /æ/ depended on context. The features labial, rounded, and alveolar or palatal place of articulation were found to transmit more information to lipreaders than did the feature continuant. Variability in articulatory parameters resulting from coarticulatory effects appears to increase overall lipreading difficulty.
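The feature-transmission analysis mentioned in this abstract is conventionally computed as the mutual information between stimulus and response categories in a confusion matrix (in the tradition of Miller and Nicely). A minimal sketch of that computation, assuming a matrix of response counts; the function name and example data are illustrative, not taken from the study:

```python
import numpy as np

def transmitted_information(confusions):
    """Mutual information (bits) between stimulus and response,
    computed from a confusion matrix of counts
    (rows = stimuli, columns = responses)."""
    p = confusions / confusions.sum()          # joint probabilities
    px = p.sum(axis=1, keepdims=True)          # stimulus marginals
    py = p.sum(axis=0, keepdims=True)          # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))     # 0*log(0) cells become nan
    return np.nansum(terms)

# Hypothetical 2x2 confusion matrix for one binary feature
# (e.g., labial vs. non-labial). Perfect transmission of a
# 50/50 feature yields 1 bit; chance responding yields 0 bits.
perfect = np.array([[50, 0], [0, 50]])
print(transmitted_information(perfect))   # 1.0
```

Per-feature transmission is obtained by collapsing the full phoneme confusion matrix over each feature (labial, rounded, place, continuant) before applying this calculation.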

1975 ◽  
Vol 40 (4) ◽  
pp. 481-492 ◽  
Author(s):  
Norman P. Erber

Hearing-impaired persons usually perceive speech by watching the face of the talker while listening through a hearing aid. Normal-hearing persons also tend to rely on visual cues, especially when they communicate in noisy or reverberant environments. Numerous clinical and laboratory studies on the auditory-visual performance of normal-hearing and hearing-impaired children and adults demonstrate that combined auditory-visual perception is superior to perception through either audition or vision alone. This paper reviews these studies and provides a rationale for routine evaluation of auditory-visual speech perception in audiology clinics.


1984 ◽  
Vol 27 (1) ◽  
pp. 112-118 ◽  
Author(s):  
Deborah Johnson ◽  
Patricia Whaley ◽  
M. F. Dorman

To assess whether young hearing-impaired listeners are as sensitive as normal-hearing children to the cues for stop consonant voicing, we presented stimuli drawn from VOT continua to young normal-hearing listeners and to listeners with mild, moderate, severe, and profound hearing impairments. The response measures were the location of the phonetic boundaries, the change in boundaries with changes in place of articulation, and response variability. The listeners with normal hearing sensitivity and those with mild and moderate hearing impairments did not differ in performance on any response measure. The listeners with severe impairments did not show the expected change in VOT boundary with changes in place of articulation. Moreover, stimulus uncertainty (i.e., the number of possible choices in the response set) affected their response variability. One listener with profound impairment was able to process the cues for voicing in a normal fashion under conditions of minimum stimulus uncertainty. We infer from these results that the cochlear damage which underlies mild and moderate hearing impairment does not significantly alter the auditory representation of VOT. However, the cochlear damage underlying severe impairment, possibly interacting with high signal presentation levels, does alter the auditory representation of VOT.
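A phonetic boundary of the kind measured here is typically estimated by fitting a sigmoid to identification responses along the VOT continuum and reading off the 50% crossover point. A minimal sketch, with hypothetical identification data (the continuum steps and proportions are illustrative, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Proportion of 'voiceless' responses as a function of VOT (ms);
    `boundary` is the 50% crossover, i.e., the phonetic boundary."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical identification data along a /ba/-/pa/ VOT continuum:
# step values in ms and proportion of 'pa' (voiceless) responses.
vot_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_voiceless = np.array([0.0, 0.05, 0.15, 0.60, 0.90, 1.0, 1.0])

(boundary, slope), _ = curve_fit(logistic, vot_ms, p_voiceless,
                                 p0=[30.0, 0.3])
print(round(boundary, 1))   # estimated phonetic boundary in ms
```

Response variability can then be indexed by the shallowness of the fitted slope or by the spread of boundary estimates across repeated runs.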


2020 ◽  
Vol 63 (4) ◽  
pp. 1299-1311 ◽  
Author(s):  
Timothy Beechey ◽  
Jörg M. Buchholz ◽  
Gitte Keidser

Objectives: This study investigates the hypothesis that hearing aid amplification reduces effort within conversation for both hearing aid wearers and their communication partners. Levels of effort, in the form of speech production modifications, required to maintain successful spoken communication in a range of acoustic environments are compared to earlier reported results measured in unaided conversation conditions.

Design: Fifteen young adult normal-hearing participants and 15 older adult hearing-impaired participants were tested in pairs. Each pair consisted of one young normal-hearing participant and one older hearing-impaired participant. Hearing-impaired participants received directional hearing aid amplification, according to their audiogram, via a master hearing aid with gain provided according to the NAL-NL2 fitting formula. Pairs of participants were required to take part in naturalistic conversations through the use of a referential communication task. Each pair took part in five conversations, each of 5-min duration. During each conversation, participants were exposed to one of five different realistic acoustic environments presented through highly open headphones. The ordering of acoustic environments across experimental blocks was pseudorandomized. Resulting recordings of conversational speech were analyzed to determine the magnitude of speech modifications, in terms of vocal level and spectrum, produced by normal-hearing talkers as a function of both acoustic environment and the degree of high-frequency average hearing impairment of their conversation partner.

Results: The magnitude of spectral modifications of speech produced by normal-hearing talkers during conversations with aided hearing-impaired interlocutors was smaller than the speech modifications observed during conversations between the same pairs of participants in the absence of hearing aid amplification.

Conclusions: The provision of hearing aid amplification reduces the effort required to maintain communication in adverse conditions. This reduction in effort provides benefit to hearing-impaired individuals and also to the conversation partners of hearing-impaired individuals. By considering the impact of amplification on both sides of dyadic conversations, this approach contributes to an increased understanding of the likely impact of hearing impairment on everyday communication.
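The speech modifications analyzed above, vocal level and spectrum, can be approximated with simple signal statistics. A minimal sketch, assuming a mono floating-point speech segment; the 1 kHz split frequency and function names are illustrative choices, not the study's actual analysis pipeline:

```python
import numpy as np

def vocal_level_db(signal):
    """RMS level of a speech segment in dB re: full scale."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)))

def high_low_ratio_db(signal, fs, split_hz=1000.0):
    """Energy above vs. below split_hz, in dB. An increase in this
    ratio is one simple index of Lombard-style spectral flattening
    (energy shifted toward higher frequencies in raised speech)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    high = spectrum[freqs >= split_hz].sum()
    low = spectrum[freqs < split_hz].sum()
    return 10 * np.log10(high / low)
```

Comparing these two indices per talker across aided and unaided conversations would quantify the reduction in speech modification that the abstract reports.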


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to rendering an auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was, however, more contracted than that of normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.


1979 ◽  
Vol 22 (2) ◽  
pp. 236-246 ◽  
Author(s):  
Jeffrey L. Danhauer ◽  
Ruth M. Lawarre

Perceptual patterns in rating dissimilarities among 24 CVs were investigated for a group of normal-hearing and two groups of hearing-impaired subjects (one group with flat, and one group with sloping, sensorineural losses). Stimuli were presented binaurally at most comfortable loudness level and subjects rated the 576 paired stimuli on a 1–7 equal-appearing interval scale. Ratings were submitted to individual group and combined INDSCAL analyses to describe features used by the subjects in their perception of the speech stimuli. Results revealed features such as sibilant, sonorant, plosive and place. Furthermore, normal and hearing-impaired subjects used similar features, and subjects' weightings of features were relatively independent of their audiometric configurations. Results are compared to those of previous studies.


1976 ◽  
Vol 19 (2) ◽  
pp. 279-289 ◽  
Author(s):  
Randall B. Monsen

Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low speech intelligibility have generally been ascribed either to aberrant articulation of phonemes or inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ in the speech of the deaf and of the normal hearing. The initial second formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.

