Visual‐speech intelligibility for syllables: A comparison of conversational and clear speech

1995 ◽  
Vol 98 (5) ◽  
pp. 2983-2983
Author(s):  
Jean‐Pierre Gagné ◽  
Anne‐Josée Rochette

2019 ◽  
Vol 23 ◽  
pp. 233121651983786 ◽  
Author(s):  
Catherine L. Blackburn ◽  
Pádraig T. Kitterick ◽  
Gary Jones ◽  
Christian J. Sumner ◽  
Paula C. Stacey

Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker. This is of particular value to hearing-impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility. How these factors affect the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or the intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that visual speech benefit depended upon the audio intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single “independent noise” signal detection theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, similar to audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.
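As an illustration of how an independent-noise signal detection model can generate an audio-visual prediction, the sketch below applies the textbook optimal-integration rule d′_AV = √(d′_A² + d′_V²) to hypothetical unimodal scores. It is a minimal sketch of the general idea, not the specific model fitted in this study.

```python
# A minimal sketch, assuming a textbook independent-noise signal detection rule
# (d'_AV = sqrt(d'_A**2 + d'_V**2)); not necessarily the exact model fitted in
# the study above. Scores are hypothetical 2AFC proportions correct.
from scipy.stats import norm

def dprime_from_pc(pc):
    # 2AFC relation: d' = sqrt(2) * z(proportion correct)
    return 2 ** 0.5 * norm.ppf(pc)

def pc_from_dprime(dp):
    # Inverse of the 2AFC relation above
    return norm.cdf(dp / 2 ** 0.5)

pc_audio, pc_visual = 0.65, 0.60   # hypothetical auditory-only and visual-only scores
d_av = (dprime_from_pc(pc_audio) ** 2 + dprime_from_pc(pc_visual) ** 2) ** 0.5
print(f"Predicted audio-visual proportion correct: {pc_from_dprime(d_av):.2f}")
```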


2017 ◽  
Vol 92 ◽  
pp. 114-124
Author(s):  
Sarah E. Fenwick ◽  
Catherine T. Best ◽  
Chris Davis ◽  
Michael D. Tyler

1997 ◽  
Vol 40 (2) ◽  
pp. 432-443 ◽  
Author(s):  
Karen S. Helfer

Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the nature of information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than was conversational speech) and presentation mode (auditory-visual presentation led to better performance than did auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of each of these effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.
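The additivity reported above can be made concrete with a small worked example using hypothetical percent-correct scores (the numbers below are illustrative, not data from this study).

```python
# A minimal worked example of additive benefits, assuming hypothetical scores.
# If clear speech and visual cues each contribute an independent gain, the
# auditory-visual clear-speech score should sit near the sum of the two gains.
ao_conversational = 50.0   # auditory-only, conversational speech (%)
ao_clear = 65.0            # auditory-only, clear speech (%)
av_conversational = 70.0   # auditory-visual, conversational speech (%)

clear_benefit = ao_clear - ao_conversational             # +15 points
visual_benefit = av_conversational - ao_conversational   # +20 points

predicted_av_clear = ao_conversational + clear_benefit + visual_benefit
print(f"Predicted auditory-visual clear-speech score: {predicted_av_clear:.0f}%")  # ~85%
```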


2018 ◽  
Vol 37 (2) ◽  
pp. 159 ◽  
Author(s):  
Fatemeh Vakhshiteh ◽  
Farshad Almasganj ◽  
Ahmad Nickabadi

Lip-reading is commonly understood as visually interpreting a speaker's lip movements during speech. Experiments over many years have shown that speech intelligibility increases when visual facial information is available, and this effect becomes more pronounced in noisy environments. Automating the process raises several challenges, such as the coarticulation phenomenon, the choice of visual units, feature diversity, and inter-speaker dependency. Although efforts have been made to overcome these challenges, a flawless lip-reading system remains elusive. This paper seeks a lip-reading model with an efficient arrangement of processing blocks for extracting highly discriminative visual features, highlighting the application of a properly structured Deep Belief Network (DBN)-based recognizer. Multi-speaker (MS) and speaker-independent (SI) tasks are performed on the CUAVE database, yielding phone recognition rates (PRRs) of 77.65% and 73.40%, respectively. The best word recognition rates (WRRs) achieved in the MS and SI tasks are 80.25% and 76.91%, respectively. These accuracies demonstrate that the proposed method outperforms a conventional Hidden Markov Model (HMM) baseline and competes well with state-of-the-art visual speech recognition systems.
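A minimal sketch of the DBN-style approach is given below: greedy layer-wise RBM pretraining of visual features followed by a supervised read-out. The feature dimensions, layer sizes, and toy data are placeholders and do not reflect the authors' CUAVE configuration.

```python
# A minimal sketch, assuming a DBN-style recognizer built from stacked RBMs
# plus a supervised read-out; layer sizes and data are placeholders, not the
# authors' configuration.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# X: one row per video frame of mouth-region features scaled to [0, 1]
# (e.g., flattened DCT coefficients of the lip ROI); y: frame-level phone labels.
rng = np.random.default_rng(0)
X = rng.random((2000, 64))           # placeholder visual features
y = rng.integers(0, 10, size=2000)   # placeholder phone classes

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("readout", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print(f"Frame-level phone accuracy (toy data): {dbn.score(X, y):.2f}")
```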


2005 ◽  
Vol 118 (3) ◽  
pp. 1931-1931
Author(s):  
Kenneth Konopka ◽  
Rajka Smiljanic ◽  
Ann Bradlow

2021 ◽  
Author(s):  
Hoyoung Yi ◽  
Ashly Pingsterhaus ◽  
Woonyoung Song

The coronavirus pandemic has resulted in the recommended or required use of face masks in public. Wearing a face mask compromises communication, especially in the presence of competing noise. Measuring the potential adverse effects of face masks on speech intelligibility in communication contexts with excessive background noise is crucial for finding solutions to this challenge. Accordingly, the effects of wearing transparent face masks and of using clear speech to support verbal communication were evaluated here. We measured listener word identification scores across four factors: (1) mask type (no mask, transparent mask, or disposable paper mask), (2) presentation mode (auditory-only or audiovisual), (3) speaking style (conversational or clear speech), and (4) background noise type (speech-shaped noise or four-talker babble) at a signal-to-noise ratio of −5 dB. Results showed that, in the presence of noise, listeners performed worse when the speaker wore a disposable paper mask or a transparent mask than when the speaker wore no mask. Listeners correctly identified more words in the audiovisual condition and when listening to clear speech. These results indicate that the combination of face masks and background noise negatively impacts speech intelligibility. Transparent masks facilitate understanding of target sentences by providing visual information, and clear speech was shown to alleviate challenging communication situations, including those with no visual cues and a degraded acoustic signal.
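For readers reproducing the listening conditions, the sketch below shows one common way to mix a target sentence with a masker at −5 dB SNR by RMS scaling. The signals are placeholders; only the SNR-scaling step is the point of the example.

```python
# A minimal sketch, assuming RMS-based SNR scaling; the speech and masker
# arrays are placeholders, not the study's stimuli.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio equals `snr_db`, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    noise = noise[: len(speech)]
    target_noise_rms = rms(speech) / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / rms(noise))

fs = 16000
speech = np.random.randn(fs * 2) * 0.1    # placeholder 2-s "sentence"
babble = np.random.randn(fs * 2) * 0.05   # placeholder masker
mixture = mix_at_snr(speech, babble, snr_db=-5)
```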


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Hyojin Park ◽  
Christoph Kayser ◽  
Gregor Thut ◽  
Joachim Gross

During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and speaker’s lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending visual speech enhances the coherence between activity in visual cortex and the speaker’s lips. Further, we identified a significant partial coherence between left motor cortex and lip movements and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing.
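The sketch below illustrates, on synthetic signals, how coherence and partial coherence of the kind described above can be estimated: cross-spectra are computed between a lip-aperture signal and a brain time course, and the contribution of the auditory speech envelope is then removed. The spectral settings and signals are placeholders, not those used in this study.

```python
# A minimal sketch, assuming synthetic stand-ins for the lip-aperture signal,
# the auditory speech envelope, and a single MEG time course; settings are
# illustrative, not those of the study above.
import numpy as np
from scipy.signal import csd

fs, n = 250, 250 * 60                                # 1 min at 250 Hz (placeholder)
rng = np.random.default_rng(1)
lips = rng.standard_normal(n)                        # lip-aperture time series
envelope = 0.6 * lips + rng.standard_normal(n)       # correlated auditory envelope
brain = 0.5 * lips + 0.5 * envelope + rng.standard_normal(n)  # "MEG" signal

def spec(a, b):
    f, s = csd(a, b, fs=fs, nperseg=fs * 2)
    return f, s

f, Sxy = spec(brain, lips)
_, Sxx = spec(brain, brain)
_, Syy = spec(lips, lips)
_, Sxz = spec(brain, envelope)
_, Syz = spec(lips, envelope)
_, Szz = spec(envelope, envelope)

# Ordinary lip-brain coherence per frequency bin
coh = np.abs(Sxy) ** 2 / (Sxx.real * Syy.real)

# Partial coherence: remove the part of lip-brain coupling explained by the envelope
num = np.abs(Sxy - Sxz * np.conj(Syz) / Szz) ** 2
den = (Sxx.real - np.abs(Sxz) ** 2 / Szz.real) * (Syy.real - np.abs(Syz) ** 2 / Szz.real)
partial_coh = num / den
```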

