The influence of auditory-visual speech and clear speech on cross-language perceptual assimilation

2017 ◽  
Vol 92 ◽  
pp. 114-124
Author(s):  
Sarah E. Fenwick ◽  
Catherine T. Best ◽  
Chris Davis ◽  
Michael D. Tyler
1997 ◽  
Vol 40 (2) ◽  
pp. 432-443 ◽  
Author(s):  
Karen S. Helfer

Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than conversational speech) and presentation mode (auditory-visual presentation led to better performance than auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of each of these effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.
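The additivity claim above (total benefit equals the sum of the individual benefits) can be sketched as a simple arithmetic check over the four conditions. All intelligibility values below are invented for illustration; the study's actual scores are not reproduced here.

```python
# Hypothetical proportion-correct scores for the four speaking-mode x
# presentation-mode conditions; values are illustrative only.
conditions = {
    ("conversational", "auditory"): 0.40,
    ("clear", "auditory"): 0.55,
    ("conversational", "auditory-visual"): 0.60,
    ("clear", "auditory-visual"): 0.75,
}

baseline = conditions[("conversational", "auditory")]
clear_benefit = conditions[("clear", "auditory")] - baseline
visual_benefit = conditions[("conversational", "auditory-visual")] - baseline
combined_benefit = conditions[("clear", "auditory-visual")] - baseline

# Additivity (complementary, non-redundant cues): the combined benefit
# should approximately equal the sum of the two individual benefits.
print(round(clear_benefit + visual_benefit, 2))  # 0.35
print(round(combined_benefit, 2))                # 0.35
```

If the two cues were redundant, the combined benefit would fall short of the sum; equality is what supports the complementary-information interpretation.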


2019 ◽  
Vol 23 ◽  
pp. 233121651983786 ◽  
Author(s):  
Catherine L. Blackburn ◽  
Pádraig T. Kitterick ◽  
Gary Jones ◽  
Christian J. Sumner ◽  
Paula C. Stacey

Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker. This is of particular value to hearing-impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility. How these factors affect the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or the intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that visual speech benefit depended upon the audio intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single "independent noise" signal detection theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, similar to audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.
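The "independent noise" signal detection prediction mentioned above can be sketched with the standard result for two independent Gaussian channels combined optimally: sensitivity adds in quadrature. This is a generic textbook formulation, not the authors' specific model code, and the d′ values below are invented for illustration.

```python
import math

def predicted_av_dprime(d_audio: float, d_visual: float) -> float:
    """Predicted audio-visual sensitivity under the independent-noise
    assumption: d'_AV = sqrt(d'_A^2 + d'_V^2)."""
    return math.sqrt(d_audio ** 2 + d_visual ** 2)

d_a = 1.0    # hypothetical auditory-only sensitivity
d_v = 0.75   # hypothetical visual-only (lipreading) sensitivity
print(predicted_av_dprime(d_a, d_v))  # 1.25
```

A fixed combination rule like this predicts a single level of audio-visual benefit from the unisensory sensitivities alone, which is why it cannot capture benefit that varies with background or target-talker characteristics.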


2004 ◽  
Vol 20 (4) ◽  
pp. 349-357 ◽  
Author(s):  
Ahmed M. Abdel-Khalek ◽  
Joaquin Tomás-Sabádo ◽  
Juana Gómez-Benito

Summary: To construct a Spanish version of the Kuwait University Anxiety Scale (S-KUAS), the Arabic and English versions of the KUAS were separately translated into Spanish. To check comparability of meaning, the two preliminary Spanish translations were thoroughly scrutinized against both the Arabic and English forms by several experts. Bilingual subjects were used to explore the cross-language equivalence of the English and Spanish versions of the KUAS. The correlation between the total scores on both versions was .93, and the t value was .30 (n.s.), denoting good equivalence. Alpha coefficients and 4-week test-retest reliabilities were greater than .84, while the criterion-related validity was .70 against scores on the trait subscale of the STAI. These findings denote good reliability and validity of the S-KUAS. Factor analysis yielded three high-loaded factors of Behavioral/Subjective, Cognitive/Affective, and Somatic Anxiety, equivalent to the original Arabic version. Female (n = 210) undergraduates attained significantly higher mean scores than their male (n = 102) counterparts. For the combined group of males and females, the correlation between the total score on the S-KUAS and age was -.17 (p < .01). By and large, the findings of the present study provide evidence of the utility of the S-KUAS in assessing trait anxiety levels in the Spanish undergraduate context.


2018 ◽  
Vol 54 (7) ◽  
pp. 1289-1289
Author(s):  
Margaret Friend ◽  
Erin Smolak ◽  
Yushuang Liu ◽  
Diane Poulin-Dubois ◽  
Pascal Zesiger

2012 ◽  
Author(s):  
Peter P. J. L. Verkoeijen ◽  
Samantha Bouwmeester ◽  
Gino Camp
