Attention to Facial Regions in Segmental and Prosodic Visual Speech Perception Tasks

1999, Vol. 42(3), pp. 526–539
Author(s): Charissa R. Lansing, George W. McConkie

Two experiments were conducted to test the hypothesis that visual information related to segmental versus prosodic aspects of speech is distributed differently on the talker's face. In the first experiment, eye gaze was monitored for 12 observers with normal hearing as they made decisions about segmental and prosodic categories for utterances presented without sound. Observers spent more time looking at, and directed more gazes toward, the upper part of the talker's face when making decisions about intonation patterns than when identifying the words being spoken. The second experiment tested the Gaze Direction Assumption underlying Experiment 1: that people direct their gaze to the stimulus region containing the information required for their task. Here, 18 observers with normal hearing made the same segmental and prosodic decisions under conditions in which face motion was restricted to selected areas of the face. The results indicate that information in the upper part of the talker's face is more critical for decisions about intonation patterns than for decisions about word segments or primary sentence stress, supporting the Gaze Direction Assumption. Proficiency in visual speech perception thus requires learning where to direct visual attention for cues related to different aspects of speech.

2019, Vol. 62(2), pp. 307–317
Author(s): Jianghua Lei, Huina Gong, Liang Chen

Purpose: The study was designed primarily to determine whether the use of hearing aids (HAs) by individuals with hearing impairment in China affects their speechreading performance.
Method: Sixty-seven young adults with hearing impairment who used HAs and 78 young adults with hearing impairment who did not use HAs completed newly developed Chinese speechreading tests targeting three linguistic levels (words, phrases, and sentences).
Results: The HA group was more accurate at speechreading than the non-HA group across all three linguistic levels. For both groups, speechreading accuracy was higher for phrases than for words and sentences, and speechreading speed was slower for sentences than for words and phrases. Furthermore, years of HA use correlated positively with speechreading accuracy: longer HA use was associated with more accurate speechreading.
Conclusions: Young HA users in China have enhanced speechreading performance relative to their peers with hearing impairment who do not use HAs. This result argues against the perceptual-dependence hypothesis, which holds that greater dependence on visual information leads to improved visual speech perception.


1975, Vol. 40(4), pp. 481–492
Author(s): Norman P. Erber

Hearing-impaired persons usually perceive speech by watching the face of the talker while listening through a hearing aid. Normal-hearing persons also tend to rely on visual cues, especially when they communicate in noisy or reverberant environments. Numerous clinical and laboratory studies on the auditory-visual performance of normal-hearing and hearing-impaired children and adults demonstrate that combined auditory-visual perception is superior to perception through either audition or vision alone. This paper reviews these studies and provides a rationale for routine evaluation of auditory-visual speech perception in audiology clinics.


Author(s): Paula M. T. Smeele, Dominic W. Massaro, Michael M. Cohen, Anne C. Sittig
