Visual Speech Perception
Recently Published Documents

TOTAL DOCUMENTS: 115 (five years: 10)
H-INDEX: 23 (five years: 1)

Author(s): Dorien Ceuleers, Ingeborg Dhooge, Sofie Degeest, Hanneleen Van Steen, Hannah Keppler, ...

2020, Vol 16 (S4)
Author(s): Shannon L. Risacher, Carolyn J. Herbert, Aaron Vosmeier, Makaylah N. Garrett, John D. West, ...

2020, Vol 148 (4), pp. 2747-2747
Author(s): Sandie Keerstock, Kirsten Meemann, Sarah M. Ransom, Rajka Smiljanic

2020, Vol 41 (3), pp. 549-560
Author(s): Mitchell S. Sommers, Brent Spehar, Nancy Tye-Murray, Joel Myerson, Sandra Hale

2020
Author(s): Katharina Dorn

The importance of considering speech perception and language acquisition as a multimodal, that is, audio-visual phenomenon can hardly be ignored in light of recent evidence. Research from this perspective has demonstrated that young infants are sensitive to the match between auditory (i.e., syllables, vowels, and utterances) and visual (i.e., mouth movements) attributes of native and non-native speech, even when these are presented sequentially. Over time, as infants gain more experience, their perception and processing of native-language attributes improve, while their sensitivity to non-native attributes appears to decline (perceptual narrowing). Empirical findings on perceptual narrowing are ambiguous with regard to the onset and the extent of this tuning phenomenon, but there is evidence that factors such as the richness and the presentation of the stimuli play a crucial role. Recently, there has been renewed interest in face-scanning behavior, mainly because eye-tracking devices have made more objective and precise analyses of infants' gaze patterns possible. Face-scanning behavior is directly associated with audio-visual speech processing, and both have an impact on infants' later expressive language development.

However, no previous study has examined the distance between the native and a non-native language in the context of audio-visual speech processing: earlier studies have exclusively compared distant languages belonging to different rhythm classes, not closer languages belonging to the same rhythm class. Languages that differ little in global rhythmic-prosodic cues but do differ in more specific phonological and phonetic attributes might affect audio-visual matching and face-scanning behavior in early infancy. This influence could provide insights into how fine-grained these perception and processing mechanisms are during infancy, when they narrow toward the infant's native language, and which facial areas infants draw on at different time points during infancy to obtain enough (redundant) cues to acquire their native language(s). Furthermore, no previous study has combined a longitudinal perspective on infants with a cross-linguistic view of languages in order to reduce inter-individual differences across age groups and to generalize the emergence of perceptual narrowing as a cross-linguistic phenomenon.

Hence, the present synopsis comprises three studies that address these perspectives on infants' early audio-visual perception of languages belonging to the same rhythm class, investigating early audio-visual matching sensitivities (Study 1), the occurrence of perceptual narrowing (Study 2), and face-scanning behavior during the first year of life and its impact on infants' later expressive vocabulary (Study 3). It summarizes the current state of the empirical literature on speech perception, language discrimination, and face-scanning behavior before identifying important research gaps, formulating the relevant research questions, presenting the designs and main results of the three empirical studies, and finally discussing the findings and their possible implications for future research and practice. The studies are based on data collected at the Bamberg Baby Institute at the University of Bamberg (Germany) and the Uppsala Child and Baby Lab at Uppsala University (Sweden). Whereas the first and second studies were based on a cross-linguistic dataset of German and Swedish infants, the third study's dataset consisted only of German infants, who were additionally followed longitudinally.
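
The gaze-pattern analyses mentioned above typically rest on assigning eye-tracking samples to facial areas of interest (AOIs), such as the eyes and the mouth of a talking face. A minimal Python sketch of that idea follows, assuming normalized screen coordinates; the AOI rectangles and example gaze points are hypothetical and are not taken from the studies listed here.

# Hypothetical sketch: share of eye-tracking samples falling into "eyes" vs.
# "mouth" areas of interest (AOIs). AOI boundaries and gaze points are
# illustrative assumptions, not data from the studies above.
from dataclasses import dataclass

@dataclass
class AOI:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

# AOIs for a talking face in normalized screen coordinates (assumed values).
AOIS = {"eyes": AOI(0.30, 0.20, 0.70, 0.40), "mouth": AOI(0.35, 0.55, 0.65, 0.75)}

def aoi_proportions(gaze_samples):
    """Return the share of gaze samples that falls into each AOI."""
    counts = {name: 0 for name in AOIS}
    total = 0
    for x, y in gaze_samples:
        total += 1
        for name, aoi in AOIS.items():
            if aoi.contains(x, y):
                counts[name] += 1
    return {name: (count / total if total else 0.0) for name, count in counts.items()}

# Example: an infant who looks mostly at the mouth region of the talker.
print(aoi_proportions([(0.50, 0.60), (0.52, 0.65), (0.40, 0.30), (0.90, 0.90)]))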


Author(s): Orsolya B. Kolozsvári, Weiyong Xu, Paavo H. T. Leppänen, Jarmo A. Hämäläinen

2019, Vol 62 (2), pp. 307-317
Author(s): Jianghua Lei, Huina Gong, Liang Chen

Purpose: The study was designed primarily to determine whether the use of hearing aids (HAs) by individuals with hearing impairment in China affects their speechreading performance. Method: Sixty-seven young adults with hearing impairment who used HAs and 78 young adults with hearing impairment who did not use HAs completed newly developed Chinese speechreading tests targeting 3 linguistic levels (i.e., words, phrases, and sentences). Results: The group with HAs was more accurate at speechreading than the group without HAs across the 3 linguistic levels. For both groups, speechreading accuracy was higher for phrases than for words and sentences, and speechreading speed was slower for sentences than for words and phrases. Furthermore, there was a positive correlation between years of HA use and speechreading accuracy; longer HA use was associated with more accurate speechreading. Conclusions: Young HA users in China show enhanced speechreading performance relative to their peers with hearing impairment who do not use HAs. This result argues against the perceptual-dependence hypothesis, which suggests that greater dependence on visual information leads to improvement in visual speech perception.
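
The positive correlation reported between years of HA use and speechreading accuracy is the kind of relationship usually quantified with Pearson's r. A brief Python sketch follows; the values below are invented purely for illustration and are not data from the study summarized above.

import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented values for illustration only (not data from the study above).
years_of_ha_use = [1, 3, 5, 8, 10, 12]
speechreading_accuracy = [0.42, 0.48, 0.55, 0.58, 0.63, 0.66]

print(f"r = {pearson_r(years_of_ha_use, speechreading_accuracy):.2f}")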

