Early stages of audiovisual speech processing—a magnetoencephalography study

2007 ◽  
Vol 121 (5) ◽  
pp. 3044-3044
Author(s):  
Ingo Hertrich ◽  
Hermann Ackermann ◽  
Klaus Mathiak ◽  
Werner Lutzenberger

2006 ◽  
Vol 98 (1) ◽  
pp. 66-73 ◽  
Author(s):  
Roy H. Hamilton ◽  
Jeffrey T. Shenton ◽  
H. Branch Coslett

2015 ◽  
Vol 19 (2) ◽  
pp. 77-100 ◽  
Author(s):  
Przemysław Tomalski

Abstract Apart from their remarkable phonological skills, young infants prior to their first birthday show the ability to match the mouth articulation they see with the speech sounds they hear. They are able to detect audiovisual conflict in speech and to selectively attend to the articulating mouth depending on audiovisual congruency. Early audiovisual speech processing is an important aspect of language development, related not only to phonological knowledge but also to language production during subsequent years. This article reviews recent experimental work delineating the complex developmental trajectory of audiovisual mismatch detection. The central issue is the role of age-related changes in visual scanning of audiovisual speech and the corresponding changes in neural signatures of audiovisual speech processing in the second half of the first year of life. This phenomenon is discussed in the context of recent theories of perceptual development and existing data on the neural organisation of the infant ‘social brain’.


2001 ◽  
Vol 18 (1) ◽  
pp. 9-21 ◽  
Author(s):  
Tsuhan Chen

2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Julia Feld Strand

It is widely accepted that seeing a talker improves a listener’s ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. We show using a dual-task paradigm that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone—indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).


2018 ◽  
Vol 117 ◽  
pp. 454-471 ◽  
Author(s):  
Ana A. Francisco ◽  
Atsuko Takashima ◽  
James M. McQueen ◽  
Mark van den Bunt ◽  
Alexandra Jesse ◽  
...  
