Combined predictive effects of sentential and visual constraints in early audiovisual speech processing

2019 ◽ Vol 9 (1) ◽ Author(s): Heidi Solberg Økland, Ana Todorović, Claudia S. Lüttke, James M. McQueen, Floris P. de Lange

2006 ◽ Vol 98 (1) ◽ pp. 66-73 ◽ Author(s): Roy H. Hamilton, Jeffrey T. Shenton, H. Branch Coslett

2015 ◽ Vol 19 (2) ◽ pp. 77-100 ◽ Author(s): Przemysław Tomalski

Abstract Apart from their remarkable phonological skills, young infants show, prior to their first birthday, the ability to match the mouth articulation they see with the speech sounds they hear. They are able to detect audiovisual conflict in speech and to selectively attend to the articulating mouth depending on audiovisual congruency. Early audiovisual speech processing is an important aspect of language development, related not only to phonological knowledge, but also to language production during subsequent years. This article reviews recent experimental work delineating the complex developmental trajectory of audiovisual mismatch detection. The central issue is the role of age-related changes in visual scanning of audiovisual speech and the corresponding changes in neural signatures of audiovisual speech processing in the second half of the first year of life. This phenomenon is discussed in the context of recent theories of perceptual development and existing data on the neural organisation of the infant ‘social brain’.


2001 ◽ Vol 18 (1) ◽ pp. 9-21 ◽ Author(s): Tsuhan Chen

2019 ◽ Author(s): Violet Aurora Brown, Julia Feld Strand

It is widely accepted that seeing a talker improves a listener’s ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. We show using a dual-task paradigm that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone—indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).


2007 ◽ Vol 121 (5) ◽ pp. 3044-3044 ◽ Author(s): Ingo Hertrich, Hermann Ackermann, Klaus Mathiak, Werner Lutzenberger

2018 ◽ Vol 117 ◽ pp. 454-471 ◽ Author(s): Ana A. Francisco, Atsuko Takashima, James M. McQueen, Mark van den Bunt, Alexandra Jesse, ...

2013 ◽ Vol 37 (2) ◽ pp. 90-94 ◽ Author(s): David J. Lewkowicz, Ferran Pons

Audiovisual speech consists of overlapping and invariant patterns of dynamic acoustic and optic articulatory information. Research has shown that infants can perceive a variety of basic auditory-visual (A-V) relations, but no studies have investigated whether and when infants begin to perceive the higher-order A-V relations inherent in speech. Here, we asked whether and when infants become capable of recognizing amodal language identity, a critical perceptual skill that is necessary for the development of multisensory communication. Because, at a minimum, such a skill requires the ability to perceive suprasegmental auditory and visual linguistic information, we predicted that it would not emerge before higher-level speech processing and multisensory perceptual skills emerge. Consistent with this prediction, we found that recognition of the amodal identity of language emerges at 10–12 months of age, but that when it emerges it is restricted to infants’ native language.

