audiovisual integration
Recently Published Documents

TOTAL DOCUMENTS: 349 (five years: 101)
H-INDEX: 33 (five years: 2)

Author(s): Basil Wahn, Laura Schmitz, Alan Kingstone, Anne Böckler-Raettig

Abstract: Eye contact is a dynamic social signal that captures attention and plays a critical role in human communication. In particular, direct gaze often accompanies communicative acts in an ostensive function: a speaker directs her gaze towards the addressee to highlight the fact that this message is being intentionally communicated to her. The addressee, in turn, integrates the speaker’s auditory and visual speech signals (i.e., her vocal sounds and lip movements) into a unitary percept. It is an open question whether the speaker’s gaze affects how the addressee integrates the speaker’s multisensory speech signals. We investigated this question using the classic McGurk illusion, an illusory percept created by presenting mismatching auditory (vocal sounds) and visual information (speaker’s lip movements). Specifically, we manipulated whether the speaker (a) moved his eyelids up/down (i.e., opened/closed his eyes) prior to speaking or did not show any eye motion, and (b) spoke with open or closed eyes. When the speaker’s eyes moved (i.e., opened or closed) before an utterance, and when the speaker spoke with closed eyes, the McGurk illusion was weakened (i.e., addressees reported significantly fewer illusory percepts). In line with previous research, this suggests that motion (opening or closing), as well as the closed state of the speaker’s eyes, captured addressees’ attention, thereby reducing the influence of the speaker’s lip movements on the addressees’ audiovisual integration process. Our findings reaffirm the power of speaker gaze to guide attention, showing that its dynamics can modulate low-level processes such as the integration of multisensory speech signals.


2021, Vol. 12
Author(s): Aleksandra K. Eberhard-Moscicka, Lea B. Jost, Moritz M. Daum, Urs Maurer

Fluent reading is characterized by fast and effortless decoding of visual and phonological information. Here we used event-related potentials (ERPs) and neuropsychological testing to probe the neurocognitive basis of reading in a sample of children with a wide range of reading skills. We report data from 51 children who were measured at two time points, i.e., at the end of first grade (mean age 7.6 years) and at the end of fourth grade (mean age 10.5 years). The aim of this study was to clarify whether, in addition to behavioral measures, basic unimodal and bimodal neural measures help explain variance in the later reading outcome. Specifically, we addressed the question of whether, beyond the previously investigated unimodal measures of N1 print tuning and mismatch negativity (MMN), a bimodal measure of audiovisual integration (AV) contributes to and possibly enhances prediction of the later reading outcome. We found that the largest share of variance in reading was explained by the behavioral measures of rapid automatized naming (RAN), block design, and vocabulary (46%). Furthermore, we demonstrated that both unimodal measures, N1 print tuning (16%) and filtered MMN (7%), predicted reading, suggesting that N1 print tuning at the early stage of reading acquisition is a particularly good predictor of the later reading outcome. Beyond the behavioral measures, the two unimodal neural measures explained an additional 7.2% of variance in reading, indicating that basic neural measures can improve prediction of the later reading outcome over behavioral measures alone. In this study, the AV congruency effect did not significantly predict reading. It is therefore possible that audiovisual congruency effects reflect higher levels of multisensory integration that may be less important during the first year of learning to read and may gain relevance later on.
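The incremental-variance logic in this abstract (behavioral predictors first, then neural predictors added on top) is a hierarchical regression. The following is a minimal sketch of that analysis style, not the authors' actual pipeline: all data are synthetic and all variable names are illustrative assumptions.

```python
# Hedged sketch (not the authors' analysis): hierarchical ordinary
# least-squares regression -- predict a reading outcome from behavioral
# measures alone, then add neural measures and compare variance explained
# (R^2). All data are synthetic; all names are illustrative assumptions.
import numpy as np

def r_squared(X, y):
    """In-sample R^2 of an OLS fit, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 51  # sample size reported in the abstract

# Synthetic stand-ins for the measures named in the abstract
ran = rng.normal(size=n)          # rapid automatized naming
block_design = rng.normal(size=n)
vocabulary = rng.normal(size=n)
n1_print = rng.normal(size=n)     # N1 print tuning (ERP)
mmn = rng.normal(size=n)          # filtered mismatch negativity (ERP)

# Simulated outcome: behavioral measures carry most of the signal,
# neural measures add a smaller independent share, plus noise.
reading = (0.6 * ran + 0.3 * block_design + 0.3 * vocabulary
           + 0.25 * n1_print + 0.15 * mmn + rng.normal(scale=0.5, size=n))

X_behav = np.column_stack([ran, block_design, vocabulary])
X_full = np.column_stack([ran, block_design, vocabulary, n1_print, mmn])

r2_behav = r_squared(X_behav, reading)
r2_full = r_squared(X_full, reading)
print(f"R^2, behavioral only: {r2_behav:.3f}")
print(f"R^2, behavioral + neural: {r2_full:.3f}")
print(f"additional variance explained by neural measures: {r2_full - r2_behav:.3f}")
```

Because the models are nested, the in-sample R² of the full model can never fall below that of the behavioral-only model; the increment is the analogue of the 7.2% of additional variance reported above.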


2021, Vol. 12
Author(s): Iliana I. Karipidis, Georgette Pleisch, Sarah V. Di Pietro, Gorka Fraga-González, Silvia Brem

Reading acquisition in alphabetic languages starts with learning the associations between speech sounds and letters. This learning process is related to crucial developmental changes in brain regions that serve visual, auditory, multisensory integration, and higher cognitive processes. Here, we studied the development of audiovisual processing and integration of letter-speech sound pairs with an audiovisual target detection functional MRI paradigm. Using a longitudinal approach, we tested children with varying reading outcomes before the start of reading acquisition (T1, age 6.5 years), in first grade (T2, 7.5 years), and in second grade (T3, 8.5 years). Early audiovisual integration effects were characterized by higher activation for incongruent than for congruent letter-speech sound pairs in the inferior frontal gyrus and ventral occipitotemporal cortex. Audiovisual processing in the left superior temporal gyrus significantly increased from the prereading (T1) to the early reading stages (T2, T3). Region-of-interest analyses revealed that activation in the left superior temporal gyrus (STG), inferior frontal gyrus, and ventral occipitotemporal cortex increased in children with typical reading fluency skills, while poor readers did not show the same development in these regions. The incongruency effect in bilateral parts of the STG and insular cortex at T1 was significantly associated with reading fluency skills at T3. These findings provide new insights into the development of the brain circuitry involved in audiovisual processing of letters, the building blocks of words, and reveal early markers of audiovisual integration that may be predictive of reading outcomes.


2021
Author(s): Zhichao Xia, Ting Yang, Xin Cui, Fumiko Hoeft, Hong Liu, ...

Mastering grapheme-phoneme correspondence is necessary for developing fluent reading in alphabetic orthographies. In neuroimaging research, this ability is associated with differences in brain activation between audiovisual congruent and incongruent conditions, especially in the left superior temporal cortex. Studies have also shown that this neural audiovisual integration effect is reduced in individuals with dyslexia. However, existing evidence is almost entirely restricted to alphabetic languages. Whether and how multisensory processing of print and sound is impaired in Chinese dyslexia remains underexplored. Of note, semantic information is deeply involved in Chinese character processing. In this study, we applied a functional magnetic resonance imaging audiovisual integration paradigm to investigate possible dysfunctions in processing character-sound pairs and pinyin-sound pairs in Chinese dyslexic children compared with typically developing readers. Unexpectedly, no region displayed a significant group difference in the audiovisual integration effect in either the character or the pinyin experiment. However, the results revealed atypical correlations between the neurofunctional features accompanying audiovisual integration and reading abilities in Chinese children with dyslexia. Specifically, while the audiovisual integration effect in the left inferior cortex during the processing of character-sound pairs correlated with silent reading comprehension proficiency in both the dyslexia and control groups, it was associated with morphological awareness in the control group but with rapid naming in the dyslexia group. As for the processing of pinyin-sound associations, while stronger activation in the congruent than in the incongruent condition in the left occipito-temporal cortex and bilateral superior temporal cortices was associated with better oral word reading in the control group, the opposite pattern was found in children with dyslexia.
On the one hand, this pattern suggests that Chinese dyslexic children have yet to develop the efficient grapho-semantic processing system that typically developing children do. On the other hand, it indicates dysfunctional recruitment of the regions that process pinyin-sound pairs in dyslexia, which may impede character learning.
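The brain-behavior analyses in this abstract reduce to correlating a per-child congruency effect (congruent minus incongruent activation in a region of interest) with a reading measure. The following is an illustrative sketch of that correlation step, not the authors' pipeline: the data, region, and coupling are synthetic assumptions.

```python
# Hedged sketch (illustrative, not the authors' pipeline): compute a
# per-child audiovisual congruency effect (congruent minus incongruent
# activation in a region of interest) and correlate it with a reading
# score. All data below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 30  # hypothetical group size

# Mean ROI activation (arbitrary units) per condition, per child
act_congruent = rng.normal(loc=1.0, scale=0.3, size=n)
act_incongruent = rng.normal(loc=0.7, scale=0.3, size=n)
congruency_effect = act_congruent - act_incongruent

# Simulated reading score loosely coupled to the congruency effect
reading_score = 100 + 10 * congruency_effect + rng.normal(scale=5, size=n)

r = np.corrcoef(congruency_effect, reading_score)[0, 1]
print(f"Pearson r (congruency effect vs. reading score): {r:.2f}")
```

The sign of r is the quantity of interest here: a positive correlation mirrors the control-group pattern described above, while the dyslexia group showed the opposite sign for pinyin-sound pairs.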

