Contributions of local speech encoding and functional connectivity to audio-visual speech perception

eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Bruno L Giordano ◽  
Robin A A Ince ◽  
Joachim Gross ◽  
Philippe G Schyns ◽  
Stefano Panzeri ◽  
...  

Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.
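The abstract above quantifies directed connectivity between cortical regions. As a generic illustration of how directionality can be estimated from two time series (this is time-domain Wiener–Granger causality, not necessarily the measure used in the study; signal names and parameters are hypothetical), one compares the prediction error of a signal's own past against a model that also includes the other signal's past:

```python
import numpy as np

def granger_causality(x, y, order=5):
    """Granger causality from x to y: log ratio of residual variances of
    an AR model of y using only y's past (restricted) vs. y's and x's
    past (full). Larger values mean x's past helps predict y."""
    n = len(y)
    target = y[order:]
    # Lagged design matrices: columns are y[t-1..t-order], x[t-1..t-order]
    past_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    past_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    # Restricted model: y predicted from its own past only
    beta_r, *_ = np.linalg.lstsq(past_y, target, rcond=None)
    var_r = np.var(target - past_y @ beta_r)
    # Full model: y predicted from its own past plus x's past
    full = np.hstack([past_y, past_x])
    beta_f, *_ = np.linalg.lstsq(full, target, rcond=None)
    var_f = np.var(target - full @ beta_f)
    return np.log(var_r / var_f)
```

With a driving signal `x` and a driven signal `y[t] = 0.8 * x[t-1] + noise`, the estimate from x to y is large while the reverse direction stays near zero, which is the asymmetry that directed-connectivity analyses exploit.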

2005 ◽  
Vol 24 (2) ◽  
pp. 335-342 ◽  
Author(s):  
Nobuko Kemmotsu ◽  
Michele E. Villalobos ◽  
Michael S. Gaffrey ◽  
Eric Courchesne ◽  
Ralph-Axel Müller

eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Hyojin Park ◽  
Christoph Kayser ◽  
Gregor Thut ◽  
Joachim Gross

During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG, we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and the speaker’s lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending to visual speech enhances the coherence between activity in visual cortex and the speaker’s lips. Further, we identified a significant partial coherence between left motor cortex and lip movements, and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing.
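Partial coherence, as used in the abstract above, measures the frequency-domain coupling between two signals after removing the linear contribution of a third. A minimal sketch built on SciPy cross-spectral densities (the signal names, sampling rate, and window length are illustrative assumptions, not the study's actual analysis pipeline):

```python
import numpy as np
from scipy.signal import csd

def partial_coherence(x, y, z, fs=250.0, nperseg=256):
    """Magnitude-squared partial coherence between x and y, with the
    linear contribution of z removed at each frequency."""
    # Cross- and auto-spectral densities via Welch's method
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxz = csd(x, z, fs=fs, nperseg=nperseg)
    _, Szy = csd(z, y, fs=fs, nperseg=nperseg)
    _, Sxx = csd(x, x, fs=fs, nperseg=nperseg)
    _, Syy = csd(y, y, fs=fs, nperseg=nperseg)
    _, Szz = csd(z, z, fs=fs, nperseg=nperseg)
    # Residual cross- and auto-spectra after regressing out z
    Sxy_z = Sxy - Sxz * Szy / Szz
    Sxx_z = np.real(Sxx - np.abs(Sxz) ** 2 / Szz)
    Syy_z = np.real(Syy - np.abs(Szy) ** 2 / Szz)
    return f, np.abs(Sxy_z) ** 2 / (Sxx_z * Syy_z)
```

If two signals are coherent only because both follow a common third signal (as lip movements and brain activity might both track the auditory envelope), the ordinary coherence is high while the partial coherence given that third signal drops toward zero.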


NeuroImage ◽  
2019 ◽  
Vol 188 ◽  
pp. 43-58 ◽  
Author(s):  
K. Rubia ◽  
M. Criaud ◽  
M. Wulff ◽  
A. Alegria ◽  
H. Brinson ◽  
...  

2021 ◽  
Author(s):  
Davide Nardo ◽  
Katerina Pappa ◽  
John Duncan ◽  
Peter Zeidman ◽  
Martina F. Callaghan ◽  
...  

Abstract: The left inferior frontal cortex (LIFC) is a key region for spoken language processing, but its neurocognitive architecture remains controversial. Here we assess the domain-generality vs. domain-specificity of the LIFC from behavioural, functional neuroimaging and neuromodulation data. Using concurrent fMRI and transcranial direct current stimulation (tDCS) delivered to the LIFC, we investigated how brain activity and behavioural performance are modulated by task domain (naming vs. non-naming), cognitive challenge (low vs. high), and tDCS (anodal vs. sham). The data revealed: (1) co-existence of neural signatures both common and distinct across tasks within the LIFC; (2) domain-preferential effects of task (naming); (3) significant tDCS modulations of activity in a LIFC sub-region selectively during high-challenge naming. The presence of both domain-specific and domain-general signals, and the existence of a gradient of activation where naming relied more on sub-regions within the LIFC, may help reconcile both perspectives on spoken language processing.


NeuroImage ◽  
2005 ◽  
Vol 25 (3) ◽  
pp. 916-925 ◽  
Author(s):  
Michele E. Villalobos ◽  
Akiko Mizuno ◽  
Branelle C. Dahl ◽  
Nobuko Kemmotsu ◽  
Ralph-Axel Müller
