multisensory enhancement
Recently Published Documents


TOTAL DOCUMENTS: 40 (five years: 1)
H-INDEX: 14 (five years: 0)

Author(s):  
Scott A. Smyre ◽  
Zhengyang Wang ◽  
Barry E. Stein ◽  
Benjamin A. Rowland




2020 ◽  
Author(s):  
Aisling E. O’Sullivan ◽  
Michael J. Crosse ◽  
Giovanni M. Di Liberto ◽  
Alain de Cheveigné ◽  
Edmund C. Lalor

Abstract
Seeing a speaker’s face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker’s face provides temporal cues to auditory cortex, and articulatory information from the speaker’s mouth can aid the recognition of specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here we sought to provide insight into this question by examining EEG responses to natural audiovisual, audio, and visual speech in quiet and in noise. Specifically, we represented our speech stimuli in terms of their spectrograms and their phonetic features, and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis. The encoding of both spectrotemporal and phonetic features was shown to be more robust in audiovisual speech responses than would be expected from the summation of the audio and visual speech responses, consistent with the literature on multisensory integration. Furthermore, the strength of this multisensory enhancement was more pronounced at the level of phonetic processing for speech in noise relative to speech in quiet, indicating that listeners rely more on articulatory details from visual speech in challenging listening conditions. These findings support the notion that the integration of audio and visual speech is a flexible, multistage process that adapts to optimize comprehension based on the current listening conditions.
Significance Statement
During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and to vary flexibly depending on the listening conditions. Here we examine audiovisual integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how audiovisual integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions, and when the speech is noisy, we find enhanced integration at the phonetic stage of processing. These findings provide support for the multistage integration framework and demonstrate its flexibility in terms of a greater reliance on visual articulatory information in challenging listening conditions.
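
As a rough illustration of the analysis approach named in the abstract, the sketch below quantifies how strongly stimulus features are encoded in the EEG via canonical correlation analysis and compares the audiovisual response against the sum of the unimodal responses. This is a minimal sketch, not the authors’ pipeline: the arrays `stim`, `eeg_av`, `eeg_a`, and `eeg_v`, the lag range, and the use of scikit-learn’s CCA are all assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA


def lagged(x, n_lags):
    """Stack time-lagged copies of the stimulus features (lags 0 .. n_lags-1 samples)."""
    # np.roll wraps around at the edges; acceptable for a sketch, trim edges in real use.
    return np.hstack([np.roll(x, lag, axis=0) for lag in range(n_lags)])


def encoding_strength(stim, eeg, n_lags=16, n_components=2):
    """Canonical correlations between time-lagged stimulus features (time x features)
    and EEG (time x channels), used as a measure of encoding strength."""
    cca = CCA(n_components=n_components)
    Xc, Yc = cca.fit_transform(lagged(stim, n_lags), eeg)
    return np.array([np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
                     for k in range(n_components)])


def multisensory_enhancement(stim, eeg_av, eeg_a, eeg_v):
    """Compare AV encoding strength with that of the summed A + V responses."""
    r_av = encoding_strength(stim, eeg_av)
    r_sum = encoding_strength(stim, eeg_a + eeg_v)
    return r_av - r_sum  # positive values suggest super-additive (multisensory) encoding
```

Running `multisensory_enhancement` separately on spectrogram and phonetic-feature representations would give per-representation enhancement values, mirroring the two processing stages contrasted in the abstract.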



Vision ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 12
Author(s):  
Hiu Mei Chow ◽  
Xenia Leviyah ◽  
Vivian M. Ciaramitaro

While previous research has investigated key factors contributing to multisensory integration in isolation, relatively little is known about how these factors interact, especially when considering the enhancement of visual contrast sensitivity by a task-irrelevant sound. Here we explored how auditory stimulus properties, namely salience and temporal phase coherence in relation to the visual target, jointly affect the extent to which a sound can enhance visual contrast sensitivity. Visual contrast sensitivity was measured with a psychophysical task in which human adult participants reported the location of a visual Gabor pattern presented at various contrast levels. We expected contrast sensitivity to be most enhanced (i.e., the contrast threshold to be lowest) when the visual stimulus was accompanied by a task-irrelevant sound that was weak in auditory salience and modulated in phase with the visual stimulus (strong temporal phase coherence). Our expectations were confirmed, but only when we accounted for individual differences in the auditory salience level that induced maximal multisensory enhancement. Our findings highlight the importance of interactions between temporal phase coherence and stimulus effectiveness in determining the strength of multisensory enhancement of visual contrast, as well as the importance of accounting for individual differences.
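
For readers unfamiliar with how a contrast threshold is extracted from such a task, the sketch below fits a Weibull psychometric function to proportion-correct data and compares thresholds with and without the task-irrelevant sound. It is a minimal sketch under assumed conditions (a two-alternative localization task with 50% chance performance); the function form, parameter bounds, and the numbers in the usage example are illustrative and not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit


def weibull(contrast, threshold, slope, guess=0.5, lapse=0.02):
    """Weibull psychometric function: p(correct) as a function of stimulus contrast."""
    return guess + (1.0 - guess - lapse) * (1.0 - np.exp(-(contrast / threshold) ** slope))


def fit_threshold(contrasts, prop_correct):
    """Fit the Weibull and return (threshold, slope); guess and lapse rates stay fixed."""
    popt, _ = curve_fit(weibull, contrasts, prop_correct,
                        p0=[np.median(contrasts), 2.0],
                        bounds=([1e-4, 0.5], [1.0, 10.0]))
    return popt


# Illustrative made-up numbers, only to show how the two conditions would be compared.
contrasts = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
pc_visual_only = np.array([0.52, 0.55, 0.68, 0.85, 0.95, 0.99])
pc_with_sound = np.array([0.55, 0.62, 0.78, 0.92, 0.97, 0.99])
thr_v, _ = fit_threshold(contrasts, pc_visual_only)
thr_av, _ = fit_threshold(contrasts, pc_with_sound)
print(f"multisensory enhancement (threshold reduction): {thr_v - thr_av:.3f}")
```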



Neuroscience ◽  
2019 ◽  
Vol 418 ◽  
pp. 254-265 ◽  
Author(s):  
Danja K. Porada ◽  
Christina Regenbogen ◽  
Janina Seubert ◽  
Jessica Freiherr ◽  
Johan N. Lundström


Cognition ◽  
2019 ◽  
Vol 187 ◽  
pp. 38-49 ◽  
Author(s):  
J. Lunn ◽  
A. Sjoblom ◽  
J. Ward ◽  
S. Soto-Faraco ◽  
S. Forster




2018 ◽  
Vol 120 (1) ◽  
pp. 139-148
Author(s):  
Makoto Someya ◽  
Hiroto Ogawa

Detecting predators is crucial for survival. In insects, a few sensory interneurons that receive input from a distinct receptive organ extract specific features informing the animal about approaching predators and mediate avoidance behaviors. Although integration of multiple sensory cues relevant to the predator enhances sensitivity and precision, it has not been established whether the sensory interneurons that act as predator detectors integrate multiple modalities of sensory inputs elicited by predators. Using intracellular recording techniques, we found that the cricket auditory neuron AN2, which is sensitive to the ultrasound-like echolocation calls of bats, also responds to airflow stimuli transduced by the cercal organ, a mechanoreceptor in the abdomen. AN2 enhanced its spike output in response to cross-modal stimuli combining sound with airflow, and the linearity of multisensory summation depended on the magnitude of the evoked response. The enhanced AN2 activity contained bursts, which trigger avoidance behavior. Moreover, cross-modal stimuli elicited larger and longer-lasting excitatory postsynaptic potentials (EPSPs) than unimodal stimuli, which would result from sublinear summation of the EPSPs evoked by sound and airflow, respectively. The persistence of EPSPs was correlated with the occurrence and structure of burst activity. Our findings indicate that AN2 integrates bimodal signals and that multisensory integration, rather than unimodal stimulation alone, more reliably generates bursting activity.
NEW & NOTEWORTHY
Crickets detect ultrasound with their tympanum and airflow with their cercal organ and process both as alert signals of predators. These sensory signals are integrated by the auditory neuron AN2 in the early stages of sensory processing. Multisensory inputs from different sensory channels enhanced excitatory postsynaptic potentials to facilitate burst firing, which could trigger avoidance steering in flying crickets. Our results highlight the cellular basis of multisensory integration in AN2 and its possible effects on escape behavior.
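
The sketch below illustrates the kind of summation-linearity and burst measures described here, computed from per-trial spike counts and spike times. The enhancement and additivity indices follow common conventions in the multisensory literature, and the burst criterion (a fixed inter-spike-interval threshold) is a generic one; none of the variable names, thresholds, or data come from the paper.

```python
import numpy as np


def enhancement_index(cross_modal_counts, best_unimodal_counts):
    """Percent increase of the mean cross-modal response over the best unimodal response."""
    best = np.mean(best_unimodal_counts)
    return 100.0 * (np.mean(cross_modal_counts) - best) / best


def additivity_index(cross_modal_counts, sound_counts, airflow_counts):
    """Ratio of the cross-modal response to the sum of the unimodal responses.

    Values > 1 indicate supralinear summation; values < 1 indicate sublinear summation.
    """
    return np.mean(cross_modal_counts) / (np.mean(sound_counts) + np.mean(airflow_counts))


def count_bursts(spike_times_s, max_isi_s=0.01, min_spikes=3):
    """Count bursts, defined here as runs of >= min_spikes spikes with ISIs <= max_isi_s."""
    isis = np.diff(np.sort(spike_times_s))
    bursts, run = 0, 0
    for short in (isis <= max_isi_s):
        run = run + 1 if short else 0
        if run == min_spikes - 1:  # a run of (min_spikes - 1) short ISIs spans min_spikes spikes
            bursts += 1
    return bursts
```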



2018 ◽  
Vol 11 ◽  
Author(s):  
Eva C. Bach ◽  
John W. Vaughan ◽  
Barry E. Stein ◽  
Benjamin A. Rowland


2017 ◽  
Vol 236 (2) ◽  
pp. 409-417 ◽  
Author(s):  
Ayla Barutchu ◽  
Charles Spence ◽  
Glyn W. Humphreys

