Recurrent network for multisensory integration: identification of common sources of audiovisual stimuli

Author(s):  
Itsuki Yamashita ◽  
Kentaro Katahira ◽  
Yasuhiko Igarashi ◽  
Kazuo Okanoya ◽  
Masato Okada
2020 ◽  
Vol 11 ◽  
Author(s):  
Ayla Barutchu ◽  
Charles Spence

Multisensory integration can alter information processing, and previous research has shown that such processes are modulated by sensory switch costs and prior experience (e.g., semantic or letter congruence). Here we report an incidental finding demonstrating, for the first time, the interplay between these processes and experimental factors, specifically the presence (vs. absence) of the experimenter in the testing room. Experiment 1 demonstrates that multisensory motor facilitation in response to audiovisual stimuli (a circle and a tone with no prior learnt associations) is higher on trials in which the sensory modality switches than on those in which it repeats. Participants who completed the study while alone also exhibited increased RT variability. Experiment 2 replicated these findings using the letters “b” and “d” presented as unisensory stimuli or as congruent and incongruent multisensory stimuli (i.e., grapheme-phoneme pairs). Multisensory enhancements were inflated following a sensory switch: in the monitored condition, both congruent and incongruent multisensory stimuli yielded significant gains after a switch. When the participants were left alone, however, multisensory enhancements were only observed for repeating incongruent multisensory stimuli. These incidental findings therefore suggest that the effects of letter congruence and sensory switching on multisensory integration are partly modulated by the presence of an experimenter.
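The switch-cost and multisensory-gain measures described in this abstract reduce to simple comparisons of mean reaction times across trial types. A minimal Python sketch, using entirely made-up trial data and illustrative variable names (not the authors' data or analysis code):

```python
from statistics import mean, stdev

# Hypothetical trials: (modality, reaction time in ms). All values are
# made up for illustration; 'AV' marks audiovisual (multisensory) trials.
trials = [
    ("A", 480), ("A", 470), ("V", 560), ("V", 490),
    ("AV", 520), ("AV", 430), ("A", 555), ("AV", 505),
]

# Label every trial after the first as a modality 'switch' or 'repeat'.
labelled = [
    ("repeat" if mod == prev_mod else "switch", mod, rt)
    for (prev_mod, _), (mod, rt) in zip(trials, trials[1:])
]

switch_rts = [rt for kind, _, rt in labelled if kind == "switch"]
repeat_rts = [rt for kind, _, rt in labelled if kind == "repeat"]

# Switch cost: responses are slower when the modality just changed.
switch_cost = mean(switch_rts) - mean(repeat_rts)

# Multisensory gain: AV responses faster than the unisensory mean.
av_rts = [rt for _, mod, rt in labelled if mod == "AV"]
uni_rts = [rt for _, mod, rt in labelled if mod != "AV"]
gain = mean(uni_rts) - mean(av_rts)

# Overall RT variability (the abstract links this to testing alone).
rt_variability = stdev(rt for _, _, rt in labelled)
```

With these toy numbers the switch cost and the multisensory gain both come out positive, mirroring the direction of the reported effects; real analyses would of course test such differences inferentially across many trials and participants.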


2019 ◽  
Vol 31 (8) ◽  
pp. 1155-1172 ◽  
Author(s):  
Jean-Paul Noel ◽  
Andrea Serino ◽  
Mark T. Wallace

The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to the reduced distance between sensory stimuli when close to the body, in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we present auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit (an approximation of PPS) while recording continuous high-density EEG. We ask whether multisensory (vs. unisensory) processing varies as a function of stimulus–observer distance. Results demonstrate a significant increase of global field power (i.e., the overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary, an increase that is largest under multisensory (i.e., audiovisual) conditions.
Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when the neural dynamics are examined in more detail, changes in the reliability of evoked potentials at centroparietal electrodes predict, on a subject-by-subject basis, the later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory stimuli around the PPS boundary.
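Global field power has a standard definition (Lehmann & Skrandies): the spatial standard deviation of the voltage across all electrodes at a given time point. A minimal sketch, with made-up voltages on a toy montage (real studies such as this one use high-density recordings):

```python
import math

def global_field_power(sample):
    """GFP for one time sample: the spatial standard deviation of the
    voltage across electrodes. `sample` holds one voltage per electrode,
    in microvolts."""
    k = len(sample)
    mean_v = sum(sample) / k
    return math.sqrt(sum((v - mean_v) ** 2 for v in sample) / k)

# Illustrative voltages on a toy 6-electrode montage at one latency.
av_boundary = [4.0, -3.5, 5.2, -4.8, 3.9, -4.1]  # AV stimulus at the PPS boundary
av_far = [1.2, -0.9, 1.5, -1.3, 1.0, -1.1]       # AV stimulus far from the body

gfp_boundary = global_field_power(av_boundary)
gfp_far = global_field_power(av_far)
```

Because GFP is reference-free (it depends only on deviations from the montage mean), it provides the single overall response-strength measure per time point that the abstract compares across distances and modality conditions.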


2012 ◽  
Vol 25 (0) ◽  
pp. 83
Author(s):  
Miketa Arvaniti ◽  
Noam Sagiv ◽  
Lucille Lecoutre ◽  
Argiro Vatakis

Our research project aimed at investigating multisensory temporal integration in synesthesia and exploring whether there are commonalities in the sensory experiences of synesthetes and non-synesthetes. Specifically, we investigated whether synesthetes are better integrators than non-synesthetes by examining the strength of multisensory binding (i.e., the unity effect) using an unspeeded temporal order judgment task. We used audiovisual stimuli based on grapheme-colour synesthetic associations (Experiment 1) and on crossmodal correspondences (e.g., high pitch with light colours; Experiment 2) presented at various stimulus onset asynchronies (SOAs) using the method of constant stimuli. Presenting these stimuli in congruent and incongruent formats allowed us to examine whether congruent stimuli lead to a stronger unity effect than incongruent ones in synesthetes and non-synesthetes and, thus, whether synesthetes show enhanced multisensory integration relative to non-synesthetes. Preliminary data support the hypothesis that congruent crossmodal correspondences lead to a stronger unity effect than incongruent ones in both groups, with this effect being stronger in synesthetes than in non-synesthetes. We also found that synesthetes show a stronger unity effect for idiosyncratically congruent grapheme-colour associations than for incongruent ones, as compared to non-synesthetes trained on particular grapheme-colour associations. Currently, we are investigating (Experiment 3) whether trained non-synesthetes exhibit enhanced integration when presented with synesthetic associations that occur frequently among synesthetes. With this design, we will provide psychophysical evidence concerning multisensory integration in synesthesia and the possible common processing mechanisms in synesthetes and non-synesthetes.
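In an unspeeded TOJ task, the unity effect is typically quantified by fitting a cumulative Gaussian to the proportion of "vision first" responses across SOAs; a shallower slope (larger just-noticeable difference, JND) indicates stronger binding. A rough stdlib-only sketch with a grid-search fit; the SOAs, response proportions, and the 0.675-sigma JND convention are illustrative assumptions, not the authors' data or method:

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def fit_psychometric(soas, p_vision_first):
    """Least-squares grid search for the PSS (mu) and slope (sigma).
    The JND is taken as 0.675 * sigma (half the 25-75% range)."""
    best = None
    for mu in range(-100, 101, 5):       # candidate PSS values, in ms
        for sigma in range(10, 301, 5):  # candidate sigma values, in ms
            err = sum((cum_gauss(s, mu, sigma) - p) ** 2
                      for s, p in zip(soas, p_vision_first))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    return mu, 0.675 * sigma             # PSS, JND

# Hypothetical data (negative SOA = sound first), illustrative only.
soas = [-200, -100, -50, 0, 50, 100, 200]
congruent   = [0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95]  # shallower slope
incongruent = [0.02, 0.10, 0.25, 0.50, 0.75, 0.90, 0.98]  # steeper slope

pss_c, jnd_c = fit_psychometric(soas, congruent)
pss_i, jnd_i = fit_psychometric(soas, incongruent)
# A larger JND for congruent pairs is the signature of the unity effect.
```

On these toy proportions the congruent fit yields the larger JND, i.e., poorer temporal discrimination when the pair is perceived as a single event, which is the pattern the abstract predicts for both groups and especially for synesthetes.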


2021 ◽  
pp. 030573562097869
Author(s):  
Alice Mado Proverbio ◽  
Francesca Russo

We investigated, through electrophysiological recordings, how music-induced emotions are recognized and combined with the emotional content of written sentences. Twenty-four sad, joyful, and frightening musical tracks were presented to 16 participants reading 270 short sentences conveying a sad, joyful, or frightening emotional meaning. The audiovisual stimuli could be emotionally congruent or incongruent with each other; participants were asked to attend and respond to filler sentences containing city names while ignoring the rest. The amplitude values of event-related potentials (ERPs) were subjected to repeated-measures ANOVAs. Distinct electrophysiological markers were identified for the processing of stimuli inducing fear (N450, whether linguistic or musical), for language-induced sadness (P300), and for joyful music (positive P2 and LP potentials). The music/language emotional discordance elicited a large N400 mismatch response (p = .032). Its strongest intracranial source was the right superior temporal gyrus (STG), a hub for the multisensory integration of emotions. The results suggest that music can communicate emotional meaning as distinctively as language.


Neuroreport ◽  
2012 ◽  
Vol 23 (10) ◽  
pp. 616-620 ◽  
Author(s):  
Jinglong Wu ◽  
Weiping Yang ◽  
Yulin Gao ◽  
Takahiro Kimura

2020 ◽  
Vol 82 (7) ◽  
pp. 3490-3506
Author(s):  
Jonathan Tong ◽  
Lux Li ◽  
Patrick Bruns ◽  
Brigitte Röder

According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
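The causal-prior account invoked in this abstract can be illustrated with a simplified Bayesian causal-inference model (in the spirit of Körding et al., 2007), in which a stronger prior p(C = 1) yields a larger ventriloquist shift for the same sensory evidence. All parameter values, and the constant stand-in for the independent-causes likelihood, are illustrative assumptions rather than the authors' model:

```python
import math

def ventriloquism_shift(x_a, x_v, sigma_a, sigma_v, p_common,
                        baseline=1e-3):
    """Shift of the auditory estimate toward the visual stimulus under a
    simplified causal-inference model with model averaging. `p_common`
    is the causal prior p(C=1); `baseline` stands in for the (constant)
    likelihood of independent causes under a flat spatial prior -- a
    deliberate simplification of the full model."""
    # Common-cause estimate: reliability-weighted fusion.
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)
    s_fused = w_v * x_v + (1 - w_v) * x_a
    # Likelihood of the observed audiovisual discrepancy given C = 1.
    var_d = sigma_a**2 + sigma_v**2
    like_c1 = (math.exp(-(x_a - x_v) ** 2 / (2 * var_d))
               / math.sqrt(2 * math.pi * var_d))
    # Posterior probability of a common cause.
    post_c1 = (p_common * like_c1) / (p_common * like_c1
                                      + (1 - p_common) * baseline)
    # Model averaging: blend the fused and unisensory auditory estimates.
    s_hat_a = post_c1 * s_fused + (1 - post_c1) * x_a
    return s_hat_a - x_a  # shift toward the visual stimulus (the VE)

# Same stimuli and reliabilities; only the causal prior differs,
# mimicking congruent vs. incongruent association-phase exposure.
ve_strong = ventriloquism_shift(0.0, 10.0, 5.0, 2.0, p_common=0.8)
ve_weak = ventriloquism_shift(0.0, 10.0, 5.0, 2.0, p_common=0.2)
```

Since the sensory terms are held fixed, any difference between `ve_strong` and `ve_weak` is attributable entirely to the causal prior, which is the logic of keeping sensory evidence constant while manipulating the prior through association learning.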

