Moods alter audiovisual integration

2012 ◽  
Vol 25 (0) ◽  
pp. 130
Author(s):  
Miho Kitamura ◽  
Katsumi Watanabe ◽  
Norimichi Kitagawa

Multisensory integration depends on the temporal proximity of events in different modalities. Recent studies have shown that multisensory temporal binding may be related to individual traits (Foss-Feig et al., 2010; Stevenson et al., 2012). Here we show that positive moods in observers enhance the temporal binding of audiovisual multisensory integration. Twenty-five healthy participants observed two identical visual disks moving toward each other, coinciding, and moving apart. The two disks were perceived as either streaming through or bouncing off each other (stream/bounce display), and a brief sound presented around the moment of visual coincidence facilitates the bouncing percept (Sekuler et al., 1997; Watanabe and Shimojo, 2001). We asked the participants to report whether the two disks appeared to stream through or bounce off each other while listening to either exhilarating music of their own choice or neutral pink noise. The results showed that participants listening to exhilarating music reported the bouncing percept more frequently. The proportion of bouncing percepts was correlated with the valence rating rather than the arousal rating obtained during the experiment. These results suggest that positive moods enhance the temporal binding process in audiovisual integration.
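
As a minimal sketch of the analysis described above, the snippet below computes each participant's proportion of "bounce" responses per listening condition and correlates it with the mood ratings. The data file and column names are hypothetical placeholders, not the authors' materials.

```python
# Hypothetical per-trial data: participant, condition ("music"/"noise"),
# response ("bounce"/"stream"), valence, arousal.
import pandas as pd
from scipy.stats import pearsonr

trials = pd.read_csv("stream_bounce_trials.csv")  # placeholder file name

per_subject = (
    trials.assign(bounce=trials["response"].eq("bounce"))
          .groupby(["participant", "condition"])
          .agg(p_bounce=("bounce", "mean"),
               valence=("valence", "mean"),
               arousal=("arousal", "mean"))
          .reset_index()
)

# Correlate the bounce proportion with mood ratings in the music condition
music = per_subject[per_subject["condition"] == "music"]
r_val, p_val = pearsonr(music["p_bounce"], music["valence"])
r_aro, p_aro = pearsonr(music["p_bounce"], music["arousal"])
print(f"valence: r={r_val:.2f} (p={p_val:.3f}); arousal: r={r_aro:.2f} (p={p_aro:.3f})")
```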

Perception ◽  
2016 ◽  
Vol 46 (1) ◽  
pp. 6-17 ◽  
Author(s):  
N. Van der Stoep ◽  
S. Van der Stigchel ◽  
T. C. W. Nijboer ◽  
C. Spence

Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue-target intervals (200–250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs, also known as Inhibition of Return (IOR). Given these opposing cueing effects at shorter versus longer intervals, we investigated whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid-cue RT < valid-cue RT). For auditory and audiovisual targets, neither IOR nor any other spatial cueing effect was observed. Both the relative multisensory response enhancement and the race-model inequality violation were larger for uncued than for cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
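
The race-model inequality mentioned above (Miller's bound) states that the cumulative RT distribution for audiovisual targets should not exceed the sum of the unisensory distributions unless the signals are integrated. A rough sketch of that test, with made-up reaction times, is shown below; it is illustrative, not the authors' analysis code.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_violation(rt_av, rt_a, rt_v, t_grid):
    """CDF(AV) minus the race-model bound; positive values violate the bound."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Hypothetical reaction times (ms) for one participant at one cue condition
t = np.arange(150, 601, 10)
viol = race_violation(rt_av=[260, 275, 290, 310],
                      rt_a=[320, 340, 360],
                      rt_v=[330, 345, 365],
                      t_grid=t)
print("max violation:", viol.max())  # compare this value for cued vs. uncued locations
```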


2018 ◽  
Vol 31 (6) ◽  
pp. 523-536 ◽  
Author(s):  
Ayako Yaguchi ◽  
Souta Hidaka

Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by deficits in social communication and interaction, and by restricted interests and behavior patterns. These characteristics are considered to be continuously distributed in the general population. People with ASD show atypical temporal processing in multisensory integration. In the flash–beep illusion, a single flash can be illusorily perceived as multiple flashes when multiple auditory beeps are presented concurrently. Some studies have reported that people with ASD have a wider temporal binding window and greater integration than typically developed people; others have found the opposite or inconsistent tendencies. Here, we investigated the relationships between the flash–beep illusion and various dimensions of ASD traits by estimating the degree of typically developed participants' ASD traits on the five subscales of the Autism-Spectrum Quotient. We found that stronger ASD traits on the communication and social skill subscales were associated with a wider and a narrower temporal binding window, respectively. These results suggest that specific ASD traits are differently involved in particular temporal binding processes of audiovisual integration.


2012 ◽  
Vol 25 (0) ◽  
pp. 154
Author(s):  
Luis Morís Fernández ◽  
Maya Visser ◽  
Salvador Soto-Faraco

We assessed the role of audiovisual integration in selective attention by testing selective attention to sound. Participants were asked to focus on one of two auditory speech streams presented simultaneously at different pitches. We measured recall of words from the cued or the uncued sentence using a two-alternative forced choice (2AFC) at the end of each trial. A video clip of a speaker's mouth was presented in the middle of the display, matching one of the two simultaneous auditory streams (it matched the cued sentence on 50% of trials and the uncued sentence on the remainder). In Experiment 1 the cue was 75% valid. Recall in valid trials was better than in invalid trials. The critical result, however, was that differences between audiovisually matching and audiovisually mismatching sentences were found only in the valid condition; in the invalid condition these differences were absent. In Experiment 2 the cue to the relevant sentence was 100% valid, and we included a control condition in which the lips did not match either of the sentences. When the lips matched the cued sentence, performance was better than when they matched the uncued sentence or neither of them, suggesting a benefit of audiovisual matching rather than a cost of mismatch. Our results indicate that attention to acoustic frequency (pitch) plays an important role in determining which sounds benefit from multisensory integration.


2019 ◽  
Vol 19 (10) ◽  
pp. 19
Author(s):  
Leslie D Kwakye ◽  
Victoria Fisher ◽  
Margaret Jackson ◽  
Oona Jung-Beeman

2018 ◽  
Vol 22 (04) ◽  
pp. 752-762 ◽  
Author(s):  
GAVIN M. BIDELMAN ◽  
SHELLEY T. HEATH

We asked whether the benefits of bilingualism reach beyond the auditory modality to multisensory processing. We measured audiovisual integration of auditory and visual cues in monolinguals and bilinguals via the double-flash illusion, in which multiple auditory stimuli presented concurrently with a single visual flash induce the illusory perception of multiple flashes. We varied the stimulus onset asynchrony (SOA) between auditory and visual cues to measure the "temporal binding window" within which listeners fuse the cues into a single percept. Bilinguals showed faster responses and were less susceptible to the double-flash illusion than monolinguals. Moreover, monolinguals showed poorer sensitivity in AV processing than bilinguals. Bilinguals' AV temporal integration window was narrower than monolinguals' for both leading and lagging SOAs (bilinguals: −65 to 112 ms; monolinguals: −193 to 112 ms). Our results suggest that the plasticity afforded by speaking multiple languages enhances multisensory integration and audiovisual binding in the bilingual brain.
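
One common way to quantify a temporal binding window from illusion rates across SOAs is to fit a symmetric curve and take its width, as sketched below. The SOAs, illusion rates, and the Gaussian/full-width-at-half-maximum choice are illustrative assumptions, not necessarily the procedure used in the study above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented illusion rates for a range of audiovisual SOAs (ms; audio-leading < 0)
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])
p_illusion = np.array([0.10, 0.25, 0.55, 0.75, 0.85, 0.70, 0.50, 0.20, 0.08])

def gauss(x, amp, mu, sigma, base):
    """Gaussian tuning of illusion susceptibility over SOA."""
    return base + amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

(amp, mu, sigma, base), _ = curve_fit(gauss, soa, p_illusion,
                                      p0=[0.8, 0.0, 100.0, 0.1])
fwhm = 2.355 * abs(sigma)  # full width at half maximum as a window estimate
print(f"centre = {mu:.0f} ms, window width (FWHM) = {fwhm:.0f} ms")
```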


Author(s):  
Jingjing Yang ◽  
Qi Li ◽  
Yulin Gao ◽  
Jinglong Wu

In everyday life, our brains integrate various kinds of information from different modalities to perceive our complex environment. Spatial and temporal proximity of multisensory stimuli is required for multisensory integration. Many studies have shown that temporal asynchrony between visual and auditory stimuli can influence multisensory integration. However, the neural mechanisms underlying the processing of asynchronous inputs are not well understood. Some researchers believe that humans have a relatively broad time window within which stimuli from different modalities, even when asynchronous, tend to be integrated into a single unified percept. Others believe that the human brain actively coordinates auditory and visual inputs so that we do not notice their asynchrony. This review focuses on how temporal factors affect the processing of audiovisual information.
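
As a toy illustration of the "broad time window" account discussed above, the snippet below treats two inputs as integrated whenever their onset asynchrony falls inside a fixed window; the 200 ms width is an arbitrary placeholder rather than a value drawn from the reviewed literature.

```python
def integrated(audio_onset_ms: float, visual_onset_ms: float,
               window_ms: float = 200.0) -> bool:
    """Return True if the two onsets fall within the (placeholder) binding window."""
    return abs(audio_onset_ms - visual_onset_ms) <= window_ms / 2

print(integrated(0.0, 80.0))   # True: within the window, treated as one event
print(integrated(0.0, 180.0))  # False: outside the window, treated as separate events
```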


2020 ◽  
Vol 33 (7) ◽  
pp. 777-791
Author(s):  
Sofia Tagini ◽  
Federica Scarpina ◽  
Massimo Scacchi ◽  
Alessandro Mauro ◽  
Massimiliano Zampini

Preliminary evidence showed reduced temporal sensitivity (i.e., a larger temporal binding window) to audiovisual asynchrony in obesity. Our aim was to extend this investigation to visuotactile stimuli, comparing individuals of healthy weight and individuals with obesity in a simultaneity judgment task. We found that individuals with obesity had a larger temporal binding window than healthy-weight individuals, meaning that they tend to integrate visuotactile stimuli over an extended range of stimulus onset asynchronies. This finding provides evidence for a more pervasive impairment of the temporal discrimination of co-occurring stimuli, which might affect multisensory integration in obesity. We discuss our results with reference to the possible role of atypical oscillatory neural activity and structural anomalies in affecting the perception of simultaneity between multisensory stimuli in obesity. Finally, we highlight the urgency of a deeper understanding of multisensory integration in obesity, for at least two reasons. First, multisensory bodily illusions might be used to manipulate body dissatisfaction in obesity. Second, anomalies of multisensory integration in obesity might lead to an altered perception of food, encouraging overeating behaviours.
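
A minimal sketch of the group comparison implied above: each participant's temporal binding window width (e.g., taken from a fit to their simultaneity-judgment curve) is compared between groups. The values below are invented placeholders, and the independent-samples t-test is only one plausible choice of test.

```python
from scipy.stats import ttest_ind

# Hypothetical per-participant window widths (ms)
tbw_healthy_weight = [180, 210, 165, 190, 205, 175]
tbw_obesity = [260, 300, 245, 280, 310, 255]

t_stat, p_value = ttest_ind(tbw_obesity, tbw_healthy_weight)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a larger window is expected in obesity
```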


2020 ◽  
Vol 82 (7) ◽  
pp. 3490-3506
Author(s):  
Jonathan Tong ◽  
Lux Li ◽  
Patrick Bruns ◽  
Brigitte Röder

According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
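
A simplified Bayesian causal-inference sketch of the idea tested above: a causal prior p_common controls how strongly a spatially discrepant visual cue pulls the auditory location estimate (the ventriloquism effect). It assumes a flat spatial prior and uses illustrative numbers; this is one standard formalisation, not the authors' exact model.

```python
import numpy as np
from scipy.stats import norm

def auditory_estimate(x_a, x_v, sigma_a, sigma_v, p_common, response_range=60.0):
    # Likelihood of the AV discrepancy under a common cause (flat spatial prior)
    like_c1 = norm.pdf(x_a - x_v, loc=0.0, scale=np.hypot(sigma_a, sigma_v))
    # Under independent causes, any discrepancy within the range is equally likely
    like_c2 = 1.0 / response_range
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted fusion if there is one cause, else the auditory cue alone
    fused = (x_a / sigma_a**2 + x_v / sigma_v**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    return post_c1 * fused + (1 - post_c1) * x_a  # model-averaged estimate

# A stronger causal prior (as after congruent association) predicts a larger VE
for p_common in (0.3, 0.8):
    est = auditory_estimate(x_a=0.0, x_v=10.0, sigma_a=6.0, sigma_v=2.0,
                            p_common=p_common)
    print(f"p_common={p_common}: auditory estimate shifted {est:.1f} deg toward vision")
```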

