On the relative contributions of multisensory integration and crossmodal exogenous spatial attention to multisensory response enhancement

2015 ◽ Vol 162 ◽ pp. 20-28
Author(s): N. Van der Stoep, C. Spence, T.C.W. Nijboer, S. Van der Stigchel

Perception ◽ 2016 ◽ Vol 46 (1) ◽ pp. 6-17
Author(s): N. Van der Stoep, S. Van der Stigchel, T. C. W. Nijboer, C. Spence

Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue-target intervals (200–250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs, also known as Inhibition of Return (IOR). Given these opposing cueing effects at shorter versus longer intervals, we investigated whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid cue RT < valid cue RT). For auditory and audiovisual targets, neither IOR nor any other spatial cueing effects were observed. The amount of relative multisensory response enhancement and race model inequality violation was larger for uncued as compared with cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
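
For readers unfamiliar with the two measures reported in this abstract, the sketch below shows how relative multisensory response enhancement and violations of Miller's (1982) race model inequality are commonly computed from per-trial RTs. This is a generic illustration with synthetic data, not the authors' analysis code; function names and parameter values are placeholders.

```python
import numpy as np

def relative_mre(rt_a, rt_v, rt_av):
    """Relative multisensory response enhancement (%): speedup of the
    mean audiovisual RT relative to the fastest unisensory mean RT."""
    fastest_uni = min(np.mean(rt_a), np.mean(rt_v))
    return 100.0 * (fastest_uni - np.mean(rt_av)) / fastest_uni

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's (1982) race model inequality,
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t):
    positive return values mark time points where the inequality is
    violated, the usual behavioral evidence for integration."""
    ecdf = lambda rts: np.mean(np.asarray(rts)[:, None] <= t_grid, axis=0)
    bound = np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)
    return ecdf(rt_av) - bound

# Synthetic RTs in ms, for illustration only.
rng = np.random.default_rng(0)
rt_a, rt_v = rng.normal(320, 40, 200), rng.normal(300, 40, 200)
rt_av = rng.normal(265, 35, 200)
print(relative_mre(rt_a, rt_v, rt_av))                     # % enhancement
print(race_model_violation(rt_a, rt_v, rt_av,
                           np.arange(150, 500, 10)).max()) # positive anywhere => violation
```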


2021 ◽
Author(s): Daniel Jenkins

Multisensory integration describes the cognitive processes by which information from various perceptual domains is combined to create coherent percepts. For consciously aware perception, multisensory integration can be inferred when information in one perceptual domain influences subjective experience in another. Yet the relationship between integration and awareness is not well understood. One current question is whether multisensory integration can occur in the absence of perceptual awareness. Because there is no subjective experience for unconscious perception, researchers have had to develop novel tasks to infer integration indirectly. For instance, Palmer and Ramsey (2012) presented auditory recordings of spoken syllables alongside videos of faces speaking either the same or different syllables, while masking the videos to prevent visual awareness. The conjunction of matching voices and faces predicted the location of a subsequent Gabor grating (target) on each trial. Participants indicated the location/orientation of the target more accurately when it appeared at the cued location (as it did on 80% of trials); the authors thus inferred that auditory and visual speech events were integrated in the absence of visual awareness. In this thesis, I investigated whether these findings generalise to the integration of auditory and visual expressions of emotion. In Experiment 1, I presented spatially informative cues in which congruent facial and vocal emotional expressions predicted the target location, with and without visual masking. I found no evidence of spatial cueing in either awareness condition. To investigate the lack of spatial cueing, in Experiment 2, I repeated the task with aware participants only, and had half of those participants explicitly report the emotional prosody. A significant spatial-cueing effect was found only when participants reported emotional prosody, suggesting that audiovisual congruence can cue spatial attention during aware perception. It remains unclear whether audiovisual congruence can cue spatial attention without awareness, and whether such effects genuinely imply multisensory integration.
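
The cueing effect this design relies on reduces to a simple contrast; a minimal sketch, assuming a per-trial record of response accuracy and cue validity (both variable names hypothetical):

```python
import numpy as np

def spatial_cueing_effect(correct, cue_valid):
    """Accuracy-based spatial-cueing effect: proportion correct when the
    target appeared at the cued location minus proportion correct when
    it appeared elsewhere. Positive values indicate cueing."""
    correct = np.asarray(correct, dtype=bool)
    cue_valid = np.asarray(cue_valid, dtype=bool)
    return correct[cue_valid].mean() - correct[~cue_valid].mean()

# Illustrative trial records (mostly validly cued, as in an
# 80%-predictive design).
print(spatial_cueing_effect(correct=[1, 1, 0, 1, 0, 1, 1, 0],
                            cue_valid=[1, 1, 1, 1, 0, 1, 1, 0]))
```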


2019 ◽ Vol 31 (5) ◽ pp. 699-710
Author(s): Adele Diederich, Hans Colonius

Although it is well documented that the occurrence of an irrelevant and nonpredictive sound facilitates motor responses to a subsequent target light appearing nearby, the cause of this "exogenous spatial cuing effect" has been under discussion. On the one hand, it has been postulated to be the result of a shift of visual spatial attention, possibly triggered by parietal and/or cortical supramodal "attention" structures. On the other hand, the effect has been considered to be due to multisensory integration, based on the activation of multisensory convergence structures in the brain. Recent RT experiments have suggested that multisensory integration and exogenous spatial cuing differ in their temporal profiles of facilitation: When the nontarget occurs 100–200 msec before the target, facilitation is likely driven by crossmodal exogenous spatial attention, whereas multisensory integration effects are still seen when target and nontarget are presented nearly simultaneously. Here, we develop an extension of the time-window-of-integration model that combines both mechanisms within the same formal framework. The model is illustrated by fitting it to data from a focused attention task with a visual target and an auditory nontarget presented at horizontally or vertically varying positions. Results show that both spatial cuing and multisensory integration may coexist in a single trial in bringing about crossmodal facilitation of RT. Moreover, the formal analysis via the time window of integration allows one to predict and quantify the contribution of either mechanism as they occur across different spatiotemporal conditions.
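
The first ("peripheral") stage of the original time-window-of-integration model lends itself to a short Monte Carlo sketch; the extended model described in this abstract adds an attention component on top of this race, which is not reproduced below. All parameter values are illustrative, and base_rt simply absorbs the mean first- and second-stage processing times.

```python
import numpy as np

rng = np.random.default_rng(1)

def twin_mean_rt(lam_v, lam_a, soa, omega, base_rt, delta, n=100_000):
    """Monte Carlo sketch of the TWIN model's first-stage race for a
    focused attention task with a visual target and auditory nontarget.
    Integration occurs on trials where the nontarget's peripheral
    process wins the race and the target's terminates within the time
    window omega afterwards; each such trial earns an RT saving delta."""
    v = rng.exponential(1.0 / lam_v, n)        # target peripheral time (ms)
    a = soa + rng.exponential(1.0 / lam_a, n)  # nontarget time, shifted by SOA
    p_integrate = np.mean((a < v) & (v - a <= omega))
    return base_rt - p_integrate * delta, p_integrate

# Illustrative parameters; soa = -50 means the nontarget leads by 50 ms.
mean_rt, p_i = twin_mean_rt(lam_v=1/60, lam_a=1/40, soa=-50,
                            omega=200, base_rt=350.0, delta=40.0)
print(f"P(integration) = {p_i:.2f}, predicted mean RT = {mean_rt:.0f} ms")
```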


Perception ◽ 1997 ◽ Vol 26 (1_suppl) ◽ pp. 35-35
Author(s): M T Wallace

Multisensory integration in the superior colliculus (SC) of the cat requires a protracted postnatal developmental time course. Kittens 3–135 days postnatal (dpn) were examined, and the first neuron capable of responding to two different sensory inputs (auditory and somatosensory) was not seen until 12 dpn. Visually responsive multisensory neurons were not encountered until 20 dpn. These early multisensory neurons responded weakly to sensory stimuli, had long response latencies, large receptive fields, and poorly developed response selectivities. Most striking, however, was their inability to integrate cross-modality cues in order to produce the significant response enhancement or depression characteristic of these neurons in adults. The incidence of multisensory neurons increased gradually over the next 10–12 weeks. During this period, sensory responses became more robust, latencies shortened, receptive fields decreased in size, and unimodal selectivities matured. The first neurons capable of cross-modality integration were seen at 28 dpn. For the following two months, the incidence of such integrative neurons rose gradually until adult-like values were achieved. Surprisingly, however, as soon as a multisensory neuron exhibited this capacity, most of its integrative features were indistinguishable from those in adults. Given what is known about the requirements for multisensory integration in adult animals, this observation suggests that the appearance of multisensory integration reflects the onset of functional corticotectal inputs.


1996 ◽ Vol 76 (2) ◽ pp. 1246-1266
Author(s): M. T. Wallace, L. K. Wilkinson, B. E. Stein

1. The properties of visual-, auditory-, and somatosensory-responsive neurons, as well as of neurons responsive to multiple sensory cues (i.e., multisensory), were examined in the superior colliculus of the rhesus monkey. Although superficial layer neurons responded exclusively to visual stimuli and visual inputs predominated in deeper layers, there was also a rich nonvisual and multisensory representation in the superior colliculus. More than a quarter (27.8%) of the deep layer population responded to stimuli from more than a single sensory modality. In contrast, 37% responded only to visual cues, 17.6% to auditory cues, and 17.6% to somatosensory cues. Unimodal- and multisensory-responsive neurons were clustered by modality. Each of these modalities was represented in map-like fashion, and the different representations were in alignment with one another. 2. Most deep layer visually responsive neurons were binocular and exhibited poor selectivity for such stimulus characteristics as orientation, velocity, and direction of movement. Similarly, most auditory-responsive neurons had contralateral receptive fields and were binaural, but had little frequency selectivity and preferred complex, broad-band sounds. Somatosensory-responsive neurons were overwhelmingly contralateral, high velocity, and rapidly adapting. Only rarely did somatosensory-responsive neurons require distortion of subcutaneous tissue for activation. 3. The spatial congruence among the different receptive fields of multisensory neurons was a critical feature underlying their ability to synthesize cross-modal information. 4. Combinations of stimuli could have very different consequences in the same neuron, depending on their temporal and spatial relationships. Generally, multisensory interactions were evident when pairs of stimuli were separated from one another by < 500 ms, and the products of these interactions far exceeded the sum of their unimodal components. Whether the combination of stimuli produced response enhancement, response depression, or no interaction depended on the location of the stimuli relative to one another and to their respective receptive fields. Maximal response enhancements were observed when stimuli originated from similar locations in space (as when derived from the same event) because they fell within the excitatory receptive fields of the same multisensory neurons. If, however, the stimuli were spatially disparate such that one fell beyond the excitatory borders of its receptive field, either no interaction was produced or this stimulus depressed the effectiveness of the other. Furthermore, maximal response interactions were seen with the pairing of weakly effective unimodal stimuli. As the individual unimodal stimuli became increasingly effective, the levels of response enhancement to stimulus combinations declined, a principle referred to as inverse effectiveness. Many of the integrative principles seen here in the primate superior colliculus are strikingly similar to those observed in the cat. These observations indicate that a set of common principles of multisensory integration is adaptable in widely divergent species living in very different ecological situations. 5. Surprisingly, a few multisensory neurons had individual receptive fields that were not in register with one another. This has not been noted in multisensory neurons of other species, and these "anomalous" receptive fields could present a daunting problem: stimuli originating from the same general location in space cannot simultaneously fall within their respective receptive fields, a stimulus pairing that may result in response depression. Conversely, stimuli that originate from separate events and disparate locations (and fall within their receptive fields) may result in response enhancement. However, the spatial principle of multisensory integration did not apply in these cases. (ABSTRACT TRUNCATED)
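
The response enhancement and inverse effectiveness described in point 4 are conventionally quantified with a multisensory enhancement index in the style of Meredith and Stein; a minimal sketch with illustrative response rates, not data from this study:

```python
def enhancement_index(resp_multi, resp_a, resp_b):
    """Multisensory enhancement index: percent change of the response to
    combined stimuli relative to the most effective single-modality
    response (responses as mean evoked impulses per trial)."""
    best_unimodal = max(resp_a, resp_b)
    return 100.0 * (resp_multi - best_unimodal) / best_unimodal

# Inverse effectiveness, illustratively: weakly effective stimulus pairs
# yield proportionately larger enhancement than strongly effective ones.
print(enhancement_index(resp_multi=6.0, resp_a=2.0, resp_b=1.5))    # +200%
print(enhancement_index(resp_multi=14.0, resp_a=12.0, resp_b=10.0)) # ~+17%
```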


2013 ◽ Vol 26 (1-2) ◽ pp. 209
Author(s): Jan Bennemann, Claudia Freigang, Marc Stöhr, Rudolf Rübsamen


2018 ◽ Vol 41
Author(s): Jean-Paul Noel

Within a multisensory context, "optimality" has been used as a benchmark evidencing interdependent sensory channels. However, "optimality" does not truly bifurcate a spectrum from suboptimal to supra-optimal – where optimal and supra-optimal, but not suboptimal, indicate integration – as supra-optimality may result from the suboptimal integration of a present unisensory stimulus and an absent one (audio = audio + absence of vision).
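
The "optimality" benchmark at issue here is usually the maximum-likelihood prediction for cue combination, under which the integrated estimate's variance should fall at or below a bound set by the unisensory variances; a minimal sketch with illustrative values:

```python
def mle_optimal_variance(var_a, var_v):
    """Maximum-likelihood ('optimal') cue-combination benchmark: the
    predicted variance of the integrated bimodal estimate given the two
    unisensory variances. Observed variance at this bound is deemed
    'optimal'; below it, 'supra-optimal'; above it, 'suboptimal'."""
    return (var_a * var_v) / (var_a + var_v)

# Illustrative unisensory variances: the bound always lies below the
# better (smaller-variance) single cue.
print(mle_optimal_variance(4.0, 9.0))  # ~2.77 < min(4.0, 9.0)
```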

