Visually Induced Inhibition of Return Affects the Integration of Auditory and Visual Information

Perception · 2016 · Vol 46 (1) · pp. 6–17
Author(s): N. Van der Stoep, S. Van der Stigchel, T. C. W. Nijboer, C. Spence

Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue-target intervals (200–250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs (also known as Inhibition of Return, IOR). Given these opposing cueing effects at shorter versus longer intervals, we investigated whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid cue RT < valid cue RT). For auditory and audiovisual targets, neither IOR nor any spatial cueing effects were observed. The amount of relative multisensory response enhancement and race model inequality violation was larger for uncued than for cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
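
The race model inequality invoked here (Miller, 1982) bounds the bimodal RT distribution by the sum of the unisensory ones: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t). A minimal Python sketch of the test on made-up RT samples (the function name and data are hypothetical, not from the paper):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, probs=np.arange(0.05, 1.0, 0.05)):
    """Test Miller's (1982) race model inequality:
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    Returns the difference at each quantile; positive values
    indicate violations, i.e. gains beyond statistical facilitation."""
    t = np.quantile(rt_av, probs)  # evaluate CDFs at AV quantile latencies

    def cdf(rts, t):
        return np.mean(rts[:, None] <= t[None, :], axis=0)

    bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)  # race model bound
    return cdf(rt_av, t) - bound

# Hypothetical RTs (ms): bimodal responses faster than either unisensory one.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(400, 60, 200)
rt_av = rng.normal(340, 50, 200)
print(np.round(race_model_violation(rt_a, rt_v, rt_av), 3))
```

Violations, where they occur, typically show up in the fastest quantiles, which is where multisensory integration is expected to be visible.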

2019 · Vol 31 (5) · pp. 699–710
Author(s): Adele Diederich, Hans Colonius

Although it is well documented that occurrence of an irrelevant and nonpredictive sound facilitates motor responses to a subsequent target light appearing nearby, the cause of this “exogenous spatial cuing effect” has been under discussion. On the one hand, it has been postulated to be the result of a shift of visual spatial attention possibly triggered by parietal and/or cortical supramodal “attention” structures. On the other hand, the effect has been considered to be due to multisensory integration based on the activation of multisensory convergence structures in the brain. Recent RT experiments have suggested that multisensory integration and exogenous spatial cuing differ in their temporal profiles of facilitation: When the nontarget occurs 100–200 msec before the target, facilitation is likely driven by crossmodal exogenous spatial attention, whereas multisensory integration effects are still seen when target and nontarget are presented nearly simultaneously. Here, we develop an extension of the time-window-of-integration model that combines both mechanisms within the same formal framework. The model is illustrated by fitting it to data from a focused attention task with a visual target and an auditory nontarget presented at horizontally or vertically varying positions. Results show that both spatial cuing and multisensory integration may coexist in a single trial in bringing about the crossmodal facilitation of RT effects. Moreover, the formal analysis via time window of integration allows one to predict and quantify the contribution of each mechanism as it occurs across different spatiotemporal conditions.
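
In the time-window-of-integration (TWIN) framework, the pieces above combine as E[RT] = E[first stage] + mu2 − P(I)·Delta, where I is the event that the nontarget wins the peripheral race and the target terminates within the window omega. A Monte Carlo sketch under illustrative parameters (all values assumed, not the authors' fits):

```python
import numpy as np

rng = np.random.default_rng(1)

def twin_sim(soa, lam_v=1/50, lam_a=1/30, omega=200, delta=50,
             mu2=250, n=100_000):
    """Monte Carlo sketch of the TWIN model (focused attention version).
    Visual target peripheral time V ~ Exp(lam_v); auditory nontarget
    peripheral time A ~ Exp(lam_a), shifted by the SOA. Integration
    occurs iff the nontarget wins the race AND the target terminates
    within the window omega; it then shortens the second stage by delta.
    All parameter values here are illustrative, not fitted."""
    v = rng.exponential(1 / lam_v, n)          # target peripheral time
    a = rng.exponential(1 / lam_a, n) + soa    # nontarget, SOA-shifted
    integrate = (a < v) & (v <= a + omega)     # time-window criterion
    rt = v + mu2 - delta * integrate           # second stage, minus gain
    return rt.mean(), integrate.mean()         # mean RT, P(integration)

for soa in (-100, 0, 100):
    m, p = twin_sim(soa)
    print(f"SOA {soa:+4d} ms: mean RT = {m:6.1f} ms, P(I) = {p:.2f}")
```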


Perception · 1998 · Vol 27 (6) · pp. 737–754
Author(s): Stephen Lakatos, Lawrence E Marks

To what extent can individuals accurately estimate the angle between two surfaces through touch alone, and how does tactile judgment compare to visual judgment? Subjects' ability to estimate angle size for a variety of haptic and visual stimuli was examined in a series of nine experiments. Triangular wooden blocks and raised contour outlines comprising different angles and radii of curvature at the apex were used in experiments 1–4; subjects consistently underestimated angular extent relative to visual baselines, and the degree of underestimation was inversely related to the actual size of the angle. Angle estimates also increased with increasing radius of curvature when actual angle size was held constant. In contrast, experiments 5–8 showed that subjects did not underestimate angular extent when asked to perform a haptic–visual match to a computerized visual image; this outcome suggests that visual input may ‘recalibrate’ the haptic system's internal metric for estimating angle. The basis of this crossmodal interaction was investigated in experiment 9 by varying the nature and extent of visual cues available in haptic estimation tasks. The addition of visual-spatial cues did not significantly reduce the magnitude of haptic underestimation. The experiments as a whole indicate that haptic underestimations of angle occur in a number of different stimulus contexts, but leave open the question of exactly what type of visual information may serve to recalibrate touch in this regard.


2021
Author(s): Philip Coen, Timothy PH Sit, Miles J Wells, Matteo Carandini, Kenneth D Harris

To interpret the world and make accurate perceptual decisions, the brain must combine information across sensory modalities. For instance, it must combine vision and hearing to localize objects based on their image and sound. Probability theory suggests that evidence from multiple independent cues should be combined additively, but it is unclear whether mice and other mammals do this, and the cortical substrates of multisensory integration are uncertain. Here we show that to localize a stimulus, mice combine auditory and visual spatial cues additively, a computation supported by unisensory processing in auditory and visual cortex and additive multisensory integration in frontal cortex. We developed an audiovisual localization task where mice turn a wheel to indicate the joint position of an image and a sound. Scanning optogenetic inactivation of dorsal cortex showed that auditory and visual areas contribute unisensory information, whereas frontal cortex (secondary motor area, MOs) contributes multisensory information to the mouse's decision. Neuropixels recordings of >10,000 neurons in frontal cortex indicated that neural activity in MOs reflects an additive combination of visual and auditory signals. An accumulator model applied to the sensory representations of MOs neurons reproduced behaviourally observed choices and reaction times. This suggests that MOs integrates information from multiple sensory cortices, providing a signal that is then transformed into a binary decision by a downstream accumulator.
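
The final two sentences describe an additive-then-accumulate architecture. A hedged sketch of that computation, as a drift-diffusion accumulator whose drift adds the two sensory signals (weights, bound, and noise are invented for illustration, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(2)

def additive_accumulator(vis, aud, b_v=1.0, b_a=1.2, bound=1.0,
                         noise=1.0, dt=0.001, max_t=2.0):
    """One trial of a drift-diffusion accumulator whose drift is an
    additive combination of signed visual and auditory evidence
    (vis, aud in [-1, 1]; negative = left, positive = right).
    Returns (choice, decision time). Parameters are illustrative."""
    drift = b_v * vis + b_a * aud              # the additivity assumption
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("right" if x > 0 else "left"), t

# Hypothetical congruent trial: both cues point right.
print(additive_accumulator(vis=0.5, aud=0.5))
```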


2021 · Vol 2021 · pp. 1–10
Author(s): Yanna Ren, Yawei Hou, Jiayu Huang, Fanghong Li, Tao Wang, ...

The modulation of attentional load on the perception of auditory and visual information has been widely reported; however, whether attentional load alters audiovisual integration (AVI) has seldom been investigated. Here, to explore the effect of sustained auditory attentional load on AVI and the effects of aging, 19 older and 20 younger adults performed an AV discrimination task with a rapid serial auditory presentation task competing for attentional resources. The results showed that responses to audiovisual stimuli were significantly faster than those to auditory and visual stimuli (AV > V ≥ A, all p < 0.001), and the younger adults were significantly faster than the older adults under all attentional load conditions (all p < 0.001). The analysis of the race model showed that AVI was decreased and delayed with the addition of auditory sustained attention (no-load > load-1 > load-2 > load-3 > load-4) for both older and younger adults. In addition, AVI was lower and more delayed in older adults than in younger adults in all attentional load conditions. These results suggested that auditory sustained attentional load decreased AVI and that AVI was reduced in older adults.
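
Race-model analyses of this kind are usually reported alongside relative multisensory response enhancement (rMRE): the percentage by which the mean bimodal RT beats the faster of the two unisensory means. A short sketch on hypothetical RTs:

```python
import numpy as np

def relative_mre(rt_a, rt_v, rt_av):
    """Relative multisensory response enhancement (percent):
    how much faster the mean AV response is than the faster
    of the two unisensory mean RTs."""
    best_uni = min(np.mean(rt_a), np.mean(rt_v))
    return 100 * (best_uni - np.mean(rt_av)) / best_uni

# Hypothetical means (ms): A ~420, V ~400, AV ~340 -> rMRE ~15%.
rng = np.random.default_rng(3)
print(relative_mre(rng.normal(420, 60, 200),
                   rng.normal(400, 60, 200),
                   rng.normal(340, 50, 200)))
```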


2019
Author(s): Alex L. White, Geoffrey M. Boynton, Jason D. Yeatman

Interacting with a cluttered and dynamic environment requires making decisions about visual information at relevant locations while ignoring irrelevant locations. Typical adults can do this with covert spatial attention: prioritizing particular visual field locations even without moving the eyes. Deficits of covert spatial attention have been implicated in developmental dyslexia, a specific reading disability. Previous studies of children with dyslexia, however, have been complicated by group differences in overall task ability that are difficult to distinguish from selective spatial attention. Here, we used a single-fixation visual search task to estimate orientation discrimination thresholds with and without an informative spatial cue in a large sample (N=123) of people ranging in age from 5 to 70 years and with a wide range of reading abilities. We assessed the efficiency of attentional selection via the cueing effect: the difference in log thresholds with and without the spatial cue. Across our whole sample, both absolute thresholds and the cueing effect gradually improved throughout childhood and adolescence. Compared to typical readers, individuals with dyslexia had higher thresholds (worse orientation discrimination) as well as smaller cueing effects (weaker attentional selection). Those differences in dyslexia were especially pronounced prior to age 20, when basic visual function is still maturing. Thus, in line with previous theories, literacy skills are associated with the development of selective spatial attention.
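
The paper's attentional index, the difference in log thresholds with and without the cue, is simple to compute per participant. A sketch with a percentile-bootstrap confidence interval (all threshold values below are simulated for illustration):

```python
import numpy as np

def cueing_effect(thr_uncued, thr_cued):
    """Attentional selection index: difference of log10 thresholds.
    Larger values = bigger benefit from the informative spatial cue."""
    return np.log10(thr_uncued) - np.log10(thr_cued)

# Hypothetical orientation-discrimination thresholds (degrees),
# one uncued/cued pair per participant.
rng = np.random.default_rng(4)
uncued = rng.lognormal(np.log(8.0), 0.3, 123)
cued = rng.lognormal(np.log(5.0), 0.3, 123)
effects = cueing_effect(uncued, cued)

# Percentile bootstrap over participants.
boot = [np.mean(rng.choice(effects, len(effects))) for _ in range(2000)]
print(np.mean(effects), np.percentile(boot, [2.5, 97.5]))
```

Working in log units makes the effect a ratio measure, so a given cueing benefit is comparable across observers whose absolute thresholds differ.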


1991 · Vol 3 (4) · pp. 345–350
Author(s): Anne Boylan Clohessy, Michael I. Posner, Mary K. Rothbart, Shaun P. Vecera

The posterior visual spatial attention system involves a number of separable computations that allow orienting to visual locations. We have studied one of these computations, inhibition of return, in 3-, 4-, 6-, 12-, and 18-month-old infants and adults. Our results indicate that this computation develops rapidly between 3 and 6 months, in conjunction with the ability to program eye movements to specific locations. These findings demonstrate that an attention computation involving the midbrain eye movement system develops after the third month of life. We suggest how this development might influence the infant's ability to represent and expect visual objects.


2020
Author(s): Sreenivasan Meyyappan, Abhijit Rajan, George R Mangun, Mingzhou Ding

Feature-based attention refers to the preferential selection and processing of items and objects based on their non-spatial attributes, such as color or shape. While it is intuitively an easier form of attention to relate to in our day-to-day lives, the neural mechanisms of feature-based attention are not well understood. Studies have long implicated the dorsal attention network as a key control system for voluntary spatial, feature, and object-based attention. Recent studies have expanded on this model by proposing the inferior frontal junction (IFJ), a region in prefrontal cortex, as the source of feature attention control, but not spatial attention control. However, the extent to which IFJ contributes to spatial attention remains a topic of debate. We investigated the role of IFJ in the control of feature versus spatial attention in a cued visual spatial (attend left or right) and feature attention (attend red or green) task using fMRI. Analyzing single-trial cue-evoked fMRI responses using univariate GLM and multi-voxel pattern analysis (MVPA), we observed the following. First, the univariate BOLD activation responses yielded no significant differences between feature and spatial cues. Second, MVPA showed above-chance decoding in classifying feature attention (attend-red vs. attend-green) in both the left and right IFJ, whereas decoding of spatial attention (attend-left vs. attend-right) was at chance. Third, while cue-evoked decoding accuracy was significant for both left and right IFJ during feature attention, target stimulus-evoked neural responses did not differ. Importantly, only the connectivity patterns from the right IFJ were predictive of target-evoked activity in visual cortex (V4); this was true for both left and right V4. Finally, the strength of this connectivity between right IFJ and V4 (bilaterally) was found to be predictive of behavioral performance. These results support a model in which the right IFJ plays a crucial role in the top-down control of feature but not spatial attention.
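
The MVPA step, classifying attend-red vs. attend-green from single-trial cue-evoked patterns in an IFJ region of interest, follows a standard cross-validated decoding recipe. A minimal scikit-learn sketch on simulated patterns (the feature matrix, ROI size, and injected signal are placeholders, not the study's data):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Placeholder for single-trial cue-evoked patterns in an IFJ ROI:
# X has one row per trial, one column per voxel; y codes the cue.
rng = np.random.default_rng(5)
n_trials, n_voxels = 120, 200
y = np.repeat([0, 1], n_trials // 2)   # 0 = attend-red, 1 = attend-green
X = rng.standard_normal((n_trials, n_voxels))
X[y == 1, :20] += 0.4                  # weak multivoxel signal for class 1

clf = make_pipeline(StandardScaler(), LinearSVC())  # common MVPA pipeline
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```

Above-chance cross-validated accuracy is the evidence that the cue condition is linearly decodable from the multivoxel pattern, even when univariate contrasts show no difference.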


2000 · Vol 53 (1) · pp. 105–130
Author(s): Michel Schmitt, Albert Postma, Edward De Haan

Six experiments were carried out to investigate crossmodal links between exogenous auditory and visual spatial attention, employing Posner's cueing paradigm in detection, localization, and discrimination tasks. Results indicated cueing in detection tasks with visual or auditory cues and visual targets, but not with auditory targets (Experiment 1). In the localization tasks, cueing was found with both visual and auditory targets. Inhibition of return was apparent only in the within-modality conditions (Experiment 2). This suggests that it matters whether the attention system is activated directly (within a modality) or indirectly (between modalities). Increasing the cue validity from 50% to 80% influenced performance only in the localization task (Experiment 4). These findings are interpreted as indicative of modality-specific but interacting attention mechanisms. The results of Experiments 5 and 6 (up/down discrimination tasks) also show crossmodal cueing, but not with visual cues and auditory targets. Furthermore, there was no inhibition of return in any condition. This suggests that some cueing effects might be task dependent.
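
The cueing and IOR effects reported throughout these studies reduce to a simple contrast on mean (or median) RTs per validity condition. A sketch on hypothetical data, where a negative validity effect signals IOR:

```python
import numpy as np

def validity_effect(rt_valid, rt_invalid):
    """Posner cueing contrast: invalid minus valid mean RT.
    Positive = facilitation at the cued location;
    negative = inhibition of return (IOR)."""
    return np.mean(rt_invalid) - np.mean(rt_valid)

# Hypothetical within-modality condition at a long cue-target interval:
# responses at the cued (valid) location are slower, i.e. IOR.
rng = np.random.default_rng(6)
print(validity_effect(rt_valid=rng.normal(460, 50, 100),
                      rt_invalid=rng.normal(430, 50, 100)))
```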

