Predictive remapping of visual features beyond saccadic targets

2018 ◽  
Author(s):  
Tao He ◽  
Matthias Fritsche ◽  
Floris P. de Lange

Visual stability is thought to be mediated by predictive remapping of the relevant object information from its current, pre-saccadic location to its future, post-saccadic location on the retina. However, it is heavily debated whether and what feature information is predictively remapped during the pre-saccadic interval. Using an orientation adaptation paradigm, we investigated whether predictive remapping occurs for stimulus features and whether adaptation itself is remapped. We found strong evidence for predictive remapping of a stimulus presented shortly before saccade onset, but no remapping of adaptation. Furthermore, we establish that predictive remapping also occurs for stimuli that are not saccade targets, pointing toward a ‘forward remapping’ process operating across the whole visual field. Together, our findings suggest that predictive feature remapping of object information plays an important role in mediating visual stability.

2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses for images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category (“animacy”) dimensions in a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (extra-visual animacy cluster - xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models for the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), in contrast to the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt over the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
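The core analysis step — asking how much of a neural category organisation is explained by CNN feature differences — can be sketched with representational similarity analysis. The "features" below are random placeholders with an injected category signal, not activations of the actual network or the study's stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "CNN features": in the real analysis these are activations of
# a deep network layer, one vector per stimulus image. Here we draw random
# vectors and inject a weak animacy signal so the sketch runs end to end.
n_stim, n_feat = 24, 128
animacy = np.repeat([0, 1], n_stim // 2)           # 0 = object, 1 = animal
features = rng.normal(size=(n_stim, n_feat))
features[animacy == 1, :16] += 2.0                 # injected category signal

def rdm(x):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(x)

# Model RDM for animacy: dissimilar across categories, similar within.
model = (animacy[:, None] != animacy[None, :]).astype(float)

# Correlate the off-diagonal entries of the two RDMs: a positive value means
# the feature differences carry (part of) the category organisation.
iu = np.triu_indices(n_stim, k=1)
r = np.corrcoef(rdm(features)[iu], model[iu])[0, 1]
print(f"feature-animacy RDM correlation: {r:.2f}")
```

The same correlation, computed with neural RDMs from xVAC or VTC in place of the model RDM, quantifies how far CNN feature differences account for the measured organisation.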


2021 ◽  
Author(s):  
Miao Li ◽  
Bert Reynvoet ◽  
Bilge Sayim

Humans can estimate the number of visually displayed items without counting. This capacity of numerosity perception has often been attributed to a dedicated system to estimate numerosity, or alternatively to the exploitation of various stimulus features, such as density, convex hull, the size of items and occupancy area. The distribution of the presented items is usually not varied with eccentricity in the visual field. However, our visual fields are highly asymmetric, and to date, it is unclear how inhomogeneities of the visual field impact numerosity perception. Besides eccentricity, a pronounced asymmetry is the radial-tangential anisotropy. For example, in crowding, radially placed flankers interfere more strongly with target perception than tangentially placed flankers. Similarly, in redundancy masking, the number of perceived items in repeating patterns is reduced when the items are arranged radially but not when they are arranged tangentially. Here, we investigated whether numerosity perception is subject to the radial-tangential anisotropy of spatial vision to shed light on the underlying topology of numerosity perception. Observers were presented with varying numbers of discs and asked to report the perceived number. There were two conditions. Discs were predominantly arranged radially in the “radial” condition and tangentially in the “tangential” condition. Additionally, the spacing between discs was scaled with eccentricity. Physical properties, such as average eccentricity, average spacing, convex hull, and density were kept as similar as possible in the two conditions. Radial arrangements were expected to yield underestimation compared to tangential arrangements. Consistent with the hypothesis, numerosity estimates in the radial condition were lower compared to the tangential condition. Magnitudes of radial alignment (as well as predicted crowding strength) correlated with the observed numerosity estimates. 
Our results demonstrate a robust radial-tangential anisotropy, suggesting that the topology of spatial vision determines numerosity estimation. We suggest that asymmetries of spatial vision should be taken into account when investigating numerosity estimation.
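The radial versus tangential manipulation can be illustrated with a toy stimulus generator. The meridian, spacing, and eccentricity-scaling values below are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

def disc_positions(n, base_ecc=4.0, spacing=0.5, arrangement="radial"):
    """Place n discs relative to fixation at (0, 0), in degrees of visual
    angle. 'radial' stacks discs along the line from fixation outward;
    'tangential' spreads them perpendicular to that line at a fixed
    eccentricity. Spacing grows with eccentricity (illustrative scaling)."""
    theta = np.pi / 4                      # meridian of the disc group
    scale = 1 + 0.1 * base_ecc             # eccentricity scaling factor
    pos = []
    for i in range(n):
        offset = (i - (n - 1) / 2) * spacing * scale
        if arrangement == "radial":
            ecc = base_ecc + offset
            pos.append((ecc * np.cos(theta), ecc * np.sin(theta)))
        else:                              # rotate around fixation
            arc = offset / base_ecc        # angular step (radians)
            pos.append((base_ecc * np.cos(theta + arc),
                        base_ecc * np.sin(theta + arc)))
    return np.array(pos)

radial = disc_positions(5, arrangement="radial")
tangential = disc_positions(5, arrangement="tangential")
# Both conditions share the same average eccentricity (4 deg) and nearly the
# same inter-disc spacing; only the arrangement axis differs.
print(np.linalg.norm(radial, axis=1).mean(),
      np.linalg.norm(tangential, axis=1).mean())
```

This matching of average eccentricity and spacing across conditions mirrors the abstract's point that physical properties were equated so that only the arrangement axis could drive the estimation difference.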


2019 ◽  
Vol 5 (7) ◽  
pp. eaaw4358 ◽  
Author(s):  
Philip A. Kragel ◽  
Marianne C. Reddan ◽  
Kevin S. LaBar ◽  
Tor D. Wager

Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.
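At the readout stage, decoding images into 11 emotion categories reduces to a multinomial classifier over visual features. A minimal numpy sketch with synthetic features follows; the actual model is a full convolutional network trained on emotional images, so everything here (feature dimensionality, signal strength, trial counts) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for CNN image features with an injected category signal;
# in the study the labels are 11 emotion categories assigned to images.
n_cat, n_per, n_feat = 11, 40, 64
y = np.repeat(np.arange(n_cat), n_per)
X = rng.normal(size=(len(y), n_feat))
X[np.arange(len(y)), y] += 2.0           # one signal dimension per category

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Multinomial logistic readout trained by batch gradient descent.
W = np.zeros((n_feat, n_cat))
onehot = np.eye(n_cat)[y]
for _ in range(200):
    p = softmax(X @ W)
    W -= 0.1 * X.T @ (p - onehot) / len(y)

acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(f"training accuracy over {n_cat} categories: {acc:.2f} "
      f"(chance = {1 / n_cat:.2f})")
```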


2018 ◽  
Author(s):  
Julia Bergelt ◽  
Fred H. Hamker

As we scan our environment, the retinal image changes with every saccade. Nevertheless, the visual system anticipates where an attended target will be next, and attention is updated to the new location. Recently, two different types of perisaccadic attentional updating were discovered: predictive remapping of attention before saccade onset (Rolfs, Jonikaitis, Deubel, & Cavanagh, 2011) and lingering of attention after the saccade (Golomb, Chun, & Mazer, 2008; Golomb, Pulido, Albrecht, Chun, & Mazer, 2010). Here we propose a neuro-computational model, located in LIP, based on a previous model of perisaccadic space perception (Ziesche & Hamker, 2011, 2014). Our model accounts for both types of attentional updating at a neural systems level: the lingering effect originates from the late updating of the proprioceptive eye-position signal, and the remapping from the early corollary discharge signal. We put these results in relation to predictive remapping of receptive fields and show that both phenomena arise from the same simple, recurrent neural circuit. Thus, together with the previously published results, the model provides a comprehensive framework for discussing multiple experimental observations that occur around saccades.
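The different timing of the two eye-related signals can be caricatured in a one-dimensional toy: an early corollary discharge predictively shifts the attention map, while the late proprioceptive eye-position signal leaves attention lingering at the old retinotopic location. All numbers are illustrative; the actual model is a recurrent LIP circuit, not this lookup:

```python
import numpy as np

# One-dimensional caricature of the two eye-related signals in the model.
positions = np.arange(-20, 21)      # retinotopic positions (deg)
target = 10                         # attended location before the saccade
saccade = 8                         # saccade amplitude (deg)

def attention_map(center, width=2.0):
    """Gaussian attention profile over retinotopic space."""
    return np.exp(-0.5 * ((positions - center) / width) ** 2)

# Early corollary discharge: attention is predictively remapped to where the
# target will land on the retina after the eyes move.
pre_saccadic = attention_map(target - saccade)

# Late proprioceptive eye-position signal: right after the saccade, attention
# still lingers at the old (now incorrect) retinotopic location.
post_saccadic_linger = attention_map(target)

print(positions[pre_saccadic.argmax()], positions[post_saccadic_linger.argmax()])
```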


2019 ◽  
Author(s):  
Cooper A. Smout ◽  
Matthew F. Tang ◽  
Marta I. Garrido ◽  
Jason B. Mattingley

The human brain is thought to optimise the encoding of incoming sensory information through two principal mechanisms: prediction uses stored information to guide the interpretation of forthcoming sensory events, and attention prioritises these events according to their behavioural relevance. Despite the ubiquitous contributions of attention and prediction to various aspects of perception and cognition, it remains unknown how they interact to modulate information processing in the brain. A recent extension of predictive coding theory suggests that attention optimises the expected precision of predictions by modulating the synaptic gain of prediction error units. Since prediction errors code for the difference between predictions and sensory signals, this model would suggest that attention increases the selectivity for mismatch information in the neural response to a surprising stimulus. Alternative predictive coding models propose that attention increases the activity of prediction (or ‘representation’) neurons, and would therefore suggest that attention and prediction synergistically modulate selectivity for feature information in the brain. Here we applied multivariate forward encoding techniques to neural activity recorded via electroencephalography (EEG) as human observers performed a simple visual task, to test for the effect of attention on both mismatch and feature information in the neural response to surprising stimuli. Participants attended or ignored a periodic stream of gratings, the orientations of which could be either predictable, surprising, or unpredictable. We found that surprising stimuli evoked neural responses that were encoded according to the difference between predicted and observed stimulus features, and that attention facilitated the encoding of this type of information in the brain. 
These findings advance our understanding of how attention and prediction modulate information processing in the brain, and support the theory that attention optimises precision expectations during hierarchical inference by increasing the gain of prediction errors.
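Multivariate forward encoding can be sketched as follows: model each trial's sensor data as a weighted sum of hypothetical orientation channels, estimate the channel-to-sensor weights on training trials, then invert the weights on held-out trials to reconstruct channel responses. The basis functions, channel count, and noise level here are assumptions for illustration, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

n_chan, n_sensors, n_trials = 6, 32, 300
centers = np.arange(n_chan) * 180 / n_chan        # channel centers (deg)
oris = rng.uniform(0, 180, n_trials)              # stimulus orientations

def channel_responses(ori):
    """Half-wave-rectified cosine tuning curves (a common basis choice)."""
    d = np.deg2rad(ori[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0) ** 5

# Simulate sensor data as a weighted sum of channel responses plus noise.
C = channel_responses(oris)                        # trials x channels
W = rng.normal(size=(n_sensors, n_chan))           # true forward weights
eeg = C @ W.T + 0.5 * rng.normal(size=(n_trials, n_sensors))

train, test = slice(0, 200), slice(200, 300)
# Fit the forward model eeg = C @ W.T by least squares on training trials...
W_hat, *_ = np.linalg.lstsq(C[train], eeg[train], rcond=None)
# ...then invert it on held-out trials to reconstruct channel responses.
C_hat = eeg[test] @ np.linalg.pinv(W_hat)
decoded = centers[C_hat.argmax(axis=1)]
err = np.abs((decoded - oris[test] + 90) % 180 - 90)
print(f"median absolute decoding error: {np.median(err):.1f} deg")
```

The same machinery can be pointed at mismatch rather than feature information by building the design matrix from the difference between predicted and observed orientations, which is the contrast the study tests.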


2021 ◽  
Author(s):  
Sebastian O Andersson ◽  
Edvard I Moser ◽  
May-Britt Moser

Object-vector (OV) cells are cells in the medial entorhinal cortex (MEC) that track an animal's distance and direction to objects in the environment. Their firing fields are defined by vectorial relationships to free-standing three-dimensional (3D) objects of a variety of identities and shapes. However, the natural world contains a panorama of objects, ranging from discrete 3D items to flat two-dimensional (2D) surfaces, and it remains unclear which features of objects are most fundamental in driving vectorial responses. Here we address this question by systematically changing features of experimental objects. Using an algorithm that robustly identifies OV firing fields, we show that the cells respond to a variety of 2D surfaces, with visual contrast as the most basic visual feature to elicit neural responses. The findings suggest that OV cells use plain visual features as vectorial anchoring points, allowing vector-guided navigation to proceed in environments with few free-standing landmarks.
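The notion of a vectorial firing field, and of an algorithm that identifies one, can be illustrated with a synthetic cell whose rate depends only on the animal's displacement from an object: binning firing rate by object-relative displacement recovers the preferred vector. All parameters below are invented for the demo and unrelated to the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic object-vector cell: it fires whenever the animal sits at a fixed
# displacement from the object, wherever in the box that happens to be.
obj = np.array([0.6, 0.4])              # object location in a 1 m square box
pref_vec = np.array([0.15, -0.10])      # preferred object-to-field vector (m)

pos = rng.uniform(0, 1, size=(50_000, 2))          # sampled animal positions
d = pos - (obj + pref_vec)
rate = 10 * np.exp(-(d ** 2).sum(axis=1) / (2 * 0.05 ** 2))   # Hz

# "Vector map": mean firing rate binned by displacement from the object.
disp = pos - obj
bins = np.linspace(-1, 1, 41)                      # 5 cm bins
ix = np.digitize(disp[:, 0], bins) - 1
iy = np.digitize(disp[:, 1], bins) - 1
vmap = np.zeros((40, 40))
cnt = np.zeros((40, 40))
np.add.at(vmap, (ix, iy), rate)
np.add.at(cnt, (ix, iy), 1)
vmap = np.where(cnt > 0, vmap / np.maximum(cnt, 1), 0.0)

# The peak of the vector map recovers the cell's preferred vector.
peak = np.unravel_index(vmap.argmax(), vmap.shape)
peak_vec = (bins[peak[0]] + 0.025, bins[peak[1]] + 0.025)
print(f"recovered field vector = ({peak_vec[0]:.3f}, {peak_vec[1]:.3f}) m")
```

Because the map is built in object-centred coordinates, the recovered vector is invariant to where the object is placed, which is the defining property of an OV firing field.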


2022 ◽  
Author(s):  
Nina M Hanning ◽  
Heiner Deubel

Even before the onset of a saccadic eye movement, we preferentially process visual information at the upcoming eye fixation. This 'presaccadic shift of attention' is typically assessed via localized test items, which potentially bias the attention measurement. Here we show how presaccadic attention shapes perception from saccade origin to target when no scene-structuring items are presented. Participants made saccades into a 1/f ('pink') noise field, in which we embedded a brief orientation signal at various locations shortly before saccade onset. Local orientation discrimination performance served as a proxy for the allocation of attention. Results demonstrate that (1) saccades are preceded by shifts of attention to their goal location even if they are directed into an unstructured visual field, although the spread of attention is broad compared to target-directed saccades; (2) the presaccadic attention shift is accompanied by considerable attentional costs at the presaccadic eye fixation; and (3) objects markedly shape the distribution of presaccadic attention, demonstrating the relevance of an item-free approach for measuring attentional dynamics across the visual field.
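A 1/f noise field of the kind used here can be generated by imposing a 1/f amplitude spectrum on random phases, with a local orientation probe added on top. This is a standard construction, not the study's exact stimulus code; the probe location, size, and contrast are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def pink_noise_field(n=256):
    """1/f ('pink') noise image: random phases shaped by a 1/f amplitude
    spectrum, then normalised to zero mean and unit variance."""
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    amp = 1.0 / np.maximum(np.hypot(fx, fy), 1.0 / n)
    spectrum = amp * np.exp(2j * np.pi * rng.random((n, n)))
    img = np.real(np.fft.ifft2(spectrum))
    return (img - img.mean()) / img.std()

field = pink_noise_field()

# Embed a brief local orientation signal (a Gabor patch) at a probe location;
# discrimination of its tilt then serves as the attention proxy.
y, x = np.mgrid[-32:32, -32:32]
theta = np.deg2rad(45)                             # signal orientation
carrier = np.cos(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / 16)
gabor = np.exp(-(x ** 2 + y ** 2) / (2 * 8.0 ** 2)) * carrier
field[100:164, 100:164] += 0.8 * gabor

print(field.shape)
```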


2018 ◽  
Author(s):  
Angus F. Chapman ◽  
Viola S. Störmer

Theories of visual attention differ in what they define as the core unit of selection. Feature-based theories emphasize the importance of visual features (e.g., color, size, motion), demonstrated through enhancement of attended features across the visual field, while object-based theories propose that attention enhances all features belonging to the same object. Here we test how within-object enhancement of features interacts with spatially global effects of feature-based attention. Participants attended a set of colored dots (moving coherently upwards or downwards) to detect brief luminance decreases, while simultaneously detecting speed changes in another set of dots in the opposite visual field. Participants had higher speed detection rates for the dot array that matched the motion direction of the attended color array, although motion direction was entirely task-irrelevant. This effect persisted even when it was detrimental for task performance. Overall, these results indicate that task-irrelevant object features are enhanced globally, surpassing object boundaries.


Author(s):  
Kassandra R. Lee ◽  
Elizabeth Groesbeck ◽  
O. Scott Gwinn ◽  
Michael A. Webster ◽  
Fang Jiang

Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can show greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and a subsequent increase in the allocation of attentional resources to peripheral locations, while enhancements in face processing abilities are thought to be driven by experience with American Sign Language and not necessarily hearing loss. This, combined with the fact that face processing abilities typically decline with eccentricity, suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, the authors examined whether deaf individuals’ enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. Their results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and further extend them by showing that these enhancements also occur in the periphery for more complex stimuli.


2011 ◽  
Vol 11 (11) ◽  
pp. 523-523 ◽  
Author(s):  
F. H. Hamker ◽  
A. Ziesche
