How attention extracts objects from noise

2013, Vol 110 (6), pp. 1346–1356
Author(s): Michael S. Pratte, Sam Ling, Jascha D. Swisher, Frank Tong

The visual system is remarkably proficient at extracting relevant object information from noisy, cluttered environments. Although attention is known to enhance sensory processing, the mechanisms by which attention extracts relevant information from noise are not well understood. According to the perceptual template model, attention may act to amplify responses to all visual input, or it may act as a noise filter, dampening responses to irrelevant visual noise. Amplification allows for improved performance in the absence of visual noise, whereas a noise-filtering mechanism can only improve performance if the target stimulus appears in noise. Here, we used fMRI to investigate how attention modulates cortical responses to objects at multiple levels of the visual pathway. Participants viewed images of faces, houses, chairs, and shoes, presented in various levels of visual noise. We used multivoxel pattern analysis to predict the viewed object category, for attended and unattended stimuli, from cortical activity patterns in individual visual areas. Early visual areas, V1 and V2, exhibited a benefit of attention only at high levels of visual noise, suggesting that attention operates via a noise-filtering mechanism at these early sites. By contrast, attention led to enhanced processing of noise-free images (i.e., amplification) only in higher visual areas, including area V4, the fusiform face area, the mid-fusiform area, and the lateral occipital cortex. Together, these results suggest that attention improves people's ability to discriminate objects by de-noising visual input in early visual areas and amplifying this noise-reduced signal at higher stages of visual processing.
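
As a purely illustrative companion to the decoding approach described in this abstract (not the authors' pipeline), the Python sketch below simulates voxel patterns and measures category-classification accuracy separately for each attention condition and noise level; the gains, noise levels, and data are invented.

```python
# Toy MVPA sketch: decode object category from simulated voxel patterns,
# separately for attended/unattended conditions and several noise levels.
# All parameters here are illustrative assumptions, not values from the study.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 80, 50, 4  # faces, houses, chairs, shoes

def simulate_patterns(signal_gain, noise_sd):
    """Voxel patterns = category template scaled by gain, plus Gaussian noise."""
    templates = rng.normal(size=(n_categories, n_voxels))
    labels = rng.integers(0, n_categories, n_trials)
    X = signal_gain * templates[labels] + rng.normal(0, noise_sd, (n_trials, n_voxels))
    return X, labels

for condition, gain in [("attended", 1.5), ("unattended", 1.0)]:
    for noise_sd in (0.5, 2.0, 4.0):  # low -> high visual noise
        X, y = simulate_patterns(gain, noise_sd)
        acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
        print(f"{condition:10s}  noise={noise_sd:3.1f}  accuracy={acc:.2f}")
```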

Perception, 1998, Vol 27 (8), pp. 889–935
Author(s): Peter Lennie

The visual system has a parallel and hierarchical organization, evident at every stage from the retina onwards. Although the general benefits of parallel and hierarchical organization in the visual system are easily understood, it has not been easy to discern the function of the visual cortical modules. I explore the view that striate cortex segregates information about different attributes of the image, and dispatches it for analysis to different extrastriate areas. I argue that visual cortex does not undertake multiple relatively independent analyses of the image from which it assembles a unified representation that can be interrogated about the what and where of the world. Instead, occipital cortex is organized so that perceptually relevant information can be recovered at every level in the hierarchy, that information used in making decisions at one level is not passed on to the next level, and, with one rather special exception (area MT), through all stages of analysis all dimensions of the image remain intimately coupled in a retinotopic map. I then offer some explicit suggestions about the analyses undertaken by visual areas in occipital cortex, and conclude by examining some objections to the proposals.


2016
Author(s): Yuanning Li, R. Mark Richardson, Avniel Singh Ghuman

Abstract
The lack of multivariate methods for decoding the representational content of interregional neural communication has made it difficult to know what information is represented in distributed brain circuit interactions. Here we present Multi-Connection Pattern Analysis (MCPA), which works by learning mappings between the activity patterns of two neural populations as a function of the information being processed. These maps are used to predict the activity of one neural population based on the activity of the other population. Successful MCPA-based decoding indicates the involvement of distributed computational processing and provides a framework for probing the representational structure of the interaction. Simulations demonstrate the efficacy of MCPA in realistic circumstances. Applying MCPA to fMRI data shows that interactions between visual cortex regions are sensitive to information that distinguishes individual natural images, suggesting that image individuation occurs through interactive computation across the visual processing network. MCPA-based representational similarity analysis (RSA) results support models of error coding in interactions among regions of the network. Further RSA analyses relate the non-linear information transformations between layers of a computational model of visual processing (HMAX) to the information transformations between regions of the visual processing network. Additionally, applying MCPA to human intracranial electrophysiological data demonstrates that the interaction between the occipital face area and the fusiform face area contains information about individual faces. Thus, MCPA can be used to assess the information represented in the coupled activity of interacting neural circuits and to probe the underlying principles of information transformation between regions.
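
The core MCPA move, predicting one population's activity from another via condition-specific mappings, can be sketched as follows. This is a minimal toy version with simulated data; the ridge mappings, dimensions, and classification rule are assumptions rather than the published method's details.

```python
# Hypothetical MCPA-style sketch: fit one linear map from region A to region B
# per stimulus condition, then classify held-out trials by which condition's
# map best predicts region B activity from region A activity.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_test, dim_a, dim_b = 100, 40, 30, 30

# Toy data: each condition uses a different A -> B transformation.
W = {c: rng.normal(size=(dim_a, dim_b)) for c in (0, 1)}

def make_trials(n, c):
    A = rng.normal(size=(n, dim_a))
    B = A @ W[c] + 0.5 * rng.normal(size=(n, dim_b))
    return A, B

maps = {}
for c in (0, 1):
    A, B = make_trials(n_train, c)
    maps[c] = Ridge(alpha=1.0).fit(A, B)  # condition-specific A -> B mapping

correct = 0
for true_c in (0, 1):
    A, B = make_trials(n_test, true_c)
    for a, b in zip(A, B):
        errs = {c: np.sum((m.predict(a[None]) - b) ** 2) for c, m in maps.items()}
        correct += min(errs, key=errs.get) == true_c
print("MCPA-style decoding accuracy:", correct / (2 * n_test))
```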


2021
Author(s): Xiang Wang, Anirvan S. Nandy, Monika P. Jadi

Abstract
Contrast is a key feature of the visual scene that aids object recognition. Attention has been shown to selectively enhance responses to low-contrast stimuli in visual area V4, a critical hub that sends projections both up and down the visual hierarchy. Veridical encoding of contrast information is a key computation in early visual areas, while later stages encode higher-level features that benefit from improved sensitivity to low contrast. How area V4 meets these distinct information-processing demands in the attentive state is not known. We found that attentional modulation of contrast responses in area V4 is cortical-layer and cell-class specific. Putative excitatory neurons in the superficial output layers, which project to higher areas, show enhanced boosting of low-contrast information. On the other hand, putative excitatory neurons in the deep output layers, which project to early visual areas, exhibit contrast-independent scaling. Computational modeling revealed that such layer-wise differences may result from variations in the spatial integration extent of inhibitory neurons. These findings reveal that the nature of interactions between attention and contrast in V4 is highly compartmentalized, in alignment with the demands of the visual processing hierarchy.
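
The two attention effects contrasted here map naturally onto contrast-gain versus response-gain changes in a standard Naka-Rushton contrast-response function. The sketch below is illustrative only; it is not the authors' model, and all parameter values are made up.

```python
# Illustrative comparison: a contrast-gain change (lower c50) boosts responses
# mainly at low contrast, while a response-gain change scales responses
# independently of contrast. Parameters are invented for illustration.
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0):
    """Firing rate as a saturating function of stimulus contrast c."""
    return r_max * c**n / (c**n + c50**n)

contrast = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
base = naka_rushton(contrast)
contrast_gain = naka_rushton(contrast, c50=0.2)  # low-contrast boost
response_gain = 1.3 * base                       # contrast-independent scaling

for c, b, cg, rg in zip(contrast, base, contrast_gain, response_gain):
    print(f"c={c:.2f}  base={b:.2f}  contrast-gain={cg:.2f}  response-gain={rg:.2f}")
```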


2019
Author(s): Sirui Liu, Qing Yu, Peter U. Tse, Patrick Cavanagh

Summary
When perception differs from the physical stimulus, as it does for visual illusions and binocular rivalry, the opportunity arises to localize where perception emerges in the visual processing hierarchy. Representations prior to that stage differ from the eventual conscious percept even though they provide input to it. Here we investigate where and how a remarkable misperception of position emerges in the brain. This "double-drift" illusion causes a dramatic mismatch between retinal and perceived location, producing a perceived path that can differ from its physical path by 45° or more [1]. The deviations in the perceived trajectory can accumulate over at least a second [1], whereas other motion-induced position shifts accumulate over only 80 to 100 ms before saturating [2]. Using fMRI and multivariate pattern analysis, we find that the illusory path does not share activity patterns with a matched physical path in any early visual areas. In contrast, a whole-brain searchlight analysis reveals a shared representation in more anterior regions of the brain. These higher-order areas would have the longer time constants required to accumulate the small moment-to-moment position offsets that presumably originate in early visual cortices, and then transform these sensory inputs into a final conscious percept. The dissociation between perception and the activity in early sensory cortex suggests that perceived position does not emerge in what is traditionally regarded as the visual system but emerges instead at a much higher level.
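
One way to read the pattern-sharing test described here is as cross-decoding: train a classifier on physical-path trials and test it on illusory-path trials. The sketch below illustrates that logic on simulated data; it is not the study's searchlight implementation, and all numbers are invented.

```python
# Toy cross-decoding sketch: above-chance transfer from physical-path to
# illusory-path trials in a region suggests a shared representation there.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n, v = 60, 40  # trials and voxels per simulated region

def region(shared):
    """Toy region: path-direction templates, optionally shared across
    physical and illusory conditions."""
    t_phys = rng.normal(size=(2, v))
    t_ill = t_phys if shared else rng.normal(size=(2, v))
    y = rng.integers(0, 2, n)
    X_phys = t_phys[y] + rng.normal(0, 1.0, (n, v))
    X_ill = t_ill[y] + rng.normal(0, 1.0, (n, v))
    return X_phys, X_ill, y

for name, shared in [("early visual (no transfer)", False),
                     ("anterior region (transfer)", True)]:
    X_phys, X_ill, y = region(shared)
    clf = LinearSVC().fit(X_phys, y)  # train on physical paths only
    print(name, "cross-decoding accuracy:", round(clf.score(X_ill, y), 2))
```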


2017
Author(s): Bingbing Guo, Zhengang Lu, Jessica E. Goold, Huan Luo, Ming Meng

Abstract
The brain dynamically creates predictions about upcoming stimuli to guide perception efficiently. Recent behavioral results suggest that theta-band oscillations contribute to this prediction process; however, little is known about the underlying neural mechanism. Here, we combine fMRI and a time-resolved psychophysical paradigm to assess fine temporal-scale profiles of the fluctuations of brain activation patterns corresponding to visual object priming. Specifically, multi-voxel activity patterns in the fusiform face area (FFA) and the parahippocampal place area (PPA) show temporal fluctuations at a theta-band (~5 Hz) rhythm. Importantly, theta-band power in the FFA negatively correlates with reaction time, further indicating the critical role of the observed cortical theta oscillations. Moreover, the alpha-band (~10 Hz) rhythm shows a dissociated spatial distribution, mainly linked to the occipital cortex. To our knowledge, this is the first fMRI study to reveal temporal fluctuations of multi-voxel activity patterns and to demonstrate theta and alpha rhythms in the relevant brain areas.
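
The theta-band analysis amounts to estimating the spectrum of a time-resolved measure and reading out power near ~5 Hz. Below is a minimal sketch on simulated data, assuming a simple FFT rather than the study's actual pipeline; the sampling rate and signal are invented.

```python
# Toy spectral readout: find the dominant rhythm in a time-resolved
# behavioral/pattern time course and quantify power in the theta band.
import numpy as np

rng = np.random.default_rng(3)
fs = 60.0                            # assumed samples per second
t = np.arange(0, 2.0, 1.0 / fs)      # 2 s of time-resolved samples
# Toy time course: a 5 Hz theta rhythm buried in noise.
signal = 0.8 * np.sin(2 * np.pi * 5.0 * t) + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
theta = (freqs >= 4) & (freqs <= 7)
print("peak frequency:", freqs[np.argmax(spectrum)], "Hz")
print("theta power fraction:", spectrum[theta].sum() / spectrum.sum())
```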


2021, Vol 15
Author(s): Trung Quang Pham, Shota Nishiyama, Norihiro Sadato, Junichi Chikazoe

Multivoxel pattern analysis (MVPA) has become a standard tool for decoding mental states from brain activity patterns. Recent studies have demonstrated that MVPA can be applied to decode the activity patterns of one region from those of other regions. By applying a similar region-to-region decoding technique, we examined whether the information represented in a given visual area can be explained by the information represented in the other visual areas. We first predicted the brain activity patterns of an area on the visual pathway from the others, then subtracted the predicted patterns from their originals. Subsequently, visual features were derived from these residuals. During the visual perception task, eliminating the top-down signals enhanced the simple visual features represented in the early visual cortices. By contrast, eliminating the bottom-up signals enhanced the complex visual features represented in the higher visual cortices. The directions of these modulation effects varied across visual perception and imagery tasks, indicating that the information flow across the visual cortices is dynamically altered to reflect the contents of visual processing. These results demonstrate that this distillation approach is a useful tool for estimating the hidden content of information conveyed across brain regions.
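
The residual ("distillation") step described here can be sketched in a few lines: predict one region's patterns from another, subtract the prediction, and analyze what survives. The toy example below uses simulated data and a plain linear regression as an assumed stand-in for the actual decoder.

```python
# Toy distillation sketch: residuals of a region-to-region prediction should
# retain the target region's private signal but lose the shared one.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n, va, vb = 200, 30, 30

A = rng.normal(size=(n, va))                    # source region patterns
shared = A @ rng.normal(size=(va, vb))          # signal conveyed from A to B
private = rng.normal(size=(n, vb))              # B's own information
B = shared + private

pred = LinearRegression().fit(A, B).predict(A)  # region-to-region prediction
residual = B - pred                             # A's contribution removed

corr = lambda x, y: np.corrcoef(x.ravel(), y.ravel())[0, 1]
print("residual vs shared signal: ", round(corr(residual, shared), 2))
print("residual vs private signal:", round(corr(residual, private), 2))
```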


2011, Vol 23 (1), pp. 119–136
Author(s): Jason Fischer, Nicole Spotswood, David Whitney

Representing object position is one of the most critical functions of the visual system, but this task is not as simple as reading off an object's retinal coordinates. A rich body of literature has demonstrated that the position in which we perceive an object depends not only on retinotopy but also on factors such as attention, eye movements, object and scene motion, and frames of reference, to name a few. Despite the distinction between perceived and retinal position, strikingly little is known about how or where perceived position is represented in the brain. In the present study, we dissociated retinal and perceived object position to test the relative precision of retina-centered versus percept-centered position coding in a number of independently defined visual areas. In an fMRI experiment, subjects performed a five-alternative forced-choice position discrimination task; our analysis focused on the trials in which subjects misperceived the positions of the stimuli. Using a multivariate pattern analysis to track the coupling of the BOLD response with incremental changes in physical and perceived position, we found that activity in higher level areas—middle temporal complex, fusiform face area, parahippocampal place area, lateral occipital cortex, and posterior fusiform gyrus—more precisely reflected the reported positions than the physical positions of the stimuli. In early visual areas, this preferential coding of perceived position was absent or reversed. Our results demonstrate a new kind of spatial topography present in higher level visual areas in which an object's position is encoded according to its perceived rather than retinal location. We term such percept-centered encoding “perceptotopy”.
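
A hypothetical sketch of the precision comparison described above, on simulated data: ask whether pattern-based position estimates track the reported (perceived) positions or the physical ones. The ridge decoder and all parameters are assumptions for illustration, not the study's analysis.

```python
# Toy perceptotopy test: in a simulated higher-level region whose patterns
# follow the perceived position, pattern-based estimates should correlate
# more strongly with perceived than with physical position.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n, v = 150, 40
physical = rng.uniform(-2, 2, n)              # stimulus position (deg)
perceived = physical + rng.normal(0, 0.8, n)  # reported position on error trials

# Simulated region whose voxel patterns track the perceived position.
X = np.outer(perceived, rng.normal(size=v)) + rng.normal(0, 1.0, (n, v))

est = cross_val_predict(Ridge(alpha=1.0), X, perceived, cv=5)
r_perc = np.corrcoef(est, perceived)[0, 1]
r_phys = np.corrcoef(est, physical)[0, 1]
print(f"pattern estimate ~ perceived: r={r_perc:.2f}; ~ physical: r={r_phys:.2f}")
```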


2021, Vol 12 (1)
Author(s): Katrina R. Quinn, Lenka Seillier, Daniel A. Butts, Hendrikje Nienborg

Abstract
Feedback in the brain is thought to convey contextual information that underlies our flexibility to perform different tasks. Empirical and computational work on the visual system suggests this is achieved by targeting task-relevant neuronal subpopulations. We combined two tasks, each resulting in selective modulation by feedback, to test whether the feedback reflects the combination of both selectivities. We used a visual feature-discrimination task specified at one of two possible locations, and uncoupled the formation of the decision from the motor plan used to report it, while recording in macaque mid-level visual areas. Here we show that although the behavior is spatially selective, using only task-relevant information, modulation by decision-related feedback is spatially unselective. Population responses reveal similar stimulus-choice alignments irrespective of stimulus relevance. The results suggest a common mechanism across tasks, independent of the spatial selectivity those tasks demand. This may reflect biological constraints and facilitate generalization across tasks. Our findings also support a previously hypothesized link between feature-based attention and decision-related activity.
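
One plausible way to quantify the "stimulus-choice alignment" mentioned here is the angle between a population's stimulus-decoding axis and its choice-decoding axis. The sketch below illustrates that reading on simulated data; it is not the authors' analysis, and all names and parameters are invented.

```python
# Toy alignment measure: cosine similarity between the weight vectors of a
# stimulus decoder and a choice decoder fit to the same population responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, units = 300, 50
stim = rng.integers(0, 2, n)
# Choices follow the stimulus with some lapses (decision noise).
choice = np.where(rng.random(n) < 0.8, stim, 1 - stim)
axis = rng.normal(size=units)  # shared encoding axis in this toy population
X = (np.outer(stim - 0.5, axis)
     + 0.8 * np.outer(choice - 0.5, axis)
     + rng.normal(0, 1.0, (n, units)))

w_stim = LogisticRegression(max_iter=1000).fit(X, stim).coef_.ravel()
w_choice = LogisticRegression(max_iter=1000).fit(X, choice).coef_.ravel()
cos = w_stim @ w_choice / (np.linalg.norm(w_stim) * np.linalg.norm(w_choice))
print(f"stimulus-choice axis alignment (cosine): {cos:.2f}")
```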


2008, Vol 31 (2), pp. 210–212
Author(s): J. Patrick Mayo, Marc A. Sommer

Abstract
Saccades divide visual input into rapid, discontinuous periods of stimulation on the retina. The response of single neurons to such sequential stimuli is neuronal adaptation: a robust first response followed by an interval-dependent, diminished second response. Adaptation is pervasive in both early and late stages of visual processing. Given its inherent coding of brief time intervals, neuronal adaptation may play a fundamental role in compensating for visual delays.
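
A toy model of the interval dependence described here: the second response equals the first scaled by a suppression factor that decays with the interstimulus interval. Parameter values are illustrative, not fitted.

```python
# Toy adaptation model: suppression of the second response recovers
# exponentially with the interval between the two stimuli.
import numpy as np

def second_response(r1, interval_ms, max_suppression=0.6, tau_ms=150.0):
    """Second response = first response scaled by interval-dependent recovery."""
    suppression = max_suppression * np.exp(-interval_ms / tau_ms)
    return r1 * (1.0 - suppression)

r1 = 100.0  # spikes/s to the first stimulus
for isi in (50, 100, 200, 400, 800):
    print(f"ISI={isi:3d} ms -> second response {second_response(r1, isi):5.1f} sp/s")
```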


2012, Vol 24 (2), pp. 521–529
Author(s): Frank Oppermann, Uwe Hassler, Jörg D. Jescheniak, Thomas Gruber

The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate of a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse–cheese) or not (e.g., crown–mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.
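
The "evoked" gamma-band measure used here implies averaging trials before time-frequency analysis, so that only phase-locked activity survives. Below is a schematic sketch on simulated EEG, with an assumed complex Morlet wavelet implementation; sampling rate, burst frequency, and amplitudes are invented.

```python
# Schematic evoked gamma-band response: average trials first, then extract
# gamma power around 40 Hz with a complex Morlet wavelet.
import numpy as np

rng = np.random.default_rng(7)
fs = 500.0
t = np.arange(-0.2, 0.6, 1 / fs)
n_trials, f_gamma = 40, 40.0  # 40 Hz gamma burst

trials = rng.normal(0, 1.0, (n_trials, t.size))
burst = (t > 0.05) & (t < 0.25)
trials[:, burst] += 0.5 * np.sin(2 * np.pi * f_gamma * t[burst])  # phase-locked

evoked = trials.mean(axis=0)  # average first -> evoked (not induced) activity

# Complex Morlet wavelet at f_gamma (7 cycles).
cycles = 7.0
sigma = cycles / (2 * np.pi * f_gamma)
wt = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
wavelet = np.exp(2j * np.pi * f_gamma * wt) * np.exp(-wt**2 / (2 * sigma**2))
power = np.abs(np.convolve(evoked, wavelet, mode="same")) ** 2

print("gamma power, baseline vs post-stimulus:",
      round(power[t < 0].mean(), 2), round(power[burst].mean(), 2))
```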

