Top-down attention switches coupling between low-level and high-level areas of human visual cortex

2012 ◽ Vol 109 (36) ◽ pp. 14675-14680
Author(s): N. Al-Aidroos, C. P. Said, N. B. Turk-Browne

2014 ◽ Vol 98 (2) ◽ pp. 87-91
Author(s): Yasuhiro Kawashima, Hiroyuki Yamashiro, Hiroki Yamamoto, Tomokazu Murase, Yoshikatsu Ichimura, ...

2010 ◽ Vol 22 (6) ◽ pp. 1235-1243
Author(s): Marieke L. Schölvinck, Geraint Rees

Motion-induced blindness (MIB) is a visual phenomenon in which highly salient visual targets spontaneously disappear from visual awareness (and subsequently reappear) when superimposed on a moving background of distracters. Such fluctuations in awareness of the targets, even though the targets remain physically present, provide an ideal paradigm for studying the neural correlates of visual awareness. Existing behavioral data on MIB are consistent both with a role for structures early in visual processing and with involvement of high-level visual processes. To investigate this issue further, we used high-field functional MRI to measure signals in human low-level visual cortex and motion-sensitive area V5/MT while participants reported disappearance and reappearance of an MIB target. Surprisingly, perceptual invisibility of the target was coupled to an increase in activity in both low-level visual cortex and area V5/MT compared with when the target was visible. This increase was largest in retinotopic regions representing the target location. One possibility is that our findings result from an active process of completion of the field of distracters that acts locally in the visual cortex, coupled to a more global process that facilitates invisibility across visual cortex in general. Our findings show that the earliest anatomical stages of human visual cortical processing are implicated in MIB, as with other forms of bistable perception.


2008 ◽ Vol 28 (40) ◽ pp. 10056-10061
Author(s): S. L. Bressler, W. Tang, C. M. Sylvester, G. L. Shulman, M. Corbetta

2021 ◽ Vol 14
Author(s): Huijun Pan, Shen Zhang, Deng Pan, Zheng Ye, Hao Yu, ...

Previous studies indicate that top-down influence plays a critical role in visual information processing and perceptual detection. However, the substrate that carries top-down influence remains poorly understood. Using a combined technique of retrograde neuronal tracing and immunofluorescent double labeling, we characterized the distribution and cell type of feedback neurons in the cat's high-level visual cortical areas that send direct connections to the primary visual cortex (V1: area 17). Our results showed: (1) high-level visual area 21a in the ventral stream and the PMLS area in the dorsal stream have a similar proportion of feedback neurons projecting back to V1, (2) the distribution of feedback neurons in the higher-order visual area 21a and PMLS was significantly denser than in the intermediate visual cortex of areas 19 and 18, (3) feedback neurons in all observed high-level visual areas were found in layers II–III, IV, V, and VI, with a higher proportion in layers II–III, V, and VI than in layer IV, and (4) most feedback neurons were CaMKII-positive excitatory neurons, and few of them were identified as inhibitory GABAergic neurons. These results may argue against the segregation of ventral and dorsal streams during visual information processing, and support the "reverse hierarchy theory" or interactive models proposing that recurrent connections between V1 and higher-order visual areas constitute the functional circuits that mediate visual perception. Also, the corticocortical feedback neurons from high-level visual cortical areas to V1 are mostly excitatory in nature.


2009 ◽ Vol 19 (21) ◽ pp. 1799-1805
Author(s): Vincenzo Romei, Micah M. Murray, Céline Cappe, Gregor Thut

Author(s): Le Dong, Ebroul Izquierdo, Shuzhi Ge

In this chapter, research on visual information classification based on biologically inspired visually selective attention with knowledge structuring is presented. The research objective is to develop visual models and corresponding algorithms that automatically extract features from selected essential areas of natural images and, finally, achieve knowledge structuring and classification within a structural description scheme. The proposed scheme consists of three main aspects: biologically inspired visually selective attention, knowledge structuring, and classification of visual information. Biologically inspired visually selective attention closely follows the mechanisms of the visual "what" and "where" pathways in the human brain. The proposed visually selective attention model uses a bottom-up approach to generate essential areas based on low-level features extracted from natural images. The model also exploits a low-level top-down selective attention mechanism, which selects objects of interest through human interaction expressing preference or rejection. Knowledge structuring automatically creates a relevance map from the essential areas generated by visually selective attention. The developed algorithms derive a set of well-structured representations from the low-level description to drive the final classification. The knowledge structuring relies on human knowledge to produce suitable links between low-level descriptions and high-level representations on a limited training set. The backbone is a distribution-mapping strategy involving two novel modules: structured low-level feature extraction using a convolutional neural network, and topology preservation based on sparse representation and an unsupervised learning algorithm. Classification is achieved by simulating high-level top-down visual information perception using an incremental Bayesian parameter estimation method.
The utility of the proposed scheme for solving relevant research problems is validated. The proposed modular architecture offers straightforward expansion to include user relevance feedback, contextual input, and multimodal information if available.
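The incremental Bayesian parameter estimation step mentioned in the abstract can be illustrated with a minimal sketch. The class and naming below are hypothetical (the chapter's actual model is not specified here): a multinomial classifier over histograms of discrete low-level descriptors, whose Dirichlet-smoothed counts are updated one labeled sample at a time rather than retrained from scratch.

```python
import numpy as np

class IncrementalBayesClassifier:
    """Toy incremental Bayesian classifier (illustrative, not the
    chapter's implementation). Class priors and per-class feature
    likelihoods are Dirichlet-smoothed counts, so each new labeled
    sample is folded in with a constant-time update. Inputs are
    assumed to be histograms over `n_bins` discrete low-level
    descriptors, a stand-in for structured low-level features."""

    def __init__(self, n_classes, n_bins, alpha=1.0):
        self.alpha = alpha                         # Dirichlet pseudo-count
        self.class_counts = np.zeros(n_classes)
        self.feature_counts = np.zeros((n_classes, n_bins))

    def update(self, histogram, label):
        # Incremental step: add one labeled sample to the counts.
        self.class_counts[label] += 1
        self.feature_counts[label] += histogram

    def predict(self, histogram):
        # Posterior log-score per class under a multinomial model.
        log_prior = np.log(self.class_counts + self.alpha)
        smoothed = self.feature_counts + self.alpha
        probs = smoothed / smoothed.sum(axis=1, keepdims=True)
        scores = log_prior + (np.log(probs) * histogram).sum(axis=1)
        return int(np.argmax(scores))

# Usage: two classes with distinct descriptor histograms.
clf = IncrementalBayesClassifier(n_classes=2, n_bins=3)
clf.update(np.array([5.0, 1.0, 0.0]), 0)
clf.update(np.array([0.0, 1.0, 5.0]), 1)
print(clf.predict(np.array([4.0, 1.0, 0.0])))  # resembles class 0
```

Because the model state is just count matrices, user relevance feedback (mentioned in the abstract as a possible extension) would amount to further `update` calls.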

