Decreased temporal processing capacity for objects as a function of ascending the ventral visual pathway

2004 ◽  
Vol 4 (8) ◽  
pp. 514-514
Author(s):  
T. J. McKeeff ◽  
D. A. Remus ◽  
F. Tong

2007 ◽  
Vol 98 (1) ◽  
pp. 382-393 ◽  
Author(s):  
Thomas J. McKeeff ◽  
David A. Remus ◽  
Frank Tong

Behavioral studies have shown that object recognition becomes severely impaired at fast presentation rates, indicating a limitation in temporal processing capacity. Here, we studied whether this behavioral limit in object recognition reflects limitations in the temporal processing capacity of early visual areas tuned to basic features or high-level areas tuned to complex objects. We used functional MRI (fMRI) to measure the temporal processing capacity of multiple areas along the ventral visual pathway progressing from the primary visual cortex (V1) to high-level object-selective regions, specifically the fusiform face area (FFA) and parahippocampal place area (PPA). Subjects viewed successive images of faces or houses at presentation rates varying from 2.3 to 37.5 items/s while performing an object discrimination task. Measures of the temporal frequency response profile of each visual area revealed a systematic decline in peak tuning across the visual hierarchy. Areas V1–V3 showed peak activity at rapid presentation rates of 18–25 items/s, area V4v peaked at intermediate rates (9 items/s), and the FFA and PPA peaked at the slowest temporal rates (4–5 items/s). Our results reveal a progressive loss in the temporal processing capacity of the human visual system as information is transferred from early visual areas to higher areas. These data suggest that temporal limitations in object recognition likely result from the limited processing capacity of high-level object-selective areas rather than that of earlier stages of visual processing.
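The core measurement here — finding the presentation rate at which each area's response peaks — can be sketched as follows. The rate ladder and amplitude values below are hypothetical illustrations, not the study's data; only the tested range (2.3–37.5 items/s) and the qualitative pattern (early areas peak fast, object-selective areas peak slow) come from the abstract.

```python
# Sketch: locating the peak of a temporal frequency tuning profile.
# Rates and amplitudes are hypothetical (study range: 2.3-37.5 items/s).
rates = [2.3, 4.7, 9.4, 18.8, 37.5]  # items/s (assumed sampling)

# Hypothetical response amplitudes: an early area peaks at fast rates,
# an object-selective area at slow rates.
profiles = {
    "V1-like": [0.4, 0.7, 1.0, 1.3, 1.1],
    "FFA-like": [1.0, 1.4, 1.1, 0.6, 0.3],
}

def peak_rate(rates, amplitudes):
    """Presentation rate at which the response amplitude is maximal."""
    i = max(range(len(amplitudes)), key=amplitudes.__getitem__)
    return rates[i]

for area, amps in profiles.items():
    print(area, "peaks at", peak_rate(rates, amps), "items/s")
```

Comparing `peak_rate` across areas then yields the systematic decline in peak tuning across the hierarchy that the study reports.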


Author(s):  
Shijia Fan ◽  
Xiaosha Wang ◽  
Xiaoying Wang ◽  
Tao Wei ◽  
Yanchao Bi

2015 ◽  
Vol 35 (36) ◽  
pp. 12412-12424 ◽  
Author(s):  
A. Stigliani ◽  
K. S. Weiner ◽  
K. Grill-Spector

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yunjun Nam ◽  
Takayuki Sato ◽  
Go Uchida ◽  
Ekaterina Malakhova ◽  
Shimon Ullman ◽  
...  

Humans recognize individual faces regardless of variation in the facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors originating from natural image fragments. The resultant features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from faces of other identities in a space where each axis represented one of these features. These results suggest that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features suggests that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network comprising layers of low-level processing for local orientations and colors (V1/V2-level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT-level).
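The separability claim — that one identity's faces cluster apart from another's in the feature-activation space — can be sketched with a nearest-centroid test. The three-dimensional activation vectors below are toy values, not the recorded neural or fragment-feature responses.

```python
# Sketch: identity separability in a feature-activation space.
# Each face image is a vector of activations of view-sensitive
# fragment-based features; the values are illustrative toy data.
import math

def centroid(vecs):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(x) / len(vecs) for x in zip(*vecs)]

# Hypothetical activations (3 features) for two identities,
# each seen from three different views.
identity_A = [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2], [0.7, 0.2, 0.1]]
identity_B = [[0.2, 0.8, 0.6], [0.3, 0.9, 0.5], [0.1, 0.7, 0.6]]

cA, cB = centroid(identity_A), centroid(identity_B)

# A novel view of identity A should fall nearer A's centroid than B's,
# i.e. identity remains separable despite the view change.
novel_view_A = [0.85, 0.25, 0.15]
print(math.dist(novel_view_A, cA) < math.dist(novel_view_A, cB))
```

Separability along axes of view-sensitive features, rather than any single view-invariant axis, is what lets the shallow two-stage network described above suffice.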


2019 ◽  
Vol 31 (6) ◽  
pp. 821-836 ◽  
Author(s):  
Elliot Collins ◽  
Erez Freud ◽  
Jana M. Kainerstorfer ◽  
Jiaming Cao ◽  
Marlene Behrmann

Although shape perception is primarily considered a function of the ventral visual pathway, previous research has shown that both dorsal and ventral pathways represent shape information. Here, we examine whether the shape-selective electrophysiological signals observed in dorsal cortex are a product of the connectivity to ventral cortex or are independently computed. We conducted multiple EEG studies in which we manipulated the input parameters of the stimuli so as to bias processing to either the dorsal or ventral visual pathway. Participants viewed displays of common objects with shape information parametrically degraded across five levels. We measured shape sensitivity by regressing the amplitude of the evoked signal against the degree of stimulus scrambling. Experiment 1, which included grayscale versions of the stimuli, served as a benchmark establishing the temporal pattern of shape processing during typical object perception. These stimuli evoked broad and sustained patterns of shape sensitivity beginning as early as 50 msec after stimulus onset. In Experiments 2 and 3, we calibrated the stimuli such that visual information was delivered primarily through parvocellular inputs, which mainly project to the ventral pathway, or through koniocellular inputs, which mainly project to the dorsal pathway. In the second and third experiments, shape sensitivity was observed, but in distinct spatio-temporal configurations from each other and from that elicited by grayscale inputs. Of particular interest, in the koniocellular condition, shape selectivity emerged earlier than in the parvocellular condition. These findings support the conclusion of distinct dorsal pathway computations of object shape, independent from the ventral pathway.


2021 ◽  
Vol 118 (46) ◽  
pp. e2104779118
Author(s):  
T. Hannagan ◽  
A. Agrawal ◽  
L. Cohen ◽  
S. Dehaene

The visual word form area (VWFA) is a region of human inferotemporal cortex that emerges at a fixed location in the occipitotemporal cortex during reading acquisition and systematically responds to written words in literate individuals. According to the neuronal recycling hypothesis, this region arises through the repurposing, for letter recognition, of a subpart of the ventral visual pathway initially involved in face and object recognition. Furthermore, according to the biased connectivity hypothesis, its reproducible localization is due to preexisting connections from this subregion to areas involved in spoken-language processing. Here, we evaluate those hypotheses in an explicit computational model. We trained a deep convolutional neural network of the ventral visual pathway, first to categorize pictures and then to recognize written words invariantly for case, font, and size. We show that the model can account for many properties of the VWFA, particularly when a subset of units possesses a biased connectivity to word output units. The network develops a sparse, invariant representation of written words, based on a restricted set of reading-selective units. Their activation mimics several properties of the VWFA, and their lesioning causes a reading-specific deficit. The model predicts that, in literate brains, written words are encoded by a compositional neural code with neurons tuned either to individual letters and their ordinal position relative to word start or word ending or to pairs of letters (bigrams).
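The biased-connectivity claim — that a subset of units wired to word outputs develops reading selectivity, and lesioning it causes a reading-specific deficit — can be sketched with a toy readout. The unit names, weights, and activations below are invented for illustration, not the trained network's parameters.

```python
# Sketch of the biased-connectivity idea: only a subset of
# ventral-stream units projects to word outputs, so lesioning that
# subset abolishes reading while object recognition survives.
# All values are toy illustrations.
biased = {"u2", "u3"}  # units assumed to carry connections to word outputs

activations = {"u0": 0.5, "u1": 0.8, "u2": 0.9, "u3": 0.7}

def word_output(acts, lesioned=frozenset()):
    # Word readout sums only the biased, non-lesioned units.
    return sum(acts[u] for u in biased - lesioned)

def object_output(acts, lesioned=frozenset()):
    # Object readout sums all surviving units.
    return sum(a for u, a in acts.items() if u not in lesioned)

print(word_output(activations))                    # intact reading
print(word_output(activations, lesioned=biased))   # reading-specific deficit
print(object_output(activations, lesioned=biased) > 0)  # objects survive
```

The asymmetry — lesioning the biased subset zeroes the word readout but leaves the object readout positive — is the toy analogue of the reading-specific deficit the model predicts.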


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processing is primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamics of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting new insight into the algorithmic function and the information carried by the feedback processes in the ventral visual pathway.

Author summary
It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges and corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical content, e.g., faces and objects). Conversely, the feedback connections modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas.
In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with human brain temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.

