Selective effects of aging on simple and complex cells in primary visual cortex of rhesus monkeys

2012, Vol. 1470, pp. 17-23
Author(s): Zhen Liang, Hongxin Li, Yun Yang, Guangxing Li, Yong Tang, ...


2012, Vol. 24 (10), pp. 2700-2725
Author(s): Takuma Tanaka, Toshio Aoyagi, Takeshi Kaneko

We propose a new principle for replicating the receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network that maintains a low firing rate for the output neurons (temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (population sparseness). The learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, yielding neuronal reliability, and it is simple enough to be written in spatially and temporally local forms. After training on image patches of natural scenes, the output neurons of the model network exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed to another model layer trained with the same learning rule, the second-layer output neurons become less sensitive to grating phase than the simple-cell-like input neurons. In particular, some second-layer neurons become completely phase invariant, because connections from first-layer neurons with similar orientation selectivity converge onto them. We examine how the learned receptive field properties depend on the model parameters and discuss the biological implications. We also show that the localized learning rule is consistent with experimental results on neuronal plasticity and can replicate the receptive fields of both simple and complex cells.
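
As a rough illustration of the kind of objective described above, the sketch below trains a single feedforward layer with heuristic Hebbian updates that loosely mirror the three pressures named in the abstract: a low target rate per unit (temporal sparseness), a penalty when many units are co-active (population sparseness), and a push of each rate toward 0 or 1 (reliability). This is a minimal sketch, not the authors' derived rule; the sigmoid output, the random stand-in for whitened natural-image patches, and all constants are assumptions.

```python
# Minimal sketch (not the authors' derived rule): one feedforward layer with
# heuristic updates reflecting temporal sparseness, population sparseness,
# and reliability. Patches are a random stand-in for whitened natural images.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16 * 16, 64
patches = rng.standard_normal((10_000, n_in))   # stand-in for natural-image patches
W = 0.01 * rng.standard_normal((n_out, n_in))   # feedforward weights

eta, lam_t, lam_p, lam_r = 0.02, 0.1, 0.05, 0.1
target_rate = 0.05                              # low target mean rate per unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for x in patches:
    y = sigmoid(W @ x)                                        # output rates in (0, 1)
    hebb        = np.outer(y, x)                              # drive toward informative filters
    temporal    = -lam_t * np.outer(y - target_rate, x)       # keep each unit's mean rate low
    population  = -lam_p * (y.sum() - 1.0) * np.outer(y, x)   # discourage many co-active units
    reliability = lam_r * np.outer(2.0 * y - 1.0, x)          # push rates toward 0 or 1
    W += eta * (hebb + temporal + population + reliability)
    W /= np.linalg.norm(W, axis=1, keepdims=True)             # norm constraint keeps weights bounded
# Rows of W can then be inspected as candidate simple-cell-like filters.
```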


2012, Vol. 24 (5), pp. 1271-1296
Author(s): Michael Teichmann, Jan Wiltschut, Fred Hamker

The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. Interpreting the underlying neurobiological findings calls for a computational model that simulates the signal processing of the visual cortex, in which this invariance is likely built up step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies address the biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex that learns so-called complex cells from sequences of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
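
The sketch below illustrates one simple way such invariance learning is often modeled: a trace-based Hebbian rule, used here as a stand-in for the calcium-dependent plasticity the abstract describes, in which a slowly decaying activity trace gates the weight update while a homeostatic term holds the unit near a target rate. The toy simple-cell bank, the trace time constant, and all rates are assumptions, not the authors' model.

```python
# Minimal sketch: a complex-cell unit pools toy simple-cell responses over a
# short sequence of frames. A slow activity trace (a crude stand-in for
# calcium concentration) gates a Hebbian update; a homeostatic term keeps the
# unit near a target rate. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_simple = 32                                    # simple-cell inputs (one orientation, many phases)
w = np.abs(0.1 * rng.standard_normal(n_simple))  # pooling weights of one complex-cell unit

eta, tau_trace, target = 0.01, 0.9, 0.2
trace = 0.0

def simple_responses(phase):
    """Toy simple-cell bank: half-rectified tuning to different preferred phases."""
    prefs = np.linspace(0, 2 * np.pi, n_simple, endpoint=False)
    return np.maximum(np.cos(phase - prefs), 0.0)

for seq in range(2000):
    phase0 = rng.uniform(0, 2 * np.pi)
    for t in range(5):                           # one short sequence: same pattern, drifting phase
        x = simple_responses(phase0 + 0.3 * t)
        y = max(w @ x, 0.0)                      # complex-cell rate
        trace = tau_trace * trace + (1 - tau_trace) * y      # slow activity trace
        w += eta * (trace * x - (y - target) * w)            # trace-gated Hebbian + homeostasis
        w = np.maximum(w, 0.0)                   # keep pooling weights excitatory
print(w / (w.sum() + 1e-12))                     # learned pooling profile across phases
```

After training, the pooling weights tend to spread across inputs with different preferred phases, which is what makes the pooled response phase-invariant.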


2020
Author(s): Ali Almasi, Hamish Meffin, Shaun L. Cloherty, Yan Wong, Molis Yunzab, ...

Abstract: Visual object identification requires both selectivity for the specific visual features that are important to an object's identity and invariance to manipulations of those features. For example, a hand can be shifted in position, rotated, or contracted but still be recognised as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex: both show selectivity for edge orientation, but complex cells develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we show that the balance between selectivity and invariance in complex cells is more diverse than previously thought. Phase invariance is frequently partial, thus retaining sensitivity to brightness polarity, while invariance to orientation and spatial frequency is more extensive than expected. The invariance arises from two independent factors: (1) the structure and number of filters and (2) the form of the nonlinearities that act on the filter outputs. Both vary more than previously considered, so the primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.
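
The following sketch illustrates the generic filter-plus-nonlinearity view of a receptive field that such data-driven models recover: the cell's rate is an output nonlinearity applied to the summed outputs of several spatial filters, each passed through its own input nonlinearity. The Gabor filters, the nonlinearity shapes, and the stimulus below are illustrative assumptions, not the quantities estimated in the paper.

```python
# Minimal sketch of a filter-and-nonlinearity (LN-LN-style) receptive-field
# model: rate = output_nl( sum_k input_nl( <filter_k, stimulus> ) ).
# Filters, nonlinearities, and stimulus are illustrative assumptions.
import numpy as np

def gabor(size, theta, phase, freq=0.15, sigma=4.0):
    """Toy Gabor patch used as a stand-in for an estimated spatial filter."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr + phase)
    return g / np.linalg.norm(g)

def model_rate(stimulus, filters, input_nl, output_nl):
    """Rate of one model cell for one stimulus frame."""
    drive = sum(input_nl(np.vdot(f, stimulus)) for f in filters)
    return output_nl(drive)

size = 21
quad_pair = [gabor(size, 0.0, 0.0), gabor(size, 0.0, np.pi / 2)]  # quadrature-phase pair
single    = [gabor(size, 0.0, 0.0)]                               # one filter
square    = lambda v: v ** 2
half_rect = lambda v: np.maximum(v, 0.0)
identity  = lambda v: v

stim = gabor(size, 0.0, 1.0)   # a grating-patch-like stimulus at an arbitrary phase
print(model_rate(stim, quad_pair, square, identity))   # energy-model-like (complex-cell-like)
print(model_rate(stim, single, half_rect, identity))   # rectified linear (simple-cell-like)
```

With two quadrature-phase filters and squaring nonlinearities the model behaves like a classical energy-model complex cell; with a single half-rectified filter it behaves like a simple cell. Intermediate filter counts and nonlinearity shapes give the partial invariances the abstract describes.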


2020, Vol. 30 (9), pp. 5067-5087
Author(s): Ali Almasi, Hamish Meffin, Shaun L. Cloherty, Yan Wong, Molis Yunzab, ...

Abstract: Visual object identification requires both selectivity for the specific visual features that are important to an object's identity and invariance to manipulations of those features. For example, a hand can be shifted in position, rotated, or contracted but still be recognized as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex: both show selectivity for edge orientation, but complex cells develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we quantitatively describe the balance between selectivity and invariance in complex cells. Phase invariance is frequently partial, while invariance to orientation and spatial frequency is more extensive than expected. The invariance arises from two independent factors: (1) the structure and number of filters and (2) the form of the nonlinearities that act on the filter outputs. Both vary more than previously considered, so the primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.
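
To make the two factors concrete, the sketch below computes a simple phase-modulation index (response range divided by mean response over grating phase) while varying the number of filters and the form of the nonlinearity: larger values mean stronger phase sensitivity, values near zero mean nearly complete phase invariance. The index, the filters, and the nonlinearities are illustrative assumptions, not the measures used in the paper.

```python
# Minimal sketch: phase-modulation index of a filter-plus-nonlinearity model
# as the number of filters and the nonlinearity are varied. All components
# here are illustrative assumptions.
import numpy as np

def gabor(size, phase, freq=0.15, sigma=4.0):
    """Toy vertical Gabor patch at a given spatial phase."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xx + phase)
    return g / np.linalg.norm(g)

def phase_modulation(filters, nonlinearity, size=21):
    """(max - min) / mean of the model response over grating-patch phase."""
    phases = np.linspace(0, 2 * np.pi, 32, endpoint=False)
    rates = np.array([sum(nonlinearity(np.vdot(f, gabor(size, p))) for f in filters)
                      for p in phases])
    return (rates.max() - rates.min()) / (rates.mean() + 1e-12)

size = 21
one_filter = [gabor(size, 0.0)]
quad_pair  = [gabor(size, 0.0), gabor(size, np.pi / 2)]
half_sq = lambda v: np.maximum(v, 0.0) ** 2
square  = lambda v: v ** 2

print(phase_modulation(one_filter, half_sq))  # strongly phase-modulated (simple-cell-like)
print(phase_modulation(quad_pair, half_sq))   # partially phase-invariant
print(phase_modulation(quad_pair, square))    # nearly phase-invariant (energy-model-like)
```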

