MAGNOCELLULAR PATHWAY FOR ROTATION INVARIANT NEOCOGNITRON

1993 ◽  
Vol 04 (01) ◽  
pp. 43-54 ◽  
Author(s):  
CHRISTOPHER HIAN-ANN TING

In the mammalian visual system, the magnocellular and parvocellular pathways cooperatively process visual information in parallel. The magnocellular pathway is more global and less concerned with detail, while the parvocellular pathway recognizes objects based on local features. In many respects, the Neocognitron may be regarded as an artificial analogue of the parvocellular pathway. It is therefore interesting to model the magnocellular pathway as well. To achieve rotation invariance for the Neocognitron, we propose a neural network modelled after the magnocellular pathway and expand its role to include surmising the orientation of the input pattern prior to recognition. With the incorporation of the magnocellular pathway, a basic shift in the original paradigm takes place: a pattern is now said to be recognized if and only if one of the winners of the magnocellular pathway is validated by the parvocellular pathway. We have implemented the magnocellular pathway coupled with the Neocognitron in parallel on transputers; our simulation programme is now able to recognize numerals in arbitrary orientation.
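The two-stage scheme the abstract describes — first estimating the input pattern's orientation, then recognizing the orientation-normalized pattern — can be sketched as below. This is a minimal illustration only, not the authors' transputer implementation: the image-moment orientation estimate and the nearest-prototype matcher are assumptions standing in for the magnocellular and parvocellular stages.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_orientation(img):
    """Estimate the dominant orientation of a binary pattern (degrees)
    from its second-order image moments (principal-axis angle)."""
    ys, xs = np.nonzero(img > 0.5)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20, mu02, mu11 = (x * x).sum(), (y * y).sum(), (x * y).sum()
    return 0.5 * np.degrees(np.arctan2(2.0 * mu11, mu20 - mu02))

def recognize(img, prototypes):
    """Rotate the pattern to a canonical orientation, then match it
    against upright prototypes by smallest pixel-wise distance."""
    angle = estimate_orientation(img)
    upright = rotate(img, -angle, reshape=False, order=1)
    scores = {label: np.abs(upright - p).sum() for label, p in prototypes.items()}
    return min(scores, key=scores.get)

# Hypothetical upright prototypes: a horizontal bar and a square blob.
bar = np.zeros((21, 21)); bar[8:13, 3:18] = 1.0
blob = np.zeros((21, 21)); blob[6:15, 6:15] = 1.0
tilted = rotate(bar, 90, reshape=False, order=1)  # the bar, presented rotated
print(recognize(tilted, {"bar": bar, "blob": blob}))  # → bar
```

The moment-based estimate has a 180-degree ambiguity, which is fine for this toy example; a fuller model would test both candidate orientations and let the parvocellular stage validate the winner, as the paradigm above requires.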

Author(s):  
Mark Edwards ◽  
Stephanie C. Goodhew ◽  
David R. Badcock

Abstract: The visual system uses parallel pathways to process information. However, an ongoing debate centers on the extent to which the pathways from the retina, via the lateral geniculate nucleus, to the visual cortex process distinct aspects of the visual scene and, if they do, whether laboratory stimuli can be used to selectively drive them. These questions are important for a number of reasons, including that some pathologies are thought to be associated with impaired functioning of one of these pathways, and that certain cognitive functions have been preferentially linked to specific pathways. Here we examine the two main pathways that have been the focus of this debate: the magnocellular and parvocellular pathways. Specifically, we review the results of electrophysiological and lesion studies that have investigated their properties, and conclude that while there is substantial overlap in the type of information they process, it is possible to identify aspects of visual information that are predominantly processed by either the magnocellular or the parvocellular pathway. We then discuss the types of visual stimuli that can be used to preferentially drive these pathways.


Author(s):  
Luis F. de Mingo ◽  
Nuria Gómez ◽  
Fernando Arroyo ◽  
Juan Castellanos

This article presents a neural network model that makes it possible to build a conceptual hierarchy for approximating functions over a given interval. Bio-inspired axo-axonic connections are used: in these connections, the weight of the signal between two neurons is computed from the output of a third neuron. Such an architecture can generate polynomial expressions with linear activation functions, so the network can approximate any pattern set with a polynomial equation. The neural system classifies an input pattern as an element belonging to one of the system's categories, until an exhaustive classification is obtained. The proposed model is not a hierarchy of neural networks; rather, it establishes relationships among all the different neural networks in order to propagate the activation. Each neural network is in charge of recognizing the input pattern for a prototyped category, and also of transmitting the activation to other neural networks so that the approximation can continue.
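The axo-axonic idea — one neuron's output setting the weight of the connection between two other neurons — can be sketched as a multiplicative unit. This is a minimal illustration under assumed names, not the paper's architecture; it only shows why linear activations plus axo-axonic gating yield polynomial terms.

```python
import numpy as np

def axo_axonic_layer(x, W, M, b):
    """One layer with axo-axonic connections: the effective weight on
    input j is W[i, j] plus a modulation M[i, j, k] * x[k] driven by the
    output of a third neuron k.  With linear activations this produces
    products x[j] * x[k], i.e. second-degree polynomial features."""
    effective_W = W + np.einsum("ijk,k->ij", M, x)  # weights depend on x
    return effective_W @ x + b

# A single unit wired so its output is exactly x0*x1 + 2*x0:
W = np.array([[2.0, 0.0]])       # ordinary synapse: 2 * x0
M = np.zeros((1, 2, 2))
M[0, 0, 1] = 1.0                 # the synapse (unit <- x0) is gated by x1
b = np.zeros(1)

x = np.array([3.0, 4.0])
print(axo_axonic_layer(x, W, M, b))  # x0*x1 + 2*x0 = 12 + 6 = [18.]
```

Stacking such layers raises the polynomial degree by one per layer, which is the mechanism behind the polynomial approximation the abstract describes.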


1996 ◽  
Vol 9 (8) ◽  
pp. 1417-1427 ◽  
Author(s):  
MASAYUKI KIKUCHI ◽  
KUNIHIKO FUKUSHIMA

2016 ◽  
Vol 27 (1) ◽  
pp. 29-51
Author(s):  
Juan M. Galeazzi ◽  
Joaquín Navajas ◽  
Bedeho M. W. Mender ◽  
Rodrigo Quian Quiroga ◽  
Loredana Minini ◽  
...  

Author(s):  
Shin Kobayashi ◽  
Shigeru Okabayashi ◽  
Isao Horiba ◽  
Noboru Sugie ◽  
Hiroaki Kudo ◽  
...  

2021 ◽  
Author(s):  
Jianghong Shi ◽  
Bryan Tripp ◽  
Eric Shea-Brown ◽  
Stefan Mihalas ◽  
Michael Buice

Convolutional neural networks trained on object recognition derive inspiration from the neural architecture of the primate visual system, and have been used as models of the feedforward computation performed in the primate ventral stream. In contrast to the deep hierarchical organization of primates, the visual system of the mouse has a shallower arrangement. Since mice and primates are both capable of visually guided behavior, this raises questions about the role of architecture in neural computation. In this work, we introduce a novel framework for building a biologically constrained convolutional neural network model of the mouse visual cortex. The architecture and structural parameters of the network are derived from experimental measurements, specifically the 100-micrometer-resolution interareal connectome, estimates of the number of neurons in each area and cortical layer, and the statistics of connections between cortical layers. This network is constructed to support detailed task-optimized models of mouse visual cortex, with neural populations that can be compared to specific corresponding populations in the mouse brain. Using a well-studied image classification task as our working example, we demonstrate the computational capability of this mouse-sized network. Given its relatively small size, MouseNet achieves roughly two-thirds of VGG16's performance level on ImageNet. In combination with the large-scale Allen Brain Observatory Visual Coding dataset, we use representational similarity analysis to quantify the extent to which MouseNet recapitulates the neural representation in mouse visual cortex. Importantly, we provide evidence that optimizing for task performance does not improve similarity to the corresponding biological system beyond a certain point. We demonstrate that the distributions of some physiological quantities are closer to the observed distributions in the mouse brain after task training.
We encourage the use of the MouseNet architecture by making the code freely available.
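Representational similarity analysis, used above to compare MouseNet activations with recorded mouse responses, follows a standard recipe that can be sketched as below. This is a generic RSA outline, not the paper's exact pipeline; the random data are placeholders for model activations and neural recordings.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

def rdm(responses):
    """Representational dissimilarity matrix: one minus the pairwise
    Pearson correlation between response patterns to each stimulus.
    `responses` is (n_stimuli, n_units); the condensed upper triangle
    returned by pdist is all RSA needs."""
    return pdist(responses, metric="correlation")

def rsa_score(model_responses, brain_responses):
    """Second-order similarity: Spearman correlation between the
    upper triangles of the two RDMs."""
    return spearmanr(rdm(model_responses), rdm(brain_responses)).correlation

rng = np.random.default_rng(0)
brain = rng.normal(size=(20, 50))          # 20 stimuli x 50 recorded neurons
model = brain @ rng.normal(size=(50, 30))  # a linear read-out of the same code
print(rsa_score(model, brain))             # high (near 1): similarity structure preserved
```

Because the comparison happens at the level of stimulus-by-stimulus dissimilarity structure, it does not require a unit-to-neuron correspondence between model and brain, which is what makes it suitable for comparing populations of different sizes.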

