Spatial scene representations formed by self-organizing learning in a hippocampal extension of the ventral visual system

2008 ◽  
Vol 28 (10) ◽  
pp. 2116-2127 ◽  
Author(s):  
Edmund T. Rolls ◽  
James M. Tromans ◽  
Simon M. Stringer

1989 ◽
Vol 60 (3) ◽  
Author(s):  
K. Nakano ◽  
M. Niizuma ◽  
T. Omori

NeuroImage ◽  
2010 ◽  
Vol 52 (4) ◽  
pp. 1541-1548 ◽  
Author(s):  
Katherine L. Roberts ◽  
Glyn W. Humphreys

1987 ◽  
Vol 55 (5) ◽  
pp. 333-343 ◽  
Author(s):  
H. Frohn ◽  
H. Geiger ◽  
W. Singer

2000 ◽  
Vol 10 (01) ◽  
pp. 59-70 ◽  
Author(s):  
Jonathan A. Marshall ◽
Viswanath Srikanth

Existing neural network models are capable of tracking linear trajectories of moving visual objects. This paper describes an additional neural mechanism, disfacilitation, that enhances the ability of a visual system to track curved trajectories. The added mechanism combines information about an object's trajectory with information about changes in the object's trajectory, to improve the estimates for the object's next probable location. Computational simulations are presented that show how the neural mechanism can learn to track the speed of objects and how the network operates to predict the trajectories of accelerating and decelerating objects.
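The core idea of the abstract — combining an estimate of the object's trajectory with an estimate of how that trajectory is changing to predict the next probable location — can be illustrated with a minimal kinematic sketch. This is not the paper's neural (disfacilitation-based) implementation; the function name and the constant-acceleration model are illustrative assumptions.

```python
import numpy as np

def predict_next(positions, dt=1.0):
    """Predict an object's next probable location by combining its
    current trajectory (velocity) with the change in that trajectory
    (acceleration), estimated from the last three observed positions.
    A toy sketch, not the paper's neural mechanism."""
    p0, p1, p2 = np.asarray(positions, dtype=float)
    v = (p2 - p1) / dt               # current trajectory estimate
    a = (p2 - 2 * p1 + p0) / dt**2   # change in trajectory
    return p2 + v * dt + 0.5 * a * dt**2

# A decelerating object: steps of 4, then 3 along x, so the next
# step is predicted to shrink further.
nxt = predict_next([(0.0, 0.0), (4.0, 0.0), (7.0, 0.0)])  # -> [9.5 0. ]
```

A network model would represent velocity and acceleration implicitly in population activity rather than as explicit vectors, but the predictive combination of the two signals is the same.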


Robotica ◽  
1999 ◽  
Vol 17 (2) ◽  
pp. 219-227 ◽
Author(s):  
H. Zenkouar ◽  
A. Nachit

Image compression is essential for applications such as database transmission and storage. In this paper, we propose a new scheme for image compression that combines recursive wavelet transforms with vector quantization. The method is based on Kohonen Self-Organizing Maps (SOM), which take into account features of the visual system in both the space and frequency domains.
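The vector-quantization stage described above can be sketched with a 1-D Kohonen SOM whose trained units serve as the codebook: each image block is encoded as the index of its best-matching unit. This is a minimal illustration of SOM-based VQ only; the recursive wavelet stage is omitted, and all function names and parameter values are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(blocks, n_codes=16, epochs=20, lr0=0.5, sigma0=4.0):
    """Train a 1-D Kohonen SOM on flattened image blocks. The unit
    weight vectors become a vector-quantization codebook."""
    codes = rng.normal(size=(n_codes, blocks.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in blocks:
            bmu = np.argmin(((codes - x) ** 2).sum(axis=1))  # best-matching unit
            d = np.abs(np.arange(n_codes) - bmu)             # distance on the 1-D grid
            h = np.exp(-d ** 2 / (2 * sigma ** 2))           # neighbourhood function
            codes += lr * h[:, None] * (x - codes)           # pull units toward x
    return codes

def quantize(blocks, codes):
    """Encode each block as the index of its nearest codeword."""
    return np.argmin(((blocks[:, None, :] - codes[None]) ** 2).sum(-1), axis=1)

# Toy data: 4x4 "image blocks" drawn from two well-separated clusters.
blocks = np.vstack([rng.normal(0.0, 0.1, (50, 16)),
                    rng.normal(1.0, 0.1, (50, 16))])
codes = train_som(blocks)
idx = quantize(blocks, codes)   # compressed representation: one index per block
```

Compression comes from storing only the codebook plus one small index per block; decompression replaces each index with its codeword.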


2015 ◽  
Vol 141 ◽  
pp. 28-34 ◽  
Author(s):  
Ce Mo ◽  
Mengxia Yu ◽  
Carol Seger ◽  
Lei Mo

Author(s):  
Akihiro Eguchi ◽  
Bedeho M. W. Mender ◽  
Benjamin D. Evans ◽  
Glyn W. Humphreys ◽  
Simon M. Stringer

2020 ◽  
pp. 176-191 ◽
Author(s):  
Edmund T. Rolls

The dorsal visual system computes information about where objects are in space, and about their motion, and this is used for actions performed in space. This requires coordinate transforms from retinal coordinates to head-based coordinates, and then, in parietal cortex areas, to coordinates suitable for reaching into space and to allocentric, world-based spatial coordinates. Recent approaches to how these transforms are performed, with analogies to transform-invariance learning in the ventral visual system, are described.
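The chain of coordinate transforms described above can be made concrete with a toy geometric sketch: a retinal location shifted by eye position gives a head-centred location, which a rotation by head orientation maps toward body-centred reach coordinates. Real cortical circuits implement these transforms with gain-modulated neural populations, not explicit vector arithmetic; the function names and the small-angle approximation here are illustrative assumptions.

```python
import numpy as np

def retinal_to_head(retinal_xy, eye_xy):
    """Shift a retinal-coordinate location by the current eye position
    to obtain a head-centred location (small-angle approximation)."""
    return np.asarray(retinal_xy, float) + np.asarray(eye_xy, float)

def head_to_body(head_xy, head_orientation_deg):
    """Rotate a head-centred location by the head's orientation on the
    body to approximate a body-centred (reach-relevant) location."""
    th = np.deg2rad(head_orientation_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R @ np.asarray(head_xy, float)

# An object 5 deg right on the retina while the eyes look 10 deg right
# is 15 deg right of the head's midline:
head = retinal_to_head([5.0, 0.0], [10.0, 0.0])   # -> [15.  0.]
body = head_to_body(head, 0.0)                    # head aligned with body
```

In the gain-field account, the same mapping emerges because neurons' retinal responses are multiplicatively modulated by eye (or head) position, so a downstream population can read out the summed coordinate without any unit computing it explicitly.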

