Receptive Field Formation in Natural Scene Environments: Comparison of Single-Cell Learning Rules

1998
Vol 10 (7)
pp. 1797-1813
Author(s):  
Brian S. Blais ◽  
N. Intrator ◽  
H. Shouval ◽  
Leon N. Cooper

We study several statistically and biologically motivated learning rules using the same visual environment, one made up of natural scenes, and the same single-cell neuronal architecture. This allows us to concentrate on the feature extraction and neuronal coding properties of these rules. Included in these rules are kurtosis and skewness maximization, the quadratic form of the Bienenstock-Cooper-Munro (BCM) learning rule, and single-cell independent component analysis. Using a structure removal method, we demonstrate that receptive fields developed using these rules depend on a small portion of the distribution. We find that the quadratic form of the BCM rule behaves in a manner similar to a kurtosis maximization rule when the distribution contains kurtotic directions, although the BCM modification equations are computationally simpler.
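
The two rules compared most directly here can be sketched in a few lines. Below is a minimal, illustrative implementation of the quadratic BCM modification (with a sliding threshold tracking E[y²]) and of gradient ascent on projection kurtosis, run on synthetic inputs containing one heavy-tailed direction; the learning rates, toy data, and stopping criteria are assumptions, not the authors' simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "environment": zero-mean inputs with a single kurtotic (heavy-tailed) direction.
n_samples, n_dims = 20000, 10
X = rng.normal(size=(n_samples, n_dims))
X[:, 0] = rng.laplace(size=n_samples)          # the direction both rules should find

def bcm_quadratic(X, eta=1e-4, tau=500.0, n_epochs=5):
    """Quadratic BCM: dw = eta * y * (y - theta) * x, with theta tracking E[y^2]."""
    w = rng.normal(scale=0.1, size=X.shape[1])
    theta = 1.0
    for _ in range(n_epochs):
        for x in X:
            y = w @ x
            w += eta * y * (y - theta) * x
            theta += (y ** 2 - theta) / tau    # sliding modification threshold
    return w / np.linalg.norm(w)

def kurtosis_ascent(X, eta=0.05, n_steps=200):
    """Projected gradient ascent on E[y^4] - 3 E[y^2]^2, with w kept at unit length."""
    w = rng.normal(scale=0.1, size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_steps):
        y = X @ w
        grad = 4 * (X * (y ** 3)[:, None]).mean(axis=0) \
             - 12 * (y ** 2).mean() * (X * y[:, None]).mean(axis=0)
        w += eta * grad
        w /= np.linalg.norm(w)
    return w

w_bcm = bcm_quadratic(X)
w_kurt = kurtosis_ascent(X)
print("BCM weight on the kurtotic direction:      ", round(abs(w_bcm[0]), 2))
print("Kurtosis-rule weight on the same direction:", round(abs(w_kurt[0]), 2))
```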

2001
Vol 13 (5)
pp. 1023-1043
Author(s):  
Chris J. S. Webber

This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
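
The premise that natural-image statistics are approximately invariant under translation, rotation, and scaling can be probed directly by comparing simple patch statistics of an image with those of rotated and rescaled copies. The sketch below (using scipy.ndimage) shows one way to set up such a check; the chosen statistics, patch size, and the synthetic stand-in image are all illustrative assumptions, and a real calibrated natural scene should be substituted for a meaningful test.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

def patch_stats(img, patch=16, n_patches=2000):
    """Mean variance and mean excess kurtosis of randomly sampled image patches."""
    h, w = img.shape
    variances, kurtoses = [], []
    for _ in range(n_patches):
        r = rng.integers(0, h - patch)
        c = rng.integers(0, w - patch)
        p = img[r:r + patch, c:c + patch].ravel()
        p = p - p.mean()
        v = p.var()
        if v > 0:
            variances.append(v)
            kurtoses.append((p ** 4).mean() / v ** 2 - 3)
    return float(np.mean(variances)), float(np.mean(kurtoses))

# Stand-in "image": replace with a real natural scene for a meaningful comparison.
img = ndimage.gaussian_filter(rng.normal(size=(512, 512)), sigma=3)

for label, version in [
    ("original      ", img),
    ("rotated 45 deg", ndimage.rotate(img, 45, reshape=False, mode="reflect")),
    ("scaled x1.5   ", ndimage.zoom(img, 1.5, mode="reflect")),
]:
    var, kurt = patch_stats(version)
    print(f"{label}: patch variance {var:.4f}, excess kurtosis {kurt:.2f}")
```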


2000
Vol 12 (5)
pp. 1057-1066
Author(s):  
Brian Blais ◽  
Leon N. Cooper ◽  
Harel Shouval

Most simple and complex cells in the cat striate cortex are both orientation and direction selective. In this article we use single-cell learning rules to develop both orientation and direction selectivity in a natural scene environment. We show that a simple principal component analysis rule is inadequate for developing direction selectivity, but that the BCM rule, as well as similar higher-order rules, can develop it. We also demonstrate that the convergence of lagged and nonlagged cells depends on the velocity of motion in the environment, and that strobe rearing disrupts this convergence, resulting in a loss of direction selectivity.
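
For reference, the "simple principal component analysis rule" contrasted with BCM here is typically Oja's single-unit rule, which converges to the input's first principal component; a minimal sketch on toy data (the data and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy zero-mean inputs whose leading principal component lies along the first axis.
X = rng.normal(size=(10000, 8))
X[:, 0] *= 3.0

def oja_rule(X, eta=1e-3, n_epochs=3):
    """Oja's single-unit PCA rule: dw = eta * y * (x - y * w)."""
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(n_epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)
    return w

w = oja_rule(X)
print("converged weight (close to the first principal component):",
      np.round(w / np.linalg.norm(w), 2))
```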


2009
Vol 26 (1)
pp. 35-49
Author(s):  
THORSTEN HANSEN ◽  
KARL R. GEGENFURTNER

Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images from natural scenes. We found that isoluminant edges exist in natural scenes and were not rarer than pure luminance edges. Most edges combined luminance and chromatic information, but to varying degrees, such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to the natural scene statistics to gain access to this independent information.
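
A rough version of this joint edge statistic can be computed with standard tools: derive a luminance map and a red-green opponent map from an RGB image, take gradient-magnitude edge maps of each, and measure how weakly the two co-vary. The sketch below uses Sobel gradients and a rank correlation; the opponent-channel definitions, the dependence measure, and the random stand-in image are illustrative assumptions rather than the authors' calibrated pipeline.

```python
import numpy as np
from scipy import ndimage, stats

def edge_magnitude(channel):
    """Gradient magnitude from Sobel filters."""
    gx = ndimage.sobel(channel, axis=1)
    gy = ndimage.sobel(channel, axis=0)
    return np.hypot(gx, gy)

def luminance_chromatic_edges(rgb):
    """Return (luminance edge map, red-green opponent edge map) for a float RGB image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g) / 2.0          # crude luminance proxy
    rg = r - g                   # crude red-green opponent proxy
    return edge_magnitude(lum), edge_magnitude(rg)

# Stand-in image: replace with calibrated natural scenes for a meaningful measurement.
rng = np.random.default_rng(3)
rgb = ndimage.gaussian_filter(rng.random((256, 256, 3)), sigma=(2, 2, 0))

lum_e, chrom_e = luminance_chromatic_edges(rgb)
rho, p = stats.spearmanr(lum_e.ravel(), chrom_e.ravel())
print(f"rank correlation between luminance and chromatic edge strength: {rho:.3f} (p = {p:.2g})")
```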


1995
Vol 7 (3)
pp. 507-517
Author(s):  
Marco Idiart ◽  
Barry Berk ◽  
L. F. Abbott

Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
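
A minimal sketch of the restricted-receptive-field architecture described here: each unit sees only a contiguous window of the input vector, learns with a plain Oja-style Hebbian update within that window, and the input is then reconstructed from the units' outputs. The window size, number of units, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n_dims, n_units, field = 32, 8, 8            # input size, units, receptive-field width
X = rng.normal(size=(5000, n_dims))

# Each unit's receptive field covers a contiguous window of the input.
starts = np.linspace(0, n_dims - field, n_units).astype(int)
W = rng.normal(scale=0.1, size=(n_units, field))

eta = 1e-3
for x in X:
    for i, s in enumerate(starts):
        xi = x[s:s + field]
        y = W[i] @ xi
        W[i] += eta * y * (xi - y * W[i])     # Oja-style Hebbian update within the field

def reconstruct(x):
    """Reconstruct an input from the reduced representation y_i = w_i . x_i."""
    x_hat = np.zeros_like(x)
    counts = np.zeros_like(x)
    for i, s in enumerate(starts):
        y = W[i] @ x[s:s + field]
        x_hat[s:s + field] += y * W[i]
        counts[s:s + field] += 1
    return x_hat / np.maximum(counts, 1)

x = X[0]
err = np.linalg.norm(x - reconstruct(x)) / np.linalg.norm(x)
print(f"relative reconstruction error from {n_units} units: {err:.2f}")
```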


Perception
1996
Vol 25 (1_suppl)
pp. 67-67
Author(s):  
H Hill ◽  
R Watt

The first task of any face processing system is detection of the face. We studied how the human visual system achieves face detection using a 2AFC task in which subjects were required to detect a face in the image of a natural scene. Luminance noise was added to the stimuli and performance was measured as a function of the orientation and orientation bandwidth of the noise. Sensitivity levels and the effects of orientation bandwidth were similar for horizontally and vertically oriented noise. Performance was reduced even by noise of the smallest orientation bandwidth (5.6°), but sensitivity did not decrease further with increasing bandwidth until a point between 45° and 90°. The results suggest that important information may be oriented close to the vertical and horizontal. To test whether the results were specific to the task of face detection, the same noise was added to the images in a man-made versus natural scene decision task. Equivalent levels of noise were found to be more disruptive and the effect of orientation bandwidth was different. The results are discussed in terms of models of face processing making use of oriented filters (e.g., Watt and Dakin, 1993, Perception 22 Supplement, 12) and local energy models of feature detection (Morrone and Burr, 1988, Proceedings of the Royal Society of London B 235, 221-245).
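
Orientation-band-limited luminance noise of the kind used as a mask in this study can be generated by filtering white noise in the Fourier domain; the sketch below keeps only a wedge of spatial-frequency orientations of a chosen bandwidth. The parameters are illustrative, and note that image structure runs perpendicular to the passed frequency directions.

```python
import numpy as np

def oriented_noise(size=256, orientation_deg=0.0, bandwidth_deg=45.0, seed=0):
    """White luminance noise band-limited to a wedge of spatial-frequency orientations
    of width `bandwidth_deg` centred on `orientation_deg`."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(size, size))
    F = np.fft.fftshift(np.fft.fft2(noise))

    freqs = np.fft.fftshift(np.fft.fftfreq(size))
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    angle = np.degrees(np.arctan2(fy, fx)) % 180.0
    target = orientation_deg % 180.0
    # Angular distance on the 180-degree orientation circle.
    d = np.minimum(np.abs(angle - target), 180.0 - np.abs(angle - target))
    mask = d <= bandwidth_deg / 2.0
    mask[size // 2, size // 2] = True              # keep the DC component

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    return filtered / filtered.std()

narrow = oriented_noise(orientation_deg=90.0, bandwidth_deg=5.6)   # narrowest band used
broad = oriented_noise(orientation_deg=90.0, bandwidth_deg=90.0)
print(narrow.shape, round(float(narrow.std()), 2), round(float(broad.std()), 2))
```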


2004
Vol 14 (01)
pp. 1-8
Author(s):  
RALF MÖLLER

The paper reviews single-neuron learning rules for minor component analysis and suggests a novel minor component learning rule. In this rule, the weight vector length is self-stabilizing, i.e., it moves towards unit length in each learning step. In simulations with low- and medium-dimensional data, the performance of the novel learning rule is compared with that of previously suggested rules.
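
For context, a generic minor-component rule (not the self-stabilizing rule proposed in the paper) can be written as an anti-Hebbian counterpart of Oja's rule with explicit renormalization of the weight vector; a minimal sketch on toy data, with all settings illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data whose minor component (smallest-variance direction) is the last axis.
X = rng.normal(size=(20000, 6)) * np.array([3.0, 2.5, 2.0, 1.5, 1.0, 0.2])

def mca_anti_oja(X, eta=1e-3, n_epochs=5):
    """Anti-Hebbian Oja update dw = -eta * y * (x - y * w), renormalized each step.
    (A generic baseline; the paper's rule keeps ||w|| near 1 without renormalization.)"""
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x
            w -= eta * y * (x - y * w)
            w /= np.linalg.norm(w)
    return w

w = mca_anti_oja(X)
print("weight magnitudes (should align with the smallest-variance axis):",
      np.round(np.abs(w), 2))
```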


2012
Vol 2012
pp. 1-12
Author(s):  
Jean Duchesne ◽  
Vincent Bouvier ◽  
Julien Guillemé ◽  
Olivier A. Coubard

When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixational eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, being lower in experts than in novices. In Experiment 2, two participants underwent fixed-time free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell’s law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or bottom-up processes.
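
The Maxwell fit described here can be reproduced with scipy.stats.maxwell: fit the distribution to the measured amplitudes and test the goodness of fit. In the sketch below, synthetic amplitudes stand in for recorded fixational eye-movement data.

```python
from scipy import stats

# Stand-in data: amplitudes drawn from a Maxwell distribution (replace with
# measured fixational eye-movement amplitudes, e.g., in arcmin).
amplitudes = stats.maxwell.rvs(scale=2.0, size=5000, random_state=42)

# Fit the Maxwell PDF, with the location fixed at 0 since amplitudes are non-negative.
loc, scale = stats.maxwell.fit(amplitudes, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
ks_stat, p_value = stats.kstest(amplitudes, "maxwell", args=(loc, scale))

print(f"fitted scale parameter: {scale:.2f}")
print(f"KS statistic: {ks_stat:.3f}, p = {p_value:.2f}")
print(f"mean amplitude under the fit: {stats.maxwell.mean(loc=loc, scale=scale):.2f}")
```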


2016
Vol 113 (5)
pp. 1441-1446
Author(s):  
Andrei S. Kozlov ◽  
Timothy Q. Gentner

High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, the European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in central olfactory neurons of mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
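
A minimal sketch of the kind of learning rule described, combining a sparseness constraint with divisive normalization of the responses in a single Hebbian-style update; the architecture, parameters, and random stand-in inputs are illustrative assumptions, not the authors' network trained on starling song spectrograms.

```python
import numpy as np

rng = np.random.default_rng(7)

n_inputs, n_units, n_samples = 64, 32, 5000
X = rng.normal(size=(n_samples, n_inputs))           # stand-in for spectrogram patches

W = rng.normal(scale=0.1, size=(n_units, n_inputs))
eta, sigma, sparsity = 1e-2, 1.0, 0.1

for x in X:
    a = W @ x                                        # linear responses
    a = a / (sigma ** 2 + np.sum(a ** 2))            # divisive normalization across units
    cutoff = np.quantile(np.abs(a), 1.0 - sparsity)
    a = np.where(np.abs(a) >= cutoff, a, 0.0)        # sparseness: keep the strongest responses
    W += eta * np.outer(a, x - W.T @ a)              # Hebbian update with feedback subtraction
    W /= np.linalg.norm(W, axis=1, keepdims=True)    # keep each filter at unit norm

print("learned filter bank shape:", W.shape)
```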

