A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields

2011 ◽  
Vol 7 (10) ◽  
pp. e1002250 ◽  
Author(s):  
Joel Zylberberg ◽  
Jason Timothy Murphy ◽  
Michael Robert DeWeese


2016 ◽  
Author(s):  
Damien Drix

Sparse coding is an effective operating principle for the brain, one that can guide the discovery of features and support the learning of associations. Here we show how spiking neurons with discrete dendrites can learn sparse codes via an online, nonlinear Hebbian rule based on the concept of somato-dendritic mismatch. The rule gives lateral inhibition direct control over the selectivity of dendritic receptive fields, without the need for a sliding threshold. The network discovers independent components that are similar to the features learned by a sparse autoencoder. This improves the linear decodability of the input: combined with a linear readout, our single-layer network performs as well as a deeper multi-layer perceptron on the MNIST dataset. It can also produce topographic feature maps when the lateral connections are organised in a center-surround pattern, although this does not improve the quality of the encoding.
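A minimal rate-based sketch of this class of model is given below: units compete through plastic lateral inhibition and update their feed-forward weights with an Oja-style Hebbian rule, which is enough to produce a sparse, decorrelated code on toy inputs. This illustrates the general principle only, not the authors' spiking somato-dendritic network; all parameter values and the toy input distribution are assumptions.

```python
# Minimal rate-based sketch of Hebbian sparse coding with lateral inhibition.
# Generic illustration only, not the paper's spiking somato-dendritic model;
# parameter values and the toy input distribution are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units = 16, 8

W = rng.normal(scale=0.1, size=(n_units, n_inputs))   # feed-forward weights
L = np.zeros((n_units, n_units))                      # lateral inhibitory weights
eta_w, eta_l = 0.02, 0.01                             # learning rates (arbitrary)

def settle(x, n_iter=20):
    """Recurrent settling: feed-forward drive minus lateral inhibition, rectified."""
    y = np.zeros(n_units)
    for _ in range(n_iter):
        y = np.maximum(0.0, W @ x - L @ y)
    return y

for t in range(5000):
    x = rng.normal(size=n_inputs)     # toy input; real models use image patches
    y = settle(x)

    # Oja-style Hebbian update keeps each feed-forward weight vector bounded.
    W += eta_w * (np.outer(y, x) - (y ** 2)[:, None] * W)

    # Anti-Hebbian update of the lateral weights penalises co-activation,
    # decorrelating the units and pushing the code toward sparseness.
    yy = np.outer(y, y)
    L += eta_l * (yy - np.diag(np.diag(yy)))
    np.fill_diagonal(L, 0.0)
    L = np.maximum(L, 0.0)            # inhibition stays non-negative

# After training, a typical input should drive only a few units strongly.
y = settle(rng.normal(size=n_inputs))
print("active units:", int((y > 0.1).sum()), "of", n_units)
```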


2000 ◽  
Vol 12 (7) ◽  
pp. 1705-1720 ◽  
Author(s):  
Aapo Hyvärinen ◽  
Patrik Hoyer

Olshausen and Field (1996) applied the principle of independence maximization by sparse coding to extract features from natural images. This leads to the emergence of oriented linear filters that have simultaneous localization in space and in frequency, thus resembling Gabor functions and simple cell receptive fields. In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such “independent feature subspaces” then indicate the values of invariant features.
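A rough illustration of the subspace idea (not the authors' exact estimation procedure) is sketched below: linear filters are grouped into fixed two-dimensional subspaces, the invariant feature is the norm of the projection onto each subspace, and a gradient step pushes those norms toward a sparse, heavy-tailed distribution. The toy data stands in for whitened natural-image patches.

```python
# Rough sketch of independent subspace analysis: invariant features are the
# norms of projections onto fixed 2-D subspaces of linear filters. Toy data
# stands in for whitened image patches; this is not the paper's exact procedure.
import numpy as np

rng = np.random.default_rng(1)
dim, n_subspaces, sub_dim = 8, 4, 2
n_filters = n_subspaces * sub_dim

Z = rng.laplace(size=(5000, dim))                 # stand-in for whitened patches
W = rng.normal(size=(n_filters, dim))             # rows are linear filters
groups = np.arange(n_filters).reshape(n_subspaces, sub_dim)

def sym_orth(W):
    """Symmetric orthogonalisation W <- (W W^T)^(-1/2) W keeps the filters orthonormal."""
    d, E = np.linalg.eigh(W @ W.T)
    return E @ np.diag(d ** -0.5) @ E.T @ W

W = sym_orth(W)
eta = 0.1
for it in range(100):
    Y = Z @ W.T                                   # all filter outputs, shape (N, n_filters)
    grad = np.zeros_like(W)
    for g in groups:
        e = np.sqrt((Y[:, g] ** 2).sum(axis=1)) + 1e-8   # subspace energy per patch
        # Gradient of the sparseness measure G(e) = -e with respect to each filter
        # in the subspace; ascending it favours heavy-tailed pooled energies.
        for i in g:
            grad[i] = -((Y[:, i] / e)[:, None] * Z).mean(axis=0)
    W = sym_orth(W + eta * grad)

# Invariant features for one patch: norms of the projections onto the subspaces.
y = Z[0] @ W.T
print([round(float(np.linalg.norm(y[g])), 3) for g in groups])
```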


2001 ◽  
Vol 13 (5) ◽  
pp. 1023-1043
Author(s):  
Chris J. S. Webber

This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
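A single-cell rule of the kind discussed here (one unit, random independent initialization, no coupling between cells) can be sketched as follows. The cubic nonlinearity, the normalization step, and the Gaussian toy input are illustrative assumptions rather than the rule analyzed in the article; oriented, localized receptive fields would emerge only when the inputs are whitened natural-image patches.

```python
# Sketch of a single-cell nonlinear Hebbian rule with a random, zero-prior-
# information initialisation. One unit, no coupling to other cells. The cubic
# nonlinearity, normalisation step and Gaussian toy input are assumptions;
# oriented, localised filters emerge only with whitened natural-image patches.
import numpy as np

rng = np.random.default_rng(2)
patch_dim = 64                        # e.g. flattened 8x8 whitened image patches

w = rng.normal(size=patch_dim)        # random independent initialisation
w /= np.linalg.norm(w)

eta = 1e-3
for t in range(20000):
    x = rng.normal(size=patch_dim)    # stand-in for a whitened natural-image patch
    y = w @ x                         # linear response of the cell
    w += eta * x * y ** 3             # nonlinear Hebbian step favours heavy-tailed responses
    w /= np.linalg.norm(w)            # explicit normalisation keeps the rule stable

print("final weight norm:", round(float(np.linalg.norm(w)), 3))
```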


2000 ◽  
Vol 12 (3) ◽  
pp. 565-596 ◽  
Author(s):  
Chris J. S. Webber

Symmetry networks use permutation symmetries among synaptic weights to achieve transformation-invariant response. This article proposes a generic mechanism by which such symmetries can develop during unsupervised adaptation: it is shown analytically that spontaneous symmetry breaking can result in the discovery of unknown invariances of the data's probability distribution. It is proposed that a role of sparse coding is to facilitate the discovery of statistical invariances by this mechanism. It is demonstrated that the statistical dependences that exist between simple-cell-like threshold feature detectors, when exposed to temporally uncorrelated natural image data, can drive the development of complex-cell-like invariances, via single-cell Hebbian adaptation. A single learning rule can generate both simple-cell-like and complex-cell-like receptive fields.
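The second stage of this idea can be made concrete with a toy sketch: the rectified (threshold) outputs of a fixed bank of simple-cell-like Gabor filters feed a single Hebbian unit; detectors tuned to the same orientation at different phases are co-active, so the unit learns to pool them and its response becomes phase-invariant, as a complex cell's would. The Gabor bank, grating stimuli, and Oja-style rule below are illustrative assumptions, not the article's model.

```python
# Toy sketch: one Hebbian unit pooling threshold simple-cell responses becomes
# phase-invariant (complex-cell-like). The Gabor bank, grating stimuli and
# Oja-style rule are illustrative assumptions, not the article's model.
import numpy as np

rng = np.random.default_rng(3)
size = 16
yy, xx = np.mgrid[0:size, 0:size] - size / 2

def gabor(theta, phase, freq=0.3, sigma=3.0):
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    env = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    g = env * np.cos(2 * np.pi * freq * xr + phase)
    return (g / np.linalg.norm(g)).ravel()

thetas = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
phases = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
filters = np.array([gabor(t, p) for t in thetas for p in phases])  # 16 "simple cells"

w = np.abs(rng.normal(scale=0.1, size=len(filters)))  # pooling weights of the unit
eta, theta_stim = 0.05, 0.0                           # stimulus orientation held fixed

for step in range(5000):
    # Grating with a fixed orientation but random spatial phase.
    phase = rng.uniform(0, 2 * np.pi)
    xr = xx * np.cos(theta_stim) + yy * np.sin(theta_stim)
    img = np.cos(2 * np.pi * 0.3 * xr + phase).ravel()
    img /= np.linalg.norm(img)

    s = np.maximum(0.0, filters @ img)   # threshold (rectified) simple-cell responses
    y = w @ s                            # output of the pooling unit
    w += eta * y * (s - y * w)           # Oja rule pools the co-active detectors
    w = np.maximum(w, 0.0)

# Weights should concentrate on the filters matching theta_stim across all
# phases, i.e. a phase-invariant, complex-cell-like pooling pattern.
print(w.reshape(len(thetas), len(phases)).round(2))
```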


1997 ◽  
Vol 9 (5) ◽  
pp. 959-970 ◽  
Author(s):  
Christian Piepenbrock ◽  
Helge Ritter ◽  
Klaus Obermayer

Correlation-based learning (CBL) has been suggested as the mechanism that underlies the development of simple-cell receptive fields in the primary visual cortex of cats, including orientation preference (OR) and ocular dominance (OD) (Linsker, 1986; Miller, Keller, & Stryker, 1989). CBL has been applied successfully to the development of OR and OD individually (Miller, Keller, & Stryker, 1989; Miller, 1994; Miyashita & Tanaka, 1991; Erwin, Obermayer, & Schulten, 1995), but the conditions for their joint development have not been studied (but see Erwin & Miller, 1995, for independent work on the same question) in contrast to competitive Hebbian models (Obermayer, Blasdel, & Schulten, 1992). In this article, we provide insight into why this has been the case: OR and OD decouple in symmetric CBL models, and a joint development of OR and OD is possible only in a parameter regime that depends on nonlinear mechanisms.
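The decoupling claim can be illustrated with a stripped-down linear CBL model: when the within-eye correlations are the same for both eyes and the between-eye correlations are symmetric, the sum and difference of the left- and right-eye weight vectors evolve under independent linear dynamics, so the OD channel (the difference) and the channel carrying OR-relevant structure (the sum) grow separately. The correlation matrices and learning rates below are toy assumptions.

```python
# Stripped-down linear correlation-based-learning (CBL) model with two eyes,
# illustrating why OR and OD decouple in the symmetric case: the sum S = wL + wR
# and difference D = wL - wR of the weight vectors obey independent dynamics.
# Correlation matrices and learning rates are toy assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 10                                         # afferent inputs per eye

A = rng.normal(size=(n, n))
C_within = A @ A.T / n                         # within-eye correlations (same for both eyes)
C_between = 0.3 * C_within                     # symmetric between-eye correlations

wL = rng.normal(scale=0.01, size=n)            # left-eye weights onto one cortical cell
wR = rng.normal(scale=0.01, size=n)            # right-eye weights
eta = 1e-3

for t in range(2000):
    dL = C_within @ wL + C_between @ wR        # linear CBL growth, left eye
    dR = C_within @ wR + C_between @ wL        # linear CBL growth, right eye
    wL, wR = wL + eta * dL, wR + eta * dR

# Equivalent decoupled picture: S grows under (C_within + C_between),
# D grows under (C_within - C_between), so the two channels never interact.
S, D = wL + wR, wL - wR
print("||S|| (summed / OR-carrying channel):", round(float(np.linalg.norm(S)), 3))
print("||D|| (ocular dominance channel):   ", round(float(np.linalg.norm(D)), 3))
```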


2010 ◽  
Vol 3 (9) ◽  
pp. 22-22
Author(s):  
M. Brandon ◽  
C. H. Anderson ◽  
G. M. DeAngelis
