Manifold-tiling Localized Receptive Fields are Optimal in Similarity-preserving Neural Networks

2018 ◽  
Author(s):  
Anirvan M. Sengupta ◽  
Mariano Tepper ◽  
Cengiz Pehlevan ◽  
Alexander Genkin ◽  
Dmitri B. Chklovskii

Abstract: Many neurons in the brain, such as place cells in the rodent hippocampus, have localized receptive fields, i.e., they respond to a small neighborhood of stimulus space. What is the functional significance of such representations and how can they arise? Here, we propose that localized receptive fields emerge in similarity-preserving networks of rectifying neurons that learn low-dimensional manifolds populated by sensory inputs. Numerical simulations of such networks on standard datasets yield manifold-tiling localized receptive fields. More generally, we show analytically that, for data lying on symmetric manifolds, optimal solutions of objectives, from which similarity-preserving networks are derived, have localized receptive fields. Therefore, nonnegative similarity-preserving mapping (NSM) implemented by neural networks can model representations of continuous manifolds in the brain.
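The similarity-preserving objective the abstract refers to can be illustrated in a few lines. The sketch below is our illustration, not the authors' code: it minimizes the nonnegative similarity-matching objective min over Y >= 0 of ||X^T X - Y^T Y||_F^2 by projected gradient descent, for inputs lying on a ring (a simple 1-D manifold); the specific sizes and step size are arbitrary choices.

```python
import numpy as np

# Toy NSM sketch: inputs on a circle, nonnegative outputs learned by
# projected gradient descent on the Frobenius similarity-matching objective.
rng = np.random.default_rng(0)
T = 100                                        # number of stimuli
theta = np.linspace(0, 2 * np.pi, T, endpoint=False)
X = np.vstack([np.cos(theta), np.sin(theta)])  # 2 x T inputs on a circle
G = X.T @ X / T                                # (scaled) input similarity matrix

k = 10                                         # output neurons
Y = np.abs(rng.normal(size=(k, T))) * 0.01     # small nonnegative initialization
err0 = np.linalg.norm(G - Y.T @ Y)             # initial objective value

eta = 0.05
for _ in range(2000):
    grad = -4 * Y @ (G - Y.T @ Y)              # gradient of ||G - Y^T Y||_F^2
    Y = np.maximum(0.0, Y - eta * grad)        # project onto the constraint Y >= 0

# Each row of Y is one neuron's response curve over the ring; the rectifying
# constraint makes these curves localized, tiling the manifold.
```

In this toy setting the projection step zeroes out each neuron's response outside a contiguous arc of the ring, which is the localized, manifold-tiling behavior the abstract describes.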

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 35-35 ◽  
Author(s):  
M T Wallace

Multisensory integration in the superior colliculus (SC) of the cat requires a protracted postnatal developmental time course. Kittens 3 – 135 days postnatal (dpn) were examined and the first neuron capable of responding to two different sensory inputs (auditory and somatosensory) was not seen until 12 dpn. Visually responsive multisensory neurons were not encountered until 20 dpn. These early multisensory neurons responded weakly to sensory stimuli, had long response latencies, large receptive fields, and poorly developed response selectivities. Most striking, however, was their inability to integrate cross-modality cues in order to produce the significant response enhancement or depression characteristic of these neurons in adults. The incidence of multisensory neurons increased gradually over the next 10 – 12 weeks. During this period, sensory responses became more robust, latencies shortened, receptive fields decreased in size, and unimodal selectivities matured. The first neurons capable of cross-modality integration were seen at 28 dpn. For the following two months, the incidence of such integrative neurons rose gradually until adult-like values were achieved. Surprisingly, however, as soon as a multisensory neuron exhibited this capacity, most of its integrative features were indistinguishable from those in adults. Given what is known about the requirements for multisensory integration in adult animals, this observation suggests that the appearance of multisensory integration reflects the onset of functional corticotectal inputs.


2018 ◽  
Author(s):  
Xiaoyang Long ◽  
Sheng-Jia Zhang

Abstract: Spatially selective firing in the form of place cells, grid cells, boundary vector/border cells and head direction cells constitutes the basic building blocks of a canonical spatial navigation system centered on the hippocampal-entorhinal complex. While head direction cells can be found throughout the brain, spatial tuning outside the hippocampal formation is often non-specific or conjunctive to other representations such as a reward. Although the precise mechanism of spatially selective activity is not understood, various studies show that sensory inputs (particularly vision) heavily modulate spatial representation in the hippocampal-entorhinal circuit. To better understand the contribution of other sensory inputs in shaping spatial representation in the brain, we recorded from the primary somatosensory cortex in foraging rats. To our surprise, we were able to identify the full complement of spatial activity patterns reported in the hippocampal-entorhinal network, namely, place cells, head direction cells, boundary vector/border cells, grid cells and conjunctive cells. These newly identified somatosensory spatial cell types form a spatial map outside the hippocampal formation and support the hypothesis that location information is necessary for body representation in the somatosensory cortex, and may be analogous to spatially tuned representations in the motor cortex relating to the movement of body parts. Our findings are transformative for our understanding of how spatial information is processed and utilized in the brain, as well as of the functional operations of the somatosensory cortex in the context of rehabilitation with brain-machine interfaces.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Yosef Singer ◽  
Yayoi Teramoto ◽  
Ben DB Willmore ◽  
Jan WH Schnupp ◽  
Andrew J King ◽  
...  

Neurons in sensory cortex are tuned to diverse features in natural scenes. But what determines which features neurons become selective to? Here we explore the idea that neuronal selectivity is optimized to represent features in the recent sensory past that best predict immediate future inputs. We tested this hypothesis using simple feedforward neural networks, which were trained to predict the next few moments of video or audio in clips of natural scenes. The networks developed receptive fields that closely matched those of real cortical neurons in different mammalian species, including the oriented spatial tuning of primary visual cortex, the frequency selectivity of primary auditory cortex and, most notably, their temporal tuning properties. Furthermore, the better a network predicted future inputs, the more closely its receptive fields resembled those in the brain. This suggests that sensory processing is optimized to extract those features with the most capacity to predict future input.
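The core idea of temporal prediction can be demonstrated in miniature. The sketch below is ours, not the authors' networks: a linear predictor is trained by gradient descent to forecast the next sample of a temporally correlated signal from its recent past, with the learned weight vector playing the role of a temporal receptive field. The AR(1) signal and all sizes are stand-in choices.

```python
import numpy as np

# Temporal-prediction sketch: learn to predict the immediate future from
# the recent sensory past of a correlated (AR(1)) signal.
rng = np.random.default_rng(1)

T, a = 5000, 0.95
x = np.zeros(T)
for t in range(1, T):                    # AR(1) process as a stand-in for natural input
    x[t] = a * x[t - 1] + rng.normal(scale=0.1)

L = 5                                    # length of the "recent sensory past"
past = np.stack([x[t - L:t] for t in range(L, T)])  # (T-L, L) input windows
target = x[L:]                           # the immediate future sample

w = np.zeros(L)
for _ in range(200):                     # gradient descent on mean squared error
    grad = past.T @ (past @ w - target) / len(target)
    w -= 0.5 * grad
# For an AR(1) source the optimal "receptive field" puts nearly all its
# weight on the most recent sample, and prediction error falls well below
# the variance of the signal itself.
```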


Author(s):  
Xiaoyang Long ◽  
Sheng-Jia Zhang

Abstract: Spatially selective firing of place cells, grid cells, boundary vector/border cells and head direction cells constitutes the basic building blocks of a canonical spatial navigation system centered on the hippocampal-entorhinal complex. While head direction cells can be found throughout the brain, spatial tuning outside the hippocampal formation is often non-specific or conjunctive to other representations such as a reward. Although the precise mechanism of spatially selective firing activity is not understood, various studies show that sensory inputs, particularly vision, heavily modulate spatial representation in the hippocampal-entorhinal circuit. To better understand the contribution of other sensory inputs in shaping spatial representation in the brain, we performed recordings from the primary somatosensory cortex in foraging rats. To our surprise, we were able to detect the full complement of spatially selective firing patterns similar to those reported in the hippocampal-entorhinal network, namely, place cells, head direction cells, boundary vector/border cells, grid cells and conjunctive cells, in the somatosensory cortex. These newly identified somatosensory spatial cells form a spatial map outside the hippocampal formation and support the hypothesis that location information modulates body representation in the somatosensory cortex. Our findings provide transformative insights into our understanding of how spatial information is processed and integrated in the brain, as well as the functional operations of the somatosensory cortex in the context of rehabilitation with brain-machine interfaces.


2018 ◽  
Author(s):  
Jan Drugowitsch ◽  
André G. Mendonça ◽  
Zachary F. Mainen ◽  
Alexandre Pouget

Abstract: Diffusion decision models (DDMs) are immensely successful models for decision-making under uncertainty and time pressure. In the context of perceptual decision making, these models typically start with two input units, organized in a neuron-antineuron pair. In contrast, in the brain, sensory inputs are encoded through the activity of large neuronal populations. Moreover, while DDMs are wired by hand, the nervous system must learn the weights of the network through trial and error. There is currently no normative theory of learning in DDMs and therefore no theory of how decision makers could learn to make optimal decisions in this context. Here, we derive the first such rule for learning a near-optimal linear combination of DDM inputs based on trial-by-trial feedback. The rule is Bayesian in the sense that it learns not only the mean of the weights but also the uncertainty around this mean in the form of a covariance matrix. In this rule, the rate of learning is proportional (resp. inversely proportional) to confidence for incorrect (resp. correct) decisions. Furthermore, we show that, in volatile environments, the rule predicts a bias towards repeating the same choice after correct decisions, with a bias strength that is modulated by the previous choice’s difficulty. Finally, we extend our learning rule to cases for which one of the choices is more likely a priori, which provides new insights into how such biases modulate the mechanisms leading to optimal decisions in diffusion models.
Significance Statement: Popular models for the tradeoff between speed and accuracy of everyday decisions usually assume fixed, low-dimensional sensory inputs. In contrast, in the brain, these inputs are distributed across larger populations of neurons, and their interpretation needs to be learned from feedback. We ask how such learning could occur and demonstrate that efficient learning is significantly modulated by decision confidence.
This modulation predicts a particular dependency pattern between consecutive choices, and provides new insight into how a priori biases for particular choices modulate the mechanisms leading to efficient decisions in these models.
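The learning problem the abstract poses can be sketched without reproducing the authors' Bayesian rule. The code below is only scaffolding of ours: a bounded evidence accumulator reads out a neural population through learned weights, and a plain supervised Hebbian update (a stand-in, not the paper's confidence-modulated rule) improves the readout from trial feedback. All sizes, the drift strength, and the update rule are illustrative assumptions.

```python
import numpy as np

# DDM-over-a-population sketch: accumulate a weighted population readout
# to a bound, choose by its sign, and adapt the weights from feedback.
rng = np.random.default_rng(4)
n = 20                                   # input neurons
u = rng.normal(size=n)
u /= np.linalg.norm(u)                   # true informative direction (unknown to the learner)

w = rng.normal(scale=0.01, size=n)       # readout weights, to be learned
bound, lr, mu = 3.0, 0.002, 0.5
correct = []
for _ in range(1000):
    s = rng.choice([-1.0, 1.0])          # true stimulus category
    dv, samples = 0.0, []
    for t in range(10):                  # accumulate until bound or deadline
        x = s * mu * u + rng.normal(size=n)   # noisy population sample
        samples.append(x)
        dv += w @ x
        if abs(dv) > bound:
            break
    choice = np.sign(dv) if dv != 0 else 1.0
    correct.append(choice == s)
    # Stand-in supervised update: nudge w toward the trial's labeled evidence.
    w += lr * s * np.mean(samples, axis=0)

cosine = w @ u / np.linalg.norm(w)       # alignment with the informative direction
```

With learning, the weights align with the informative population direction and late-trial accuracy rises well above chance; the paper's contribution is replacing the ad-hoc update here with a normative, confidence-modulated Bayesian rule.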


2020 ◽  
Vol 34 (04) ◽  
pp. 6380-6387
Author(s):  
Hanwei Wu ◽  
Markus Flierl

Autoencoders and their variations provide unsupervised models for learning low-dimensional representations for downstream tasks. Without proper regularization, autoencoder models are susceptible to the overfitting problem and the so-called posterior collapse phenomenon. In this paper, we introduce a quantization-based regularizer in the bottleneck stage of autoencoder models to learn meaningful latent representations. We combine both perspectives of Vector Quantized-Variational AutoEncoders (VQ-VAE) and classical denoising regularization methods of neural networks. We interpret quantizers as regularizers that constrain latent representations while fostering a similarity-preserving mapping at the encoder. Before quantization, we impose noise on the latent codes and use a Bayesian estimator to optimize the quantizer-based representation. The introduced bottleneck Bayesian estimator outputs the posterior mean of the centroids to the decoder, and thus, is performing soft quantization of the noisy latent codes. We show that our proposed regularization method results in improved latent representations for both supervised learning and clustering downstream tasks when compared to autoencoders using other bottleneck structures.
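The soft-quantization step described above can be sketched directly: given a noisy latent code, a Bayesian estimator with Gaussian noise and a uniform prior over centroids outputs the posterior mean of the codebook. The code below is our minimal illustration of that bottleneck operation, not the paper's implementation; the codebook, dimensions, and noise scale are arbitrary.

```python
import numpy as np

def soft_quantize(z_noisy, centroids, sigma=0.3):
    """Posterior mean over codebook centroids given noisy latents,
    assuming Gaussian noise of scale sigma and a uniform prior."""
    d2 = ((z_noisy[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (B, K)
    logw = -d2 / (2 * sigma ** 2)
    logw -= logw.max(axis=1, keepdims=True)    # stabilize the softmax
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)          # posterior over centroids
    return w @ centroids                       # (B, D) posterior mean

rng = np.random.default_rng(2)
centroids = rng.normal(size=(8, 4))            # K=8 codebook vectors in R^4
z = centroids[rng.integers(0, 8, size=16)]     # clean latent codes
z_noisy = z + rng.normal(scale=0.1, size=z.shape)  # noise injected before quantization
z_hat = soft_quantize(z_noisy, centroids)      # soft quantization of noisy codes
```

As sigma shrinks this interpolates toward the hard nearest-centroid assignment of VQ-VAE; with noise present, the posterior mean denoises the latent toward its centroid.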


PLoS Biology ◽  
2021 ◽  
Vol 19 (9) ◽  
pp. e3001393
Author(s):  
Jai Y. Yu ◽  
Loren M. Frank

The receptive field of a neuron describes the regions of a stimulus space where the neuron is consistently active. Sparse spiking outside of the receptive field is often considered to be noise, rather than a reflection of information processing. Whether this characterization is accurate remains unclear. We therefore contrasted the sparse, temporally isolated spiking of hippocampal CA1 place cells to the consistent, temporally adjacent spiking seen within their spatial receptive fields (“place fields”). We found that isolated spikes, which occur during locomotion, are strongly phase coupled to hippocampal theta oscillations and transiently express coherent nonlocal spatial representations. Further, prefrontal cortical activity is coordinated with and can predict the occurrence of future isolated spiking events. Rather than local noise within the hippocampus, sparse, isolated place cell spiking reflects a coordinated cortical–hippocampal process consistent with the generation of nonlocal scenario representations during active navigation.


2019 ◽  
Author(s):  
Rishabh Raj ◽  
Dar Dahlen ◽  
Kyle Duyck ◽  
C. Ron Yu

Abstract: The brain has a remarkable ability to recognize objects from noisy or corrupted sensory inputs. How this cognitive robustness is achieved computationally remains unknown. We present a coding paradigm which encodes structural dependence among features of the input and transforms various forms of the same input into the same representation. Through dimensionally expanded representations and a sparsity constraint, the paradigm allows redundant feature coding to enhance robustness and is efficient in representing objects. We demonstrate consistent representations of visual and olfactory objects under conditions of occlusion, high noise, or corrupted coding units. Robust face recognition is achievable without deep layers or large training sets. The paradigm produces both complex and simple receptive fields depending on learning experience, thereby offering a unifying framework of sensory processing. One-line abstract: We present a framework for efficient coding of objects as combinations of structurally dependent feature groups that is robust against noise and corruption.
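The two ingredients the abstract names, a dimensionally expanded (overcomplete) representation and a sparsity constraint, can be illustrated with standard sparse coding. The sketch below is ours, not the authors' model: it infers sparse codes over a 4x-overcomplete random dictionary via ISTA (iterative soft-thresholding) and checks that the code of a noise-corrupted input stays close to the code of the clean input.

```python
import numpy as np

# Sparse coding over an expanded dictionary, with robustness to input noise.
rng = np.random.default_rng(3)
n, k = 16, 64                        # 16-D inputs, 64 coding units (4x expansion)
D = rng.normal(size=(n, k))
D /= np.linalg.norm(D, axis=0)       # unit-norm dictionary columns

def ista(x, D, lam=0.1, steps=400):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2    # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = a - D.T @ (D @ a - x) / L                          # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# An "object" built from 3 structurally related dictionary features
a_true = np.zeros(k)
a_true[[3, 17, 42]] = [1.0, -0.8, 0.6]
x = D @ a_true
a_clean = ista(x, D)
a_noisy = ista(x + rng.normal(scale=0.05, size=n), D)  # corrupted input
# The sparse codes of clean and corrupted inputs remain close, illustrating
# the robustness the abstract attributes to redundant, sparse feature coding.
```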


Author(s):  
Caroline A. Miller ◽  
Laura L. Bruce

The first visual cortical axons arrive in the cat superior colliculus by the time of birth. Adultlike receptive fields develop slowly over several weeks following birth. The developing cortical axons go through a sequence of changes before acquiring their adultlike morphology and function. To determine how these axons interact with neurons in the colliculus, cortico-collicular axons were labeled with biocytin (an anterograde neuronal tracer) and studied with electron microscopy. Deeply anesthetized animals received 200-500 nl injections of biocytin (Sigma; 5% in phosphate buffer) in the lateral suprasylvian visual cortical area. After a 24 hr survival time, the animals were deeply anesthetized and perfused with 0.9% phosphate-buffered saline followed by fixation with a solution of 1.25% glutaraldehyde and 1.0% paraformaldehyde in 0.1 M phosphate buffer. The brain was sectioned transversely on a vibratome at 50 μm. The tissue was processed immediately to visualize the biocytin.


2019 ◽  
Vol 41 (13) ◽  
pp. 3612-3625 ◽  
Author(s):  
Wang Qian ◽  
Wang Qiangde ◽  
Wei Chunling ◽  
Zhang Zhengqiang

The paper solves the problem of decentralized adaptive state-feedback neural tracking control for a class of stochastic nonlinear high-order interconnected systems. It is assumed that the inverse dynamics of the subsystems are stochastic input-to-state stable (SISS); for the controller design, radial basis function (RBF) neural networks (NNs) are used to cope with the packaged unknown system dynamics and stochastic uncertainties. In addition, appropriate Lyapunov-Krasovskii functionals and parameters are constructed for a class of large-scale high-order stochastic nonlinear strongly interconnected systems with inverse dynamics. It is proved that the designed controller guarantees that all signals in the closed-loop system remain semi-globally uniformly ultimately bounded and that the tracking errors eventually converge to a small neighborhood of the origin. A simulation example demonstrates the effectiveness of the results.
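The role RBF networks play in such controllers is universal function approximation: unknown smooth dynamics f(x) are written as f(x) ≈ Wᵀ S(x) with Gaussian basis functions S(x). The sketch below is ours and only shows that approximation step with offline least-squares weights; the paper instead adapts W online through a Lyapunov-based law. The test function, grid, and widths are arbitrary choices.

```python
import numpy as np

# RBF function-approximation sketch: represent an unknown smooth nonlinearity
# as a weighted sum of Gaussian basis functions on a fixed grid of centers.
def rbf_features(x, centers, width=0.5):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

centers = np.linspace(-3, 3, 25)            # fixed grid of RBF centers
x = np.linspace(-3, 3, 200)
f = np.sin(2 * x) + 0.3 * x ** 2            # stand-in for unknown dynamics

S = rbf_features(x, centers)                # (200, 25) regressor matrix S(x)
W, *_ = np.linalg.lstsq(S, f, rcond=None)   # "ideal" weights, fit offline here;
                                            # the paper adapts W online instead
approx_err = np.max(np.abs(S @ W - f))      # worst-case approximation error
```

A dense enough grid of centers drives the approximation error toward zero on the compact set, which is the property the SISS-based stability analysis relies on.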

