Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks

2017
Author(s): N. Alex Cayco-Gajic, Claudia Clopath, R. Angus Silver

Abstract: Pattern separation is a fundamental function of the brain. Divergent feedforward networks separate overlapping activity patterns by mapping them onto larger numbers of neurons, aiding learning in downstream circuits. However, the relationship between the synaptic connectivity within these circuits and their ability to separate patterns is poorly understood. To investigate this, we built simplified and biologically detailed models of the cerebellar input layer and systematically varied the spatial correlation of their inputs and their synaptic connectivity. Performance was quantified by the learning speed of a classifier trained on either the mossy fiber input or the granule cell output patterns. Our results establish that the extent of synaptic connectivity governs the pattern separation performance of feedforward networks: denser connectivity counteracts the beneficial effects of expanding the coding space and of threshold-mediated decorrelation. The sparse synaptic connectivity in the cerebellar input layer provides an optimal solution to this trade-off, enabling efficient pattern separation and faster learning.
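To make the mechanism concrete, here is a minimal numerical sketch of expansion recoding with sparse connectivity and a firing threshold; all sizes, densities, and threshold values are illustrative choices, not values from the paper. Correlated binary "mossy fiber" patterns are projected through a sparse random weight matrix onto a larger "granule cell" layer and thresholded, and the mean pairwise correlation of the output patterns is compared with that of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 50, 500     # divergent expansion (illustrative sizes)
n_patterns, n_synapses = 100, 4   # ~4 inputs per output unit, cerebellum-like

# Correlated binary input patterns: a shared template with independent bit flips.
template = rng.random(n_inputs) < 0.5
inputs = np.array([template ^ (rng.random(n_inputs) < 0.2)
                   for _ in range(n_patterns)], dtype=float)

# Sparse feedforward connectivity: each output unit samples only a few inputs.
W = np.zeros((n_outputs, n_inputs))
for i in range(n_outputs):
    W[i, rng.choice(n_inputs, size=n_synapses, replace=False)] = 1.0

# Threshold-mediated decorrelation: only strongly driven units fire.
drive = inputs @ W.T
outputs = (drive >= 3).astype(float)   # fire if >= 3 of the 4 synapses are active

def mean_pairwise_corr(patterns):
    """Average correlation between distinct patterns, ignoring silent units."""
    active = patterns[:, patterns.std(axis=0) > 0]
    corr = np.corrcoef(active)
    return corr[np.triu_indices_from(corr, k=1)].mean()

print("input correlation: ", mean_pairwise_corr(inputs))
print("output correlation:", mean_pairwise_corr(outputs))
```

With these settings the output correlation falls well below the input correlation, illustrating how expansion plus thresholding separates overlapping patterns; raising `n_synapses` toward `n_inputs` erodes the effect, which is the trade-off the abstract describes.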

Author(s): Qiongling Li, Shahin Tavakol, Jessica Royer, Sara Larivière, Reinder Vos De Wael, ...

Abstract: Episodic memory is our ability to remember past events accurately. Pattern separation, the process of orthogonalizing similar aspects of external information into nonoverlapping representations, is one of its mechanisms. Converging evidence suggests a pivotal role of the hippocampus, in concert with neocortical areas, in this process. The current study aimed to identify principal dimensions of functional activation associated with pattern separation in hippocampal and neocortical areas, in both healthy individuals and patients with lesions to the hippocampus. Administering a pattern separation fMRI paradigm to a group of healthy adults, we detected task-related activation in bilateral hippocampal and distributed neocortical areas. Capitalizing on manifold learning techniques applied to parallel resting-state fMRI data, we found that hippocampal and neocortical activity patterns were efficiently captured by their principal gradients of intrinsic functional connectivity, which follow the hippocampal long axis and the sensory-fugal cortical organization. Functional activation patterns and their alignment with these principal dimensions were altered in patients. Notably, inter-individual differences in the concordance between task-related activity and intrinsic functional gradients correlated with pattern separation performance in both patients and controls. Our work outlines a parsimonious approach to capture the functional underpinnings of episodic memory processes at the systems level, and to decode functional reorganization in clinical populations.
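As a schematic of the gradient-mapping step, the sketch below builds a diffusion-map-style embedding of a toy connectivity matrix and recovers its principal gradient; the toy data, affinity measure, and parameters are illustrative assumptions and do not reproduce the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy connectivity: 200 "vertices" with a smooth one-dimensional organization,
# standing in for the hippocampal long axis or a sensory-fugal cortical axis.
n = 200
pos = np.linspace(0.0, 1.0, n)
conn = np.exp(-((pos[:, None] - pos[None, :]) ** 2) / 0.02)
conn += 0.05 * rng.random((n, n))
conn = (conn + conn.T) / 2

# Cosine-similarity affinity between connectivity profiles.
profiles = conn / np.linalg.norm(conn, axis=1, keepdims=True)
affinity = np.clip(profiles @ profiles.T, 0.0, None)

# Diffusion-map-style embedding: row-normalize to a transition matrix, then
# take the leading nontrivial eigenvector as the principal gradient.
P = affinity / affinity.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
gradient = evecs[:, order[1]].real    # order[0] is the trivial constant mode

# The recovered gradient should track the built-in axis (up to sign).
print(abs(np.corrcoef(gradient, pos)[0, 1]))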


2014 · Vol 26 (11) · pp. 2527-2540
Author(s): Chad Giusti, Vladimir Itskov

It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of subtractive noise. When we coarsened the notion of combinatorial neural code to keep track of only maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by a subset of the form [Formula: see text], where [Formula: see text] is a polyhedron.
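The coarsening step the result hinges on, reducing a combinatorial code to its set of maximal activity patterns, is easy to state in code; the example codewords below are illustrative, not taken from the paper.

```python
# A combinatorial code is a set of codewords; each codeword is the set of
# neurons that are co-active in some response pattern.
code = {
    frozenset(),           # the silent pattern
    frozenset({1}),
    frozenset({1, 2}),
    frozenset({3, 4}),
    frozenset({1, 2, 3}),
}

def maximal_codewords(code):
    """Keep only the codewords not strictly contained in another codeword."""
    return {c for c in code if not any(c < d for d in code)}

# The coarsened code: {frozenset({1, 2, 3}), frozenset({3, 4})}
print(maximal_codewords(code))
```

Per the main result, any code is indistinguishable from some one-layer feedforward code once only this coarsened set of maximal patterns is retained.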


2021
Author(s): Chong Guo, Stephanie Rudolph, Morgan E. Neuwirth, Wade G. Regehr

Abstract: Circuitry of the cerebellar cortex is regionally and functionally specialized. Unipolar brush cells (UBCs) and Purkinje cell (PC) synapses made by axon collaterals in the granular layer are both enriched in areas that control balance and eye movements. Here we find a link between these specializations: PCs preferentially inhibit mGluR1-expressing UBCs that respond to mossy fiber inputs with long-lasting increases in firing, but do not inhibit mGluR1-lacking UBCs. PCs inhibit about 29% of mGluR1-expressing UBCs by activating GABAA receptors (GABAARs) and inhibit almost all mGluR1-expressing UBCs by activating GABABRs. PC-to-UBC synapses allow PC output to regulate the input layer of the cerebellar cortex in diverse ways. GABAAR-mediated feedback is fast, unreliable, and noisy, and is suited to linearizing input-output curves and decreasing gain. Slow GABABR-mediated inhibition allows elevated PC activity to sharpen the input-output transformation of UBCs and enables dynamic inhibitory feedback of mGluR1-expressing UBCs.
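A toy rate-model illustration of the two feedback modes described above (a hypothetical threshold-linear UBC, with the fast/slow time constants collapsed into trial statistics; none of the numbers come from the paper): fast, noisy GABA_A-like inhibition smears the threshold across trials, which linearizes the trial-averaged input-output curve and lowers its effective gain, while slow, sustained GABA_B-like inhibition applies a steady subtractive shift that sharpens the transformation around stronger inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

def firing_rate(drive, threshold=1.0, gain=50.0):
    """Toy threshold-linear UBC input-output function (illustrative)."""
    return gain * np.maximum(drive - threshold, 0.0)

inputs = np.linspace(0.0, 2.0, 9)
n_trials = 5000

# Fast, noisy GABA_A-like feedback: trial-to-trial inhibitory fluctuations.
fast_inh = 0.6 * rng.random((n_trials, 1))
io_fast = firing_rate(inputs[None, :] - fast_inh).mean(axis=0)

# Slow, sustained GABA_B-like inhibition: a steady subtractive shift.
io_slow = firing_rate(inputs - 0.3)

# The averaged noisy curve rises smoothly (linearized, lower effective gain);
# the shifted curve stays silent longer, then rises steeply (sharpened).
print("fast/noisy:", np.round(io_fast, 1))
print("slow/tonic:", np.round(io_slow, 1))
```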


Hippocampus · 2017 · Vol 27 (6) · pp. 716-725
Author(s): Rachel Clark, Asli C. Tahan, Patrick D. Watson, Joan Severson, Neal J. Cohen, ...

1993 · Vol 5 (1) · pp. 105-114
Author(s): Gustavo Deco, Jürgen Ebmeyer

In recent years, localized receptive fields have been the subject of intensive research, owing to their learning speed and efficient reconstruction of hypersurfaces. A very efficient implementation of such a network was recently proposed by Platt (1991). This resource-allocating network (RAN) allocates a new neuron whenever an unknown pattern is presented at its input layer. In this paper we introduce a new network architecture and learning paradigm whose aim is to incorporate "coarse coding" into the resource-allocating network. The network presented here provides a separate layer for each input coordinate, consisting of one-dimensional, locally tuned Gaussian neurons. In the following layer, multidimensional receptive fields are built using pi-neurons. Linear neurons aggregate the outputs of the pi-neurons to approximate the required input-output mapping. The learning process follows the ideas of Platt's resource-allocating network, but the extended architecture required additional improvements to the learning process. Compared to the resource-allocating network, a more compact network with comparable accuracy is obtained.
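A minimal sketch of the forward pass of this architecture (the RAN allocation and learning rules from Platt (1991) are omitted, and all names, sizes, and parameter values are illustrative): each input coordinate gets its own layer of one-dimensional Gaussian units, pi-neurons multiply one response per coordinate to form multidimensional receptive fields, and a linear neuron aggregates the pi-neuron outputs.

```python
import numpy as np

def gaussian_1d(x, centers, width):
    """One-dimensional, locally tuned Gaussian units for a single coordinate."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def forward(x, centers, widths, weights):
    """Coarse-coded forward pass.

    x        : input vector, shape (d,)
    centers  : per-coordinate unit centers, shape (d, k)
    widths   : per-coordinate unit widths, shape (d,)
    weights  : linear output weights over the k pi-neurons, shape (k,)
    """
    # One separate layer of 1-D Gaussians per input coordinate.
    acts = np.stack([gaussian_1d(x[i], centers[i], widths[i])
                     for i in range(len(x))])          # shape (d, k)
    # Pi-neurons: multiply one unit response per coordinate to build a
    # multidimensional receptive field (unit j pairs up across the layers).
    pi = acts.prod(axis=0)                             # shape (k,)
    # A linear neuron aggregates the pi-neuron outputs.
    return weights @ pi

# Toy usage: two allocated receptive fields in a 2-D input space.
centers = np.array([[0.2, 0.8],    # coordinate 0: two 1-D units
                    [0.5, 0.5]])   # coordinate 1: two 1-D units
widths = np.array([0.3, 0.3])
weights = np.array([1.0, -0.5])
print(forward(np.array([0.25, 0.5]), centers, widths, weights))
```

Because the 1-D layers are shared across receptive fields, allocating a new multidimensional field can reuse existing coordinate units, which is the source of the compactness claimed above.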

