Synaptic Plasticity in Correlated Balanced Networks

2020
Author(s): Alan Eric Akil, Robert Rosenbaum, Krešimir Josić

Abstract
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance and, in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a general theory of plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly, but significantly, affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space, with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.

2021, Vol 17 (5), pp. e1008958
Author(s): Alan Eric Akil, Robert Rosenbaum, Krešimir Josić

The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance and, in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space, with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
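The sketch below illustrates, under stated assumptions, the kind of ingredient such a framework combines: a pair-based STDP rule acting on synapses with the 1/√N weight scaling characteristic of balanced networks. The recurrent spiking dynamics is replaced by stand-in Poisson spike trains and all parameter values are illustrative; this is not the authors' model or code.

```python
# Minimal sketch (not the paper's model): pair-based STDP on the E->E block of
# a weight matrix with balanced-network scaling O(1/sqrt(N)). The recurrent
# spiking dynamics is replaced by stand-in Poisson spike trains.
import numpy as np

rng = np.random.default_rng(0)

N_e, N_i = 200, 50                   # excitatory / inhibitory population sizes
N = N_e + N_i
dt = 1e-3                            # time step (s)
T = 2.0                              # simulated time (s)

# Balanced-network scaling: synaptic weights are O(1/sqrt(N)).
J = np.abs(rng.normal(0.0, 1.0, (N, N))) / np.sqrt(N)
J[:, N_e:] *= -2.0                   # inhibitory synapses are negative and stronger

# Pair-based STDP parameters for plastic E->E synapses (hypothetical values).
A_plus, A_minus = 5e-4, 5.25e-4      # slight depression bias
tau_plus = tau_minus = 20e-3         # trace time constants (s)
x_pre = np.zeros(N_e)                # presynaptic spike trace
x_post = np.zeros(N_e)               # postsynaptic spike trace

rates = np.full(N, 10.0)             # stand-in firing rates (Hz)

for step in range(int(T / dt)):
    spikes = rng.random(N) < rates * dt
    s_e = spikes[:N_e].astype(float)

    # Decay the traces, then apply the pair-based updates: potentiate when a
    # postsynaptic spike follows presynaptic activity, depress when a
    # presynaptic spike follows postsynaptic activity.
    x_pre -= dt * x_pre / tau_plus
    x_post -= dt * x_post / tau_minus
    J[:N_e, :N_e] += A_plus * np.outer(s_e, x_pre) - A_minus * np.outer(x_post, s_e)

    # Increment traces with the current spikes.
    x_pre += s_e
    x_post += s_e

np.fill_diagonal(J[:N_e, :N_e], 0.0)  # no self-connections
print("mean E->E weight after plasticity:", J[:N_e, :N_e].mean())
```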


2018
Author(s): Florence I. Kleberg, Jochen Triesch

Abstract
Synapses between cortical neurons are subject to constant modification through synaptic plasticity mechanisms, which are believed to underlie learning and memory formation. The strengths of excitatory and inhibitory synapses in the cortex follow a right-skewed, long-tailed distribution. Similarly, the firing rates of excitatory and inhibitory neurons also follow a right-skewed, long-tailed distribution. How these distributions come about and how they maintain their shape over time is currently not well understood. Here we propose a spiking neural network model that explains the origin of these distributions as a consequence of the interaction of spike-timing dependent plasticity (STDP) of excitatory and inhibitory synapses and a multiplicative form of synaptic normalisation. Specifically, we show that the combination of additive STDP and multiplicative normalisation leads to lognormal-like distributions of excitatory and inhibitory synaptic efficacies, as observed experimentally. The shape of these distributions remains stable even if spontaneous fluctuations of synaptic efficacies are added. In the same network, lognormal-like distributions of the firing rates of excitatory and inhibitory neurons result from small variability in the spiking thresholds of individual neurons. Interestingly, we find that variation in firing rates is strongly coupled to variation in synaptic efficacies: neurons with the highest firing rates develop very strong connections onto other neurons. Finally, we define an impact measure for individual neurons and demonstrate the existence of a small group of neurons with an exceptionally strong impact on the network that arises as a result of synaptic plasticity. In summary, synaptic plasticity and small variability in neuronal parameters underlie a neural oligarchy in recurrent neural networks.
Author summary
Our brain's neural networks are composed of billions of neurons that exchange signals via trillions of synapses. Are these neurons created equal, or do they contribute in similar ways to the network dynamics? Or do some neurons wield much more power than others? Recent experiments have shown that some neurons are much more active than the average neuron and that some synaptic connections are much stronger than the average synaptic connection. However, it is still unclear how these properties come about in the brain. Here we present a neural network model that explains these findings as a result of the interaction of synaptic plasticity mechanisms that modify synapses' efficacies. The model reproduces recent findings on the statistics of neuronal firing rates and synaptic efficacies and predicts a small class of neurons with exceptionally high impact on the network dynamics. Such neurons may play a key role in brain disorders such as epilepsy.
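A minimal sketch of the two ingredients named above, additive STDP combined with multiplicative normalisation of a neuron's total synaptic input, is given below. The random potentiation and depression events stand in for the causal and anti-causal spike pairings of the full spiking model, and all parameter values are hypothetical; the lognormal-like efficacy distributions are a result the abstract reports for the full model, not something this toy is claimed to prove.

```python
# Illustrative sketch (not the authors' model): additive STDP-like updates
# followed by multiplicative normalisation of one neuron's incoming weights.
import numpy as np

rng = np.random.default_rng(1)
n_pre = 500                          # number of incoming synapses
w = np.full(n_pre, 0.5)              # initial efficacies
w_total = w.sum()                    # total input conserved by normalisation

A_plus, A_minus = 0.01, 0.012        # additive amplitudes (hypothetical)

for step in range(20000):
    # Additive STDP: a small random subset of synapses is potentiated or
    # depressed, standing in for causal / anti-causal spike pairings.
    active = rng.random(n_pre) < 0.05
    dw = np.where(rng.random(n_pre) < 0.5, A_plus, -A_minus)
    w = np.clip(w + dw * active, 0.0, None)

    # Multiplicative normalisation: rescale so the summed input is conserved.
    w *= w_total / max(w.sum(), 1e-12)

skew = ((w - w.mean()) ** 3).mean() / w.std() ** 3
print(f"mean efficacy: {w.mean():.3f}, skewness: {skew:.2f}")
```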


Author(s): Júlia V. Gallinaro, Nebojša Gašparović, Stefan Rotter

Abstract
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to have an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storing is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. In conclusion, homeostatic structural plasticity induces a specific type of “silent memories”, different from conventional attractor states.
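The sketch below shows, in a deliberately crude form, what structural plasticity driven by firing rate homeostasis can look like: each neuron grows or retracts synaptic elements according to the deviation of its rate from a set point, and free elements are paired at random into synapses. The rate model, the parameters, and the omission of synapse deletion are our simplifications, not the paper's implementation.

```python
# Crude sketch (not the paper's implementation) of structural plasticity driven
# by firing-rate homeostasis: neurons below their target rate grow synaptic
# elements, free elements are paired at random into synapses, and the added
# input pulls rates toward the set point. Synapse deletion is omitted for brevity.
import numpy as np

rng = np.random.default_rng(2)
N = 80
target = 8.0                        # homeostatic firing-rate set point (Hz)
baseline = 2.0                      # external drive expressed as a rate (Hz)
gain = 0.1                          # rate gained per incoming synapse (crude)
growth = 0.5                        # elements grown per Hz of rate error

C = np.zeros((N, N))                # synapse counts, C[post, pre]
ax_free = np.zeros(N)               # free axonal (presynaptic) elements
den_free = np.zeros(N)              # free dendritic (postsynaptic) elements

for step in range(500):
    rates = baseline + gain * C.sum(axis=1)          # crude rate model
    error = target - rates                           # homeostatic error signal
    ax_free = np.maximum(ax_free + growth * error, 0.0)
    den_free = np.maximum(den_free + growth * error, 0.0)

    # Pair free elements into synapses, choosing partners at random in
    # proportion to how many free elements each neuron offers.
    n_new = int(min(ax_free.sum(), den_free.sum()))
    if n_new > 0:
        pre = rng.choice(N, size=n_new, p=ax_free / ax_free.sum())
        post = rng.choice(N, size=n_new, p=den_free / den_free.sum())
        np.add.at(C, (post, pre), 1)
        np.subtract.at(ax_free, pre, 1.0)
        np.subtract.at(den_free, post, 1.0)
        ax_free = np.maximum(ax_free, 0.0)
        den_free = np.maximum(den_free, 0.0)

rates = baseline + gain * C.sum(axis=1)
print(f"mean in-degree: {C.sum(axis=1).mean():.1f}, mean rate: {rates.mean():.2f} Hz")
```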


Author(s): Serkan Kiranyaz, Junaid Malik, Habib Ben Abdallah, Turker Ince, Alexandros Iosifidis, ...

Abstract
The recently proposed network model, Operational Neural Networks (ONNs), generalizes conventional Convolutional Neural Networks (CNNs), which are homogeneous networks built on only a linear neuron model. As a heterogeneous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the default search method for finding optimal operators in ONNs, the so-called Greedy Iterative Search (GIS) method, usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding, but it also limits network heterogeneity, since the same set of operators is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best possible operator set(s) for the hidden neurons of the network based on the “Synaptic Plasticity” paradigm, which constitutes the essential learning theory in biological neurons. During training, each operator set in the library can be evaluated by its synaptic plasticity level, ranked from worst to best, and an “elite” ONN can then be configured using the top-ranked operator sets found at each hidden layer. Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs and, as a result, the performance gap over CNNs widens further.
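The selection logic described above can be summarized in a few lines; the sketch below is only a schematic of that ranking step, with a placeholder plasticity score and a hypothetical operator library, since the paper's actual plasticity measure and operator sets are not reproduced here.

```python
# Schematic of the ranking/selection step only (hypothetical operator library;
# `plasticity_score` is a placeholder for the paper's synaptic-plasticity measure).
import random
from typing import List

random.seed(0)

OPERATOR_LIBRARY = ["mul+sum+tanh", "sin+sum+tanh", "chirp+median+tanh"]

def plasticity_score(op_set: str, layer: int) -> float:
    """Placeholder: in the paper this would be computed from the synaptic
    plasticity level of the layer when trained with the given operator set."""
    return random.random()

def configure_elite_onn(n_hidden_layers: int) -> List[str]:
    """Rank every operator set per hidden layer and keep the top-ranked one."""
    elite = []
    for layer in range(n_hidden_layers):
        ranked = sorted(OPERATOR_LIBRARY,
                        key=lambda s: plasticity_score(s, layer), reverse=True)
        elite.append(ranked[0])
    return elite

print(configure_elite_onn(n_hidden_layers=3))
```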


Author(s): Александр Александрович Воевода, Дмитрий Олегович Романников

The synthesis of controllers for multichannel systems is a relevant and difficult problem. One possible approach to synthesis is the use of neural networks. A neural controller is either trained on precomputed data or used to tune the parameters of a PID controller starting from an initially stable state of the closed-loop system. We propose using neural networks to regulate a two-channel object, with training performed from an unstable (arbitrary) initial state using reinforcement learning methods. We propose a structure for the neural network and the closed-loop system in which the set point is supplied as an input parameter of the controller's neural network.
The problem of synthesizing automatic control systems is hard, especially for multichannel objects. One approach is the use of neural networks. For approaches based on reinforcement learning, there is an additional issue: supporting a range of values for the set points. We propose a method for synthesizing automatic control systems using neural networks, together with a reinforcement learning procedure that trains the network to regulate over a predefined range of set points. The main steps of the method are: 1) form the neural network input from the state of the object and the system set point; 2) simulate the system with a set of randomly generated set points from the desired range; 3) perform one step of learning using the Deterministic Policy Gradient method. The originality of the proposed method is that, in contrast to existing methods that use a neural network to synthesize a controller, it allows training a controller from an unstable initial state of the closed-loop system and over a range of set points. The method was applied to the problem of stabilizing the outputs of a two-channel object, where both outputs must be stabilized and the first must track the input set point.
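A minimal sketch of the three steps listed above follows; the toy plant, the linear stand-in actor, and the empty ddpg_update stub are our placeholders, not the authors' implementation.

```python
# Sketch of the training loop only (illustrative): the plant model, the linear
# stand-in actor, and `ddpg_update` are placeholders, not the authors' code.
import numpy as np

rng = np.random.default_rng(3)

SETPOINT_RANGE = (-1.0, 1.0)        # desired range of set points
state_dim, action_dim = 2, 2        # two-channel object

actor_W = rng.normal(0.0, 0.1, (action_dim, state_dim * 2))   # stand-in actor

def plant_step(state, action):
    """Toy unstable two-channel plant standing in for the controlled object."""
    A = np.array([[1.05, 0.10], [0.00, 1.02]])
    return A @ state + 0.1 * action

def ddpg_update(transition):
    """Placeholder for one Deterministic Policy Gradient update of actor/critic."""
    pass

for episode in range(100):
    setpoint = rng.uniform(*SETPOINT_RANGE, size=state_dim)   # step 2: random set point
    state = rng.normal(0.0, 1.0, state_dim)                   # arbitrary (unstable) start
    for t in range(50):
        obs = np.concatenate([state, setpoint])               # step 1: state + set point
        action = actor_W @ obs + rng.normal(0.0, 0.05, action_dim)  # exploration noise
        next_state = plant_step(state, action)
        reward = -float(np.sum((next_state - setpoint) ** 2)) # track the set point
        next_obs = np.concatenate([next_state, setpoint])
        ddpg_update((obs, action, reward, next_obs))          # step 3: one DPG update
        state = next_state
```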


2004, Vol 213, pp. 483-486
Author(s): David Brodrick, Douglas Taylor, Joachim Diederich

A recurrent neural network was trained to detect the time-frequency domain signature of narrowband radio signals against a background of astronomical noise. The objective was to investigate the use of recurrent networks for signal detection in the Search for Extra-Terrestrial Intelligence, though the problem is closely analogous to the detection of some classes of Radio Frequency Interference in radio astronomy.


2003, Vol 15 (8), pp. 1897-1929
Author(s): Barbara Hammer, Peter Tiňo

Recent experimental studies indicate that recurrent neural networks initialized with “small” weights are inherently biased toward definite memory machines (Tiňo, Čerňanský, & Beňušková, 2002a, 2002b). This article establishes a theoretical counterpart: the transition function of a recurrent network with small weights and a squashing activation function is a contraction. We prove that recurrent networks with a contractive transition function can be approximated arbitrarily well on input sequences of unbounded length by a definite memory machine. Conversely, every definite memory machine can be simulated by a recurrent network with a contractive transition function. Hence, initialization with small weights induces an architectural bias into learning with recurrent neural networks. This bias might have benefits from the point of view of statistical learning theory: it emphasizes one possible region of the weight space where generalization ability can be formally proved. It is well known that standard recurrent neural networks are not distribution-independent learnable in the probably approximately correct (PAC) sense if arbitrary precision and inputs are considered. We prove that recurrent networks with a contractive transition function with a fixed contraction parameter fulfill the so-called distribution-independent uniform convergence of empirical distances property and hence, unlike general recurrent networks, are distribution-independent PAC learnable.
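A compact version of the contraction bound can be written down directly; the notation below is ours, not quoted from the article. With a 1-Lipschitz squashing nonlinearity, the state map contracts whenever the recurrent weight norm is below one, which is exactly what "small weights" buys.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch of the contraction argument (our notation, not the article's).
% The state update of a recurrent network with a 1-Lipschitz squashing
% nonlinearity $\sigma$ (e.g.\ $\tanh$) is
\[
  s_t = f(s_{t-1}, x_t) = \sigma\!\left(W x_t + R\, s_{t-1} + b\right),
\]
% so for any input $x$ and any pair of states $s, s'$,
\[
  \lVert f(s, x) - f(s', x)\rVert \;\le\; \lVert R\rVert \,\lVert s - s'\rVert .
\]
% Hence $f$ is a contraction in the state whenever $\lVert R\rVert < 1$
% (``small'' recurrent weights), and the influence of an input seen $t$ steps
% in the past decays at least as fast as $\lVert R\rVert^{t}$, which is the
% finite-memory behaviour captured by a definite memory machine.
\end{document}
```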


2013, Vol 25 (7), pp. 1768-1806
Author(s): N. Alex Cayco-Gajic, Eric Shea-Brown

Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarifying their origin and relationship in random, feedforward networks of McCulloch-Pitts neurons. Two key parameters are the thresholds at which neurons fire spikes and the overall level of feedforward connectivity. When neurons have low thresholds, we show that there is always a connectivity for which the properties in question all occur, that is, these networks preserve overall firing rates from layer to layer and produce broad distributions of activity in each layer. This fails to occur, however, when neurons have high thresholds. A key tool in explaining this difference is the eigenstructure of the resulting mean-field Markov chain, as this reveals which activity modes will be preserved from layer to layer. We extend our analysis from purely excitatory networks to more complex models that include inhibition and local noise, and find that both of these features extend the parameter ranges over which networks produce the properties of interest.
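The mean-field Markov chain mentioned above can be written out explicitly for a purely excitatory layer; the sketch below is our reconstruction under illustrative parameters (SciPy assumed for the binomial distributions), not code from the article.

```python
# Hedged sketch of a mean-field Markov chain for a feedforward layer of
# McCulloch-Pitts neurons (our construction, illustrative parameters). Given k
# active neurons in one layer, each downstream neuron receives a Binomial(k, p)
# number of active inputs and fires if that count reaches its threshold.
import numpy as np
from scipy.stats import binom

N = 50            # neurons per layer
p = 0.2           # feedforward connection probability
theta = 2         # firing threshold (number of active inputs needed)

# q[k] = probability a downstream neuron fires when k neurons are active upstream.
q = np.array([binom.sf(theta - 1, k, p) for k in range(N + 1)])

# Transition matrix P[k, k'] = P(k' neurons active in the next layer | k active now).
P = np.array([binom.pmf(np.arange(N + 1), N, q[k]) for k in range(N + 1)])

# The eigenstructure of P indicates which activity modes survive layer to layer.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print("leading eigenvalue magnitudes:", eigvals[:3])
```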

