plasticity rule
Recently Published Documents

TOTAL DOCUMENTS: 24 (last five years: 5)
H-INDEX: 6 (last five years: 0)
2021
Author(s): Mouna Elhamdaoui, Faten Ouaja Rziga, Khaoula Mbarek, Kamel Besbes
Abstract: Spike-Timing-Dependent Plasticity (STDP) is an essential learning rule found in biological synapses and a prime candidate for replication in neuromorphic electronic systems. The rule updates a synaptic weight according to the time difference between the pre- and post-synaptic spikes: pre-synaptic activity that precedes post-synaptic activity induces long-term potentiation (LTP), whereas the reverse order induces long-term depression (LTD). Memristors, which are two-terminal memory devices, are excellent candidates for implementing such a mechanism due to their distinctive characteristics. In this article, we analyze the fundamental characteristics of three of the best-known memristor models and then simulate them in order to mimic the plasticity rule of biological synapses. The tested models are the linear ion drift model (HP), the Voltage ThrEshold Adaptive Memristor (VTEAM) model, and the Enhanced Generalized Memristor (EGM) model. We compare the I-V characteristics of these models with those of an experimental memristive device based on Ta2O5. We simulate and validate the STDP Hebbian learning algorithm, demonstrating the capability of each model to reproduce the conductance changes underlying the LTP and LTD functions. Our simulation results thus identify the most suitable model to operate as a synapse component in neuromorphic circuits.
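The pairwise STDP window described in this abstract can be sketched with a simple exponential kernel. The amplitudes and time constant below are illustrative placeholders, not parameters taken from the paper or from the memristor models it tests:

```python
import numpy as np

def stdp_weight_update(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pairwise STDP: potentiate (LTP) when the pre-synaptic spike
    precedes the post-synaptic one, depress (LTD) otherwise.
    Times are in ms; a_plus, a_minus, tau are illustrative values."""
    dt = t_post - t_pre          # positive: pre fired before post
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # LTP branch
    return -a_minus * np.exp(dt / tau)       # LTD branch

# Pre fires 5 ms before post -> potentiation; 5 ms after -> depression
dw_ltp = stdp_weight_update(t_pre=10.0, t_post=15.0)
dw_ltd = stdp_weight_update(t_pre=15.0, t_post=10.0)
```

In a memristive implementation, the sign and magnitude of `dw` would map onto a conductance increase or decrease driven by overlapping pre- and post-synaptic voltage pulses.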


2020
Author(s): Basile Confavreux, Everton J. Agnes, Friedemann Zenke, Timothy Lillicrap, Tim P. Vogels

Abstract: The search for biologically faithful synaptic plasticity rules has resulted in a large body of models. They are usually inspired by, and fitted to, experimental data, but they rarely produce neural dynamics that serve complex functions. These failures suggest that current plasticity models are still under-constrained by existing data. Here, we present an alternative approach that uses meta-learning to discover plausible synaptic plasticity rules. Instead of experimental data, the rules are constrained by the functions they implement and the structure they are meant to produce. Briefly, we parameterize synaptic plasticity rules by a Volterra expansion and then use supervised learning methods (gradient descent or evolutionary strategies) to minimize a problem-dependent loss function that quantifies how effectively a candidate plasticity rule transforms an initially random network into one with the desired function. We first validate our approach by re-discovering previously described plasticity rules, starting at the single-neuron level with "Oja's rule", a simple Hebbian plasticity rule that captures the direction of greatest variability of a neuron's inputs (i.e., the first principal component). We then expand the problem to the network level and ask the framework to find Oja's rule together with an anti-Hebbian rule such that an initially random two-layer firing-rate network recovers several principal components of the input space after learning. Next, we move to networks of integrate-and-fire neurons with plastic inhibitory afferents. We train for rules that achieve a target firing rate by countering tuned excitation. Our algorithm discovers a specific subset of the manifold of rules that can solve this task. Our work is a proof of principle of an automated and unbiased approach to unveiling synaptic plasticity rules that obey biological constraints and can solve complex functions.
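As a concrete reference point for the rule this framework re-discovers, here is a minimal sketch of Oja's rule extracting the first principal component of synthetic correlated inputs. The data, learning rate, and dimensions are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D inputs whose first principal component lies along (1, 1)
n_samples = 5000
base = rng.standard_normal(n_samples)
x = np.stack([base + 0.1 * rng.standard_normal(n_samples),
              base + 0.1 * rng.standard_normal(n_samples)], axis=1)

w = rng.standard_normal(2)          # random initial synaptic weights
eta = 0.01                          # learning rate (illustrative)
for xi in x:
    y = w @ xi                      # linear neuron output
    w += eta * y * (xi - y * w)     # Oja's rule: Hebbian term + weight decay

w_unit = w / np.linalg.norm(w)      # converges to +/- first PC, unit norm
```

The decay term `-eta * y**2 * w` is what keeps the weight vector bounded; without it, plain Hebbian learning diverges. In the paper's setting, such a rule would be one point on the manifold of Volterra-expansion coefficients found by the meta-learner.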


2020
Vol 11 (1)
Author(s): Sabrina Tazerart, Diana E. Mitchell, Soledad Miranda-Rottmann, Roberto Araya

2018
Author(s): Ulises Pereira, Nicolas Brunel

Abstract: Two strikingly distinct types of activity have been observed in various brain structures during the delay periods of delayed response tasks: persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout the entire delay period; and sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be 'learned' by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the conditions under which a synaptic plasticity rule and input statistics yield stable learning of these two types of dynamics remain unclear. In particular, it is unclear whether a single learning rule can learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing-rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised, temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that stable learning of PA and SA requires an additional stabilization mechanism, such as multiplicative homeostatic plasticity. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of the parameters characterizing the external inputs. Slowly changing stimuli lead to PA, while rapidly changing stimuli lead to SA. Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
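The intuition that stimulus speed selects between the two regimes can be illustrated with a toy temporally asymmetric Hebbian rule: slow streams mostly pair a pattern with itself, producing near-symmetric connectivity (favoring persistent attractors), while fast streams pair consecutive patterns, producing asymmetric connectivity (favoring sequences). This is a schematic sketch with made-up patterns and parameters, not the paper's firing-rate model or its homeostatic mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)

def learn_connectivity(patterns, dwell, delay=1):
    """Temporally asymmetric Hebbian rule on a stimulus stream:
    dW ~ r_post(t) * r_pre(t - delay). Each pattern is shown for
    `dwell` time steps before the next one appears."""
    stream = np.repeat(patterns, dwell, axis=0)   # (T, N) rate stream
    post = stream[delay:]
    pre = stream[:-delay]
    return post.T @ pre / len(post)

N, P = 50, 5
patterns = (rng.random((P, N)) < 0.2).astype(float)  # sparse binary patterns

W_slow = learn_connectivity(patterns, dwell=20)  # slow stimuli -> ~symmetric W
W_fast = learn_connectivity(patterns, dwell=1)   # fast stimuli -> asymmetric W

def asymmetry(W):
    """0 for a symmetric matrix; grows with the antisymmetric part."""
    return np.linalg.norm(W - W.T) / np.linalg.norm(W + W.T)
```

In the full model, a symmetric learned `W` supports persistent activity (PA) while an asymmetric one drives sequential activation (SA); the sketch only exposes the connectivity-level mechanism.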

