spike time dependent plasticity
Recently Published Documents


TOTAL DOCUMENTS

54
(FIVE YEARS 15)

H-INDEX

13
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Qi Qin ◽  
Miaocheng Zhang ◽  
Suhao Yao ◽  
Xingyu Chen ◽  
Aoze Han ◽  
...  

Abstract In the post-Moore era, neuromorphic computing has focused mainly on breaking the von Neumann bottleneck. The memristor has been proposed as a key component of neuromorphic computing architectures, where it can be used to emulate the synaptic plasticity of the human brain. The ferroelectric memristor is a breakthrough among memristive devices on account of its reliable nonvolatile storage, low write/read latency, and tunable conductive states. However, the resistive-switching mechanisms of the reported ferroelectric memristors are still under debate, and the emulation of brain synapses using ferroelectric memristors needs further investigation. Herein, Cu/PbZr0.52Ti0.48O3 (PZT)/Pt ferroelectric memristors have been fabricated. The devices can realize the transformation from threshold-switching to resistive-switching behavior. Synaptic plasticity, including excitatory post-synaptic current (EPSC), paired-pulse facilitation (PPF), paired-pulse depression (PPD), and spike-time-dependent plasticity (STDP), has been mimicked by the PZT devices. Furthermore, the mechanisms of the PZT devices, based on interface-barrier and conductive-filament models, have been investigated by first-principles calculation. This work may contribute to the application of ferroelectric memristors in neuromorphic computing systems.
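The STDP behavior mimicked by such devices is usually summarized by the standard exponential pairing window. A minimal sketch in Python, with illustrative amplitudes and time constants that are not fitted to the PZT devices:

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP window, with dt = t_post - t_pre in ms.

    Positive dt (pre fires before post) gives potentiation (LTP);
    negative dt (post fires before pre) gives depression (LTD).
    Amplitudes and time constants are illustrative placeholders.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)   # LTP branch
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)  # LTD branch
    return 0.0  # coincident spikes: no change in this simple form
```

In a device context, the weight change would map onto a conductance change of the memristor; the exponential shape above is the behavioral target, not a device model.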


2021 ◽  
Vol 15 ◽  
Author(s):  
Fangxin Liu ◽  
Wenbo Zhao ◽  
Yongbiao Chen ◽  
Zongwu Wang ◽  
Tao Yang ◽  
...  

Spiking Neural Networks (SNNs) are a pathway that could potentially empower low-power event-driven neuromorphic hardware due to their spatio-temporal information processing capability and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not as accurate. Error backpropagation is the most common method for directly training neural networks and has driven the success of ANNs across deep learning fields. However, since the signals transmitted in an SNN are non-differentiable discrete binary spike events, the spike-based activation function prevents gradient-based optimization algorithms from being applied directly, leading to a performance gap (i.e., accuracy and latency) between SNNs and ANNs. This paper introduces a new learning algorithm, called SSTDP, which bridges backpropagation (BP)-based learning and spike-time-dependent plasticity (STDP)-based learning to train SNNs efficiently. The scheme incorporates the global optimization process from BP and the efficient weight update derived from STDP. It not only avoids the non-differentiable derivation in the BP process but also exploits the local feature extraction property of STDP. Consequently, our method lowers the likelihood of vanishing spikes during BP training and reduces the number of time steps, and hence network latency. In SSTDP, we employ temporal coding and use the Integrate-and-Fire (IF) neuron model to provide considerable computational benefits. Our experiments demonstrate the effectiveness of the proposed SSTDP learning algorithm, achieving classification accuracies of 99.3% on the Caltech 101 dataset, 98.1% on the MNIST dataset, and 91.3% on the CIFAR-10 dataset, the best among SNNs trained with other learning methods. It also surpasses the best inference accuracy of directly trained SNNs with 25~32× lower inference latency. Moreover, we analyze event-based computations to demonstrate the efficacy of the SNN for inference in the spiking domain: the SSTDP method achieves 1.3~37.7× fewer addition operations per inference. The code is available at: https://github.com/MXHX7199/SNN-SSTDP.
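Temporal coding with IF neurons, as used in SSTDP, encodes information in first-spike times rather than rates. The toy sketch below assumes single-spike inputs on a discrete time grid and is not the paper's implementation:

```python
def if_first_spike_time(input_times, weights, threshold=1.0, t_max=100):
    """Integrate-and-Fire neuron under temporal coding.

    Each input fires once, at input_times[i]; the membrane potential is the
    sum of weights of inputs that have already fired. The output code is the
    first time step at which the threshold is crossed; t_max is returned if
    the neuron never spikes. All parameters are illustrative.
    """
    for t in range(t_max):
        # accumulate contributions from inputs that have fired by time t
        v = sum(w for ti, w in zip(input_times, weights) if ti <= t)
        if v >= threshold:
            return t  # earlier spike time = stronger evidence
    return t_max
```

With this coding, an earlier output spike corresponds to a stronger match between inputs and weights, which is what makes a spike-time-based error signal usable for gradient-style training.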


2021 ◽  
Author(s):  
Upasana Sahu ◽  
Kushaagra Goyal ◽  
Debanjan Bhowmik

We trained a spiking neural network (SNN) using spike-time-dependent plasticity (STDP)-enabled learning under two different learning schemes on the MNIST data set (handwritten digit recognition). We showed that, for larger data sets, the number of post-neurons must far exceed the number of output classes for the SNN to reach reasonably high accuracy. We also report the net energy consumed for learning in the spintronic devices and associated transistor-based circuits that provide synaptic functionality for this SNN.



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Samaneh Alsadat Saeedinia ◽  
Mohammad Reza Jahed-Motlagh ◽  
Abbas Tafakhori ◽  
Nikola Kasabov

Abstract This paper proposes a novel method and algorithms for the design of MRI-structured personalized 3D spiking neural network models (MRI-SNN) for better analysis, modeling, and prediction of EEG signals. It proposes a novel gradient-descent learning algorithm integrated with a spike-time-dependent plasticity algorithm. The models capture informative personal patterns of interaction between EEG channels, in contrast to single-EEG-signal modeling methods or to spike-based approaches that do not use personal MRI data to pre-structure a model. The proposed models can not only learn and accurately model measured EEG data, but can also predict signals at 3D model locations corresponding to non-monitored brain areas, e.g. other EEG channels from which data have not been collected. This is the first study in this respect. As an illustration of the method, personalized MRI-SNN models are created and tested on EEG data from two subjects. The models yield better prediction accuracy and a better understanding of the personalized EEG signals than traditional methods, owing to the integration of MRI and EEG information. The models are interpretable and facilitate a better understanding of related brain processes. This approach can be applied to personalized modeling, analysis, and prediction of EEG signals across brain studies, such as the study and prediction of epilepsy, peri-perceptual brain activities, brain-computer interfaces, and others.


2021 ◽  
Vol 17 (5) ◽  
pp. e1008958
Author(s):  
Alan Eric Akil ◽  
Robert Rosenbaum ◽  
Krešimir Josić

The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance and, in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and their structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space, with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
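Pair-based STDP of the kind analyzed in such theories is commonly implemented online with exponentially decaying pre- and post-synaptic spike traces. A minimal single-synapse sketch with illustrative constants (not the authors' balanced-network formulation):

```python
import math

def trace_stdp(pre_spikes, post_spikes, w0=0.5, a_plus=0.01, a_minus=0.012,
               tau=20.0, dt=1.0, t_end=100.0):
    """Online trace-based pair STDP for one synapse.

    Each neuron keeps an exponentially decaying trace of its recent spikes.
    A post spike potentiates the weight by a_plus * x_pre (pre-before-post),
    and a pre spike depresses it by a_minus * x_post (post-before-pre).
    Spike times are given as sets of multiples of dt; constants are
    illustrative placeholders.
    """
    x_pre = x_post = 0.0
    w = w0
    decay = math.exp(-dt / tau)
    t = 0.0
    while t <= t_end:
        x_pre *= decay
        x_post *= decay
        if t in pre_spikes:
            w -= a_minus * x_post  # LTD: post trace present when pre fires
            x_pre += 1.0
        if t in post_spikes:
            w += a_plus * x_pre    # LTP: pre trace present when post fires
            x_post += 1.0
        t += dt
    return w
```

A pre spike at 10 ms followed by a post spike at 15 ms yields net potentiation; the reverse ordering yields net depression, reproducing the asymmetric window without storing full spike histories.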


2021 ◽  
Author(s):  
Mouna Elhamdaoui ◽  
Faten Ouaja Rziga ◽  
Khaoula Mbarek ◽  
Kamel Besbes

Abstract Spike Time-Dependent Plasticity (STDP) represents an essential learning rule found in biological synapses which is recommended for replication in neuromorphic electronic systems. The rule updates a synaptic weight according to the time difference between the pre- and post-synaptic spikes. It is well known that pre-synaptic activity preceding post-synaptic activity may induce long-term potentiation (LTP), whereas the reverse case induces long-term depression (LTD). Memristors, which are two-terminal memory devices, are excellent candidates for implementing such a mechanism due to their distinctive characteristics. In this article, we analyze the fundamental characteristics of three of the best-known memristor models and then simulate them to mimic the plasticity rule of biological synapses. The tested models are the linear ion drift model (HP), the Voltage ThrEshold Adaptive Memristor (VTEAM) model, and the Enhanced Generalized Memristor (EGM) model. We compare the I-V characteristics of these models with an experimental memristive device based on Ta2O5. We simulate and validate the STDP Hebbian learning algorithm, proving the capability of each model to reproduce the conductance change for the LTP and LTD functions. Our simulation results thus identify the most suitable model to operate as a synapse component in neuromorphic circuits.
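Of the three models, the HP linear ion drift model is the simplest to state: the memristance interpolates between R_on and R_off according to a state variable driven by the current. A forward-Euler sketch with typical textbook parameter values, not fitted to the Ta2O5 device:

```python
def hp_memristor(v_of_t, dt=1e-4, r_on=100.0, r_off=16e3, d=10e-9,
                 mu_v=1e-14, x0=0.5):
    """Linear ion drift (HP) memristor model, forward-Euler integrated.

    x in [0, 1] is the normalized width of the doped region; memristance is
    M = r_on*x + r_off*(1-x), and dx/dt = mu_v*r_on/d**2 * i (linear drift).
    A hard window clamps x to [0, 1]. Parameter values are common textbook
    numbers, not fitted to any particular device.
    """
    x = x0
    currents = []
    for v in v_of_t:
        m = r_on * x + r_off * (1.0 - x)   # instantaneous memristance
        i = v / m
        x += mu_v * r_on / d**2 * i * dt   # linear ion drift
        x = min(max(x, 0.0), 1.0)          # hard window function
        currents.append(i)
    return currents, x
```

Driving the device with a constant positive voltage grows the doped region, so the memristance falls and the current rises over time, which is the history-dependent conductance that STDP circuits exploit.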


2021 ◽  
Vol 15 ◽  
Author(s):  
Paolo G. Cachi ◽  
Sebastián Ventura ◽  
Krzysztof J. Cios

In this paper we present a Competitive Rate-Based Algorithm (CRBA) that approximates the operation of a Competitive Spiking Neural Network (CSNN). CRBA models the competition between neurons during a sample presentation, which can be reduced to ranking the neurons by a dot-product operation and using a discrete Expectation-Maximization algorithm; the latter is equivalent to the spike-time-dependent plasticity rule. CRBA's performance is compared with that of CSNN on the MNIST and Fashion-MNIST datasets. The results show that CRBA performs on par with CSNN while using three orders of magnitude less computational time. Importantly, we show that the weights and firing thresholds learned by CRBA can be used to initialize CSNN's parameters, resulting in much more efficient operation.
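The dot-product ranking at the heart of the competition can be sketched as a winner-take-all update. This toy version omits CRBA's EM-based weight update and learned firing thresholds, keeping only the ranking step:

```python
def crba_winner(weights, x, lr=0.1):
    """Winner-take-all sketch of dot-product competition.

    weights is a list of weight vectors (one per neuron), x is the input
    vector. The neuron whose weight vector has the largest dot product with
    x wins and moves toward the input. This is only the competitive ranking
    step, not the full CRBA algorithm.
    """
    scores = [sum(wi * xi for wi, xi in zip(w_row, x)) for w_row in weights]
    winner = scores.index(max(scores))
    # move only the winner toward the input (competitive learning)
    weights[winner] = [wi + lr * (xi - wi) for wi, xi in zip(weights[winner], x)]
    return winner, weights
```

Replacing per-spike competition with a single ranking per sample is what removes the time-stepped simulation and yields the reported orders-of-magnitude speedup.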


2021 ◽  
Author(s):  
Kaidi Shao ◽  
Juan F Ramirez Villegas ◽  
Nikos K Logothetis ◽  
Michel Besserve

During sleep, cortical network connectivity likely undergoes both synaptic potentiation and depression through system consolidation and homeostatic processes. However, how these modifications are coordinated across sleep stages remains largely unknown. Candidate mechanisms are Ponto-Geniculo-Occipital (PGO) waves, which propagate across several structures during Rapid Eye Movement (REM) sleep and the transitional stage from non-REM to REM sleep (pre-REM), and which exhibit sleep stage-specific dynamic patterns. To understand their impact on cortical plasticity, we built an acetylcholine-modulated neural mass model of PGO wave propagation through the pons, thalamus, and cortex, reproducing a broad range of electrophysiological characteristics across sleep stages. Using a population model of spike-time-dependent plasticity, we show that recurrent cortical circuits operate in different transient regimes depending on the sleep stage, with different impacts on plasticity. Specifically, this leads to the potentiation of cortico-cortical synapses during pre-REM, and to their depression during REM sleep. Overall, our results provide a new view of how transient sleep events and their associated sleep stage may implement precise control of system-wide plastic changes.


2021 ◽  
Author(s):  
Carlos Wert-Carvajal ◽  
Melissa Reneaux ◽  
Tatjana Tchumatchenko ◽  
Claudia Clopath

Abstract Dopamine and serotonin are important modulators of synaptic plasticity, and their action has been linked to our ability to learn positive or negative outcomes, i.e., valence learning. In the hippocampus, both neuromodulators affect long-term synaptic plasticity but play different roles in the encoding of uncertainty or predicted reward. Here, we examine the differential role of these modulators on learning speed and cognitive flexibility in a navigational model. We compare two reward-modulated spike-time-dependent plasticity (R-STDP) learning rules to describe the action of these neuromodulators. Our results show that the interplay of dopamine (DA) and serotonin (5-HT) improves overall learning performance and can explain experimentally reported differences in spatial task performance. Furthermore, this system allows us to make predictions regarding spatial reversal learning.
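A generic R-STDP rule gates a decaying eligibility trace by a scalar reward: raw STDP events accumulate in the trace, and the actual weight change is the trace times the reward, so the same spike pairing can potentiate or depress depending on reward sign. The sketch below is a textbook form with illustrative constants, not either of the paper's two DA/5-HT rules:

```python
def r_stdp_step(w, eligibility, stdp_event, reward, lr=0.05, tau_c=50.0, dt=1.0):
    """One discrete step of reward-modulated STDP (R-STDP).

    stdp_event is the raw (unmodulated) STDP weight change produced by the
    spike pairings in this time step; it feeds a leaky eligibility trace,
    and the weight only moves when a reward signal arrives to gate the
    trace. All constants are illustrative placeholders.
    """
    # leaky accumulation of raw STDP into the eligibility trace
    eligibility += stdp_event - eligibility * dt / tau_c
    # reward gates the trace into an actual weight change
    w += lr * reward * eligibility
    return w, eligibility
```

Because the trace decays with time constant tau_c, only spike pairings shortly before the reward influence the weight, which is how R-STDP solves the temporal credit assignment between activity and delayed outcomes.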

