An Improved Supervised Learning Algorithm Using Triplet-Based Spike-Timing-Dependent Plasticity

Author(s):  
Xianghong Lin ◽  
Guojun Chen ◽  
Xiangwen Wang ◽  
Huifang Ma
2021 ◽  
Vol 11 (5) ◽  
pp. 2059
Author(s):  
Sungmin Hwang ◽  
Hyungjin Kim ◽  
Byung-Gook Park

Hardware-based spiking neural networks (SNNs) have attracted many researchers' attention due to their energy efficiency. When implementing a hardware-based SNN, offline training is most commonly used, in which weights trained by a software-based artificial neural network (ANN) are transferred to synaptic devices. However, mapping all the synaptic weights becomes time-consuming as the scale of the neural network increases. In this paper, we propose a method for quantized weight transfer using spike-timing-dependent plasticity (STDP) for a hardware-based SNN. STDP is an online learning algorithm for SNNs, but we utilize it here as the weight transfer method. First, we train an SNN on the Modified National Institute of Standards and Technology (MNIST) dataset and perform weight quantization. Next, the quantized weights are mapped to the synaptic devices using STDP, by which all the synaptic weights connected to a neuron are transferred simultaneously, reducing the number of pulse steps. The performance of the proposed method is confirmed: accuracy degrades little above a certain level of quantization, while the number of pulse steps required for weight transfer decreases substantially. In addition, the effect of device variation is verified.
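The quantization step described above can be sketched as follows. This is a minimal illustration assuming a uniform quantizer over the trained weight range; the paper's exact quantization scheme and pulse protocol are not specified here, and `quantize_weights` is a hypothetical helper name.

```python
def quantize_weights(weights, n_levels):
    """Uniformly quantize a list of float weights onto n_levels discrete values.

    Illustrative sketch only: the paper's actual quantizer and the mapping of
    levels to device pulse steps may differ.
    """
    w_min, w_max = min(weights), max(weights)
    if n_levels < 2 or w_max == w_min:
        return [w_min] * len(weights)
    step = (w_max - w_min) / (n_levels - 1)
    # Snap each weight to the nearest of the n_levels conductance levels.
    return [w_min + round((w - w_min) / step) * step for w in weights]
```

With few levels, many synapses share the same target conductance, which is what allows all weights connected to a neuron to be programmed together with a small number of pulse steps.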


2021 ◽  
Vol 15 ◽  
Author(s):  
Biswadeep Chakraborty ◽  
Saibal Mukhopadhyay

A spiking neural network (SNN) can be trained with spike-timing-dependent plasticity (STDP), a neuro-inspired unsupervised learning method used in various machine learning applications. This paper studies the generalizability properties of the STDP learning process using the Hausdorff dimension of the trajectories of the learning algorithm. The paper analyzes the effects of STDP learning models and their associated hyperparameters on the generalizability properties of an SNN. The analysis is then used to develop a Bayesian optimization approach that tunes the hyperparameters of an STDP model to improve the generalizability of an SNN.


2006 ◽  
Vol 18 (6) ◽  
pp. 1318-1348 ◽  
Author(s):  
Jean-Pascal Pfister ◽  
Taro Toyoizumi ◽  
David Barber ◽  
Wulfram Gerstner

In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.
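The paper's central finding, that the potentiation window directly reflects the EPSP time course, can be sketched as follows. The double-exponential kernel and its time constants are illustrative assumptions, not the paper's exact formulation, and `potentiation` is a hypothetical helper name.

```python
import math

def epsp_kernel(s, tau_m=10.0, tau_s=2.5):
    """Double-exponential EPSP time course (illustrative time constants, ms)."""
    if s < 0:
        return 0.0
    return math.exp(-s / tau_m) - math.exp(-s / tau_s)

def potentiation(t_pre, t_post_desired, lr=0.1):
    """Potentiation proportional to the EPSP evaluated at the desired
    post-minus-pre delay, echoing the result that the potentiation window
    mirrors the EPSP shape. Sketch only; the paper derives this by gradient
    ascent on the likelihood of firing at the desired times."""
    return lr * epsp_kernel(t_post_desired - t_pre)
```

Note that this sketch says nothing about depression for reversed timing, consistent with the paper's observation that depression depends on how constraints enter the optimization.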


2019 ◽  
Vol 13 ◽  
Author(s):  
Zhenshan Bing ◽  
Ivan Baumann ◽  
Zhuangyi Jiang ◽  
Kai Huang ◽  
Caixia Cai ◽  
...  

2019 ◽  
Vol 9 (4) ◽  
pp. 283-291 ◽  
Author(s):  
Sou Nobukawa ◽  
Haruhiko Nishimura ◽  
Teruya Yamanishi

Abstract Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been attracting attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. We therefore studied the pattern classification ability of a network in which recurrent spiking neural networks are combined with STDP for unsupervised learning, with an output layer trained by DA-STDP for supervised learning. We confirmed that this network could perform pattern classification by using the STDP effect to emphasize features of the input spike pattern together with DA-STDP supervised learning. Our proposed spiking neural network may therefore prove a useful approach for machine learning problems.
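The reward-modulated half of such a network is commonly implemented with an eligibility trace that is gated by a dopamine signal. The sketch below shows one update step under that assumption; the time constant, learning rate, and function name are illustrative, not taken from the paper.

```python
def da_stdp_step(eligibility, stdp_dw, dopamine, dt=1.0, tau_e=200.0, lr=0.1):
    """One step of reward-modulated (DA-)STDP, as an illustrative sketch.

    The raw STDP change feeds a slowly decaying eligibility trace; the actual
    weight update is the trace gated by the dopamine (reward) signal, so
    updates only occur when reward is delivered.
    """
    eligibility += -eligibility * (dt / tau_e) + stdp_dw
    dw = lr * dopamine * eligibility
    return eligibility, dw
```

In the paper's architecture, the recurrent layer would use plain (unmodulated) STDP, while only the output layer's synapses would be updated through a reward-gated rule of this kind.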

