SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training

2021 · Vol 15
Author(s): Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Tao Yang, ...

Spiking Neural Networks (SNNs) could empower low-power, event-driven neuromorphic hardware thanks to their spatio-temporal information processing capability and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not yet as accurate. Error backpropagation is the most common method for directly training neural networks and has driven the success of ANNs across deep learning. However, because the signals transmitted in an SNN are non-differentiable discrete binary spike events, the spike-based activation function prevents gradient-based optimization algorithms from being applied directly, leading to a performance gap (i.e., in accuracy and latency) between SNNs and ANNs. This paper introduces a new learning algorithm, called SSTDP, which bridges backpropagation (BP)-based learning and spike-time-dependent plasticity (STDP)-based learning to train SNNs efficiently. The scheme combines the global optimization of BP with the efficient weight updates of STDP. It not only avoids the non-differentiable derivative in the BP process but also exploits the local feature extraction property of STDP. Consequently, our method lowers the likelihood of vanishing spikes during BP training and reduces the number of time steps, and hence network latency. In SSTDP, we employ temporal coding and use the Integrate-and-Fire (IF) neuron as the neuron model to provide considerable computational benefits. Our experiments show the effectiveness of the proposed SSTDP learning algorithm, achieving the best classification accuracy among SNNs trained with other learning methods: 99.3% on Caltech 101, 98.1% on MNIST, and 91.3% on CIFAR-10. It also surpasses the best inference accuracy of directly trained SNNs with 25~32× lower inference latency.
Moreover, we analyze event-based computations to demonstrate the efficacy of the SNN for inference in the spiking domain; the SSTDP method requires 1.3~37.7× fewer addition operations per inference. The code is available at: https://github.com/MXHX7199/SNN-SSTDP.
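The temporal coding described above can be sketched compactly: each input neuron fires at most once, and an IF output neuron emits its own first spike when its integrated weighted input crosses threshold. The following is a minimal illustrative sketch, not the paper's implementation; function name, threshold, and time horizon are assumptions.

```python
import numpy as np

def if_first_spike_times(weights, in_times, threshold=1.0, t_max=50):
    """Simulate one layer of Integrate-and-Fire neurons with
    time-to-first-spike coding. Each input fires once at in_times[i];
    each output integrates weighted input spikes and records the step
    at which its membrane first crosses threshold. Neurons that never
    fire are assigned t_max. All parameter values are illustrative."""
    n_out = weights.shape[0]
    v = np.zeros(n_out)                     # membrane potentials
    out_times = np.full(n_out, t_max, dtype=float)
    for t in range(t_max):
        spiking_inputs = (in_times == t)    # inputs firing at step t
        v += weights[:, spiking_inputs].sum(axis=1)
        crossed = (v >= threshold) & (out_times == t_max)
        out_times[crossed] = t              # record first spike only
    return out_times
```

In this coding, earlier output spikes correspond to stronger evidence, which is what allows inference to finish in few time steps.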

2021 · Vol 17 (5) · pp. e1008958
Author(s): Alan Eric Akil, Robert Rosenbaum, Krešimir Josić

The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space, with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
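A pairwise STDP rule of the kind analyzed in such theories can be written in a few lines. The exponential window below is the classic form; the amplitudes and time constants are common illustrative choices, not values from this paper.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Classic pairwise STDP kernel, dt = t_post - t_pre in ms.
    Pre-before-post pairings (dt > 0) potentiate the synapse;
    post-before-pre pairings (dt < 0) depress it. Amplitudes and
    time constants are illustrative defaults."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0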


2008 · Vol 18 (12) · pp. 3611-3624
Author(s): H. L. WEI, S. A. BILLINGS

Particle swarm optimization (PSO) is introduced to implement a new constructive learning algorithm for training generalized cellular neural networks (GCNNs) for the identification of spatio-temporal evolutionary (STE) systems. The basic idea of the new PSO-based learning algorithm is to successively approximate the desired signal by progressively pursuing relevant orthogonal projections. This new algorithm will thus be referred to as the orthogonal projection pursuit (OPP) algorithm, which is similar in mechanism to the conventional projection pursuit approach. A novel two-stage hybrid training scheme is proposed for constructing a parsimonious GCNN model. In the first stage, the orthogonal projection pursuit algorithm is applied to adaptively and successively augment the network, where the adjustable parameters of the associated units are optimized using a particle swarm optimizer. The network model produced at the first stage may be redundant. In the second stage, a forward orthogonal regression (FOR) algorithm, aided by mutual information estimation, is applied to refine and improve the initially trained network. The effectiveness and performance of the proposed method are validated by applying the new modeling framework to a spatio-temporal evolutionary system identification problem.
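The inner optimizer used at the first stage, a particle swarm optimizer, can be sketched in its standard global-best form. This is a generic minimal sketch; hyperparameter values, bounds, and the function name are assumptions, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer (global-best topology).
    Each particle tracks its personal best; velocities blend inertia,
    attraction to the personal best, and attraction to the swarm best.
    Hyperparameters are common defaults, chosen for illustration."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # keep particles in bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

In the OPP setting, `f` would measure the residual left after projecting the target signal onto the candidate unit's output, so each PSO run fits one new unit.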


1999 · Vol 11 (5) · pp. 1069-1077
Author(s): Danilo P. Mandic, Jonathon A. Chambers

A relationship is provided between the learning rate η of the learning algorithm and the slope β of the nonlinear activation function for a class of recurrent neural networks (RNNs) trained by the real-time recurrent learning algorithm. It is shown that an arbitrary RNN can be obtained from a referent RNN by imposing deterministic rules on its weights and learning rate. Such relationships reduce the number of degrees of freedom when solving the nonlinear optimization task of finding the optimal RNN parameters.
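The flavor of such a slope–learning-rate relationship can be seen in a toy single-neuron case (a simplification, not the paper's full RNN result): a neuron with activation slope β trained at rate η follows the same trajectory as a referent slope-1 neuron whose weight is scaled by β and whose learning rate is ηβ². All numbers below are illustrative.

```python
import numpy as np

def train_neuron(beta, eta, w0=0.2, x=0.5, target=0.3, steps=100):
    """Gradient descent on a single neuron y = tanh(beta * w * x)
    with squared loss 0.5 * (y - target)**2. A toy stand-in for the
    RNN setting; all values here are illustrative."""
    w = w0
    for _ in range(steps):
        y = np.tanh(beta * w * x)
        w -= eta * (y - target) * (1 - y**2) * beta * x  # dL/dw
    return w

beta, eta = 2.0, 0.1
w_beta = train_neuron(beta, eta)                          # slope-beta neuron
w_ref = train_neuron(1.0, eta * beta**2, w0=beta * 0.2)   # referent, slope 1
# The referent weight stays equal to beta times the slope-beta weight
# throughout training, so the slope is a redundant degree of freedom.
```

Because the two trajectories coincide exactly, one of the two parameters (η or β) can be fixed without loss of generality, which is the practical payoff of such relationships.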


2019
Author(s): D. Gabrieli, Samantha N. Schumm, B. Parvesse, D.F. Meaney

Abstract: Traumatic brain injury (TBI) can lead to neurodegeneration in the injured circuitry, either through primary structural damage to the neuron or secondary effects that disrupt key cellular processes. Moreover, traumatic injuries can preferentially impact subpopulations of neurons, but the functional network effects of these targeted degeneration profiles remain unclear. Although isolating the consequences of complex injury dynamics and long-term recovery of the circuit can be difficult to control experimentally, computational networks can be a powerful tool to analyze the consequences of injury. Here, we use the Izhikevich spiking neuron model to create networks representative of cortical tissue. After an initial settling period with spike-timing-dependent plasticity (STDP), networks developed rhythmic oscillations similar to those seen in vivo. As neurons were sequentially removed from the network, population activity rate and oscillation dynamics were significantly reduced. In a successive period of network restructuring with STDP, network activity levels returned to baseline for some injury levels, and oscillation dynamics significantly improved. We next explored the role that specific neurons play in the creation and termination of oscillations. We determined that oscillations initiate from activation of low-firing-rate neurons with limited structural inputs. To terminate oscillations, high-activity excitatory neurons with strong input connectivity activate downstream inhibitory circuitry. Finally, we confirm the role of this excitatory neuron population through targeted neurodegeneration. These results suggest that targeted neurodegeneration can play a key role in oscillation dynamics after injury.
Author Summary: In this study, we examine the impact of neuronal degeneration, a process that commonly occurs after traumatic injury and neurodegenerative disease, on the dynamics of a cortical network. We create computational models of neural networks and include spike-timing-dependent plasticity to alter synaptic strengths as networks remodel after simulated injury. We find that spike-timing-dependent plasticity helps recover the neural dynamics of an injured microcircuit, but it frequently cannot restore the original oscillation dynamics of the uninjured network. In addition, we find that selectively injuring the excitatory neurons with the highest firing rates reduces neuronal oscillations much more than either random deletion or removal of the neurons with the lowest firing rates. In all, these data suggest that (a) plasticity reduces the consequences of neurodegeneration and (b) losing the most active neurons in the network has the most adverse effect on neural oscillations.
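The building block of such networks, the Izhikevich neuron, can be simulated in a few lines. The parameters below are the standard regular-spiking set from Izhikevich's original two-variable model, not necessarily the values used in this study.

```python
def izhikevich(I, T=200.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler simulation of an Izhikevich neuron driven by constant
    input current I. v is the membrane potential, u a recovery
    variable; when v crosses 30 mV a spike is recorded and v, u are
    reset. Parameters a, b, c, d are the standard regular-spiking
    set; returns the list of spike times in ms."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: reset membrane and recovery
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes
```

With no input current the neuron settles to rest, while a sustained suprathreshold current produces tonic firing, the regime in which the networks above develop their rhythmic oscillations.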

