Pruning of Deep Spiking Neural Networks through Gradient Rewiring

Author(s):  
Yanqi Chen ◽  
Zhaofei Yu ◽  
Wei Fang ◽  
Tiejun Huang ◽  
Yonghong Tian

Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips. As these chips are usually resource-constrained, compressing SNNs is crucial on the road to their practical use. Most existing methods directly apply pruning approaches developed for artificial neural networks (ANNs) to SNNs, ignoring the differences between ANNs and SNNs and thereby limiting the performance of the pruned SNNs; moreover, these methods are suitable only for shallow SNNs. In this paper, inspired by synaptogenesis and synapse elimination in the nervous system, we propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs that enables us to seamlessly optimize network structure without retraining. Our key innovation is to redefine the gradient with respect to a new synaptic parameter, allowing better exploration of network structures by taking full advantage of the competition between pruning and regrowth of connections. Experimental results show that the proposed method achieves the smallest performance loss of SNNs on the MNIST and CIFAR-10 datasets reported so far. Moreover, it incurs only a ~3.5% accuracy loss at an unprecedented 0.73% connectivity, which reveals a remarkable structure-refining capability in SNNs. Our work suggests that extremely high redundancy exists in deep SNNs. Our code is available at https://github.com/Yanqi-Chen/Gradient-Rewiring.
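
The abstract's core idea, redefining the gradient of a hidden synaptic parameter so that pruned connections can still receive gradient and regrow, can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration assuming the reparameterization w = sign * ReLU(theta) described in the paper; the function names, the learning rate, and the L1-style sparsity term `alpha` are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch

def effective_weight(theta, sign):
    # Pruned connections (theta <= 0) contribute exactly zero weight.
    return sign * torch.relu(theta)

def grad_rewire_step(theta, sign, grad_w, lr=1e-3, alpha=1e-4):
    """One hypothetical Grad R-style update on the hidden synaptic
    parameter theta.

    theta:  hidden synaptic parameter (torch.Tensor)
    sign:   fixed connection sign in {-1, +1}, sampled at initialization
    grad_w: gradient of the loss w.r.t. the effective weight w
    alpha:  assumed sparsity-inducing decay (L1-style pull toward 0)
    """
    # Redefined gradient: pass grad_w straight through to theta with no
    # Heaviside mask, so even pruned synapses keep receiving updates.
    grad_theta = sign * grad_w + alpha * torch.sign(theta)
    return theta - lr * grad_theta
```

Because the redefined gradient omits the Heaviside factor ReLU'(theta), a pruned connection (theta <= 0) keeps accumulating gradient pressure and regrows as soon as theta crosses zero again, which is the competition between pruning and regrowth the abstract refers to.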

2021 ◽  
pp. 1-13
Author(s):  
Qiugang Zhan ◽  
Guisong Liu ◽  
Xiurui Xie ◽  
Guolin Sun ◽  
Huajin Tang

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Xianghong Lin ◽  
Mengwei Zhang ◽  
Xiangwen Wang

As a new brain-inspired computational model of artificial neural networks, spiking neural networks transmit and process information via precisely timed spike trains. Constructing efficient learning methods is a significant research field for spiking neural networks. In this paper, we present a supervised learning algorithm for multilayer feedforward spiking neural networks in which neurons in all layers can fire multiple spikes. The feedforward network consists of spiking neurons governed by a biologically plausible long-term-memory spike response model, in which the effect of earlier spikes on refractoriness is not neglected, so that adaptation effects are incorporated. The gradient descent method is employed to derive the synaptic weight update rule for learning spike trains. The proposed algorithm is tested and verified on spatiotemporal pattern learning problems, including a set of spike train learning tasks and nonlinear pattern classification problems on four UCI datasets. Simulation results indicate that the proposed algorithm improves learning accuracy in comparison with other supervised learning algorithms.
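
To make the neuron model concrete, here is a minimal NumPy sketch of a spike response model membrane potential whose refractory term sums over all earlier output spikes rather than only the most recent one, which is the "long-term memory" property the abstract highlights. The kernel shapes and time constants are illustrative assumptions; the paper's exact kernels and update rule may differ.

```python
import numpy as np

def psp_kernel(s, tau=4.0):
    """Causal postsynaptic potential kernel eps(s); zero for s <= 0."""
    s = np.maximum(s, 0.0)
    return (s / tau) * np.exp(1.0 - s / tau)

def refractory_kernel(s, eta0=-5.0, tau_r=8.0):
    """Refractory kernel eta(s), applied to EVERY earlier spike of the
    neuron itself, so adaptation accumulates over the whole spike train."""
    return np.where(s > 0, eta0 * np.exp(-np.maximum(s, 0.0) / tau_r), 0.0)

def membrane_potential(t, w, presyn_spikes, own_spikes):
    """SRM membrane potential of one neuron at time t.

    w:             synaptic weights, shape (n_pre,)
    presyn_spikes: list of spike-time arrays, one per presynaptic neuron
    own_spikes:    this neuron's earlier spike times
    """
    u = sum(w[j] * psp_kernel(t - np.asarray(tj)).sum()
            for j, tj in enumerate(presyn_spikes))
    u += refractory_kernel(t - np.asarray(own_spikes)).sum()
    return u
```

A spike is emitted whenever u crosses the firing threshold from below; gradient descent on a spike-train error then yields a weight update through the dependence of the output spike times on w.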


2013 ◽  
Vol 347-350 ◽  
pp. 2270-2274
Author(s):  
Dai Yuan Zhang

A new kind of shape control learning algorithm (SCLA) for training neural networks is proposed. We use a rational cubic spline (with quadratic denominator) to implement a new neural system for shape control and construct a new kind of artificial neural network based on given patterns. The shape can be controlled by shape parameters, which distinguishes the method from known algorithms for training neural networks. Numerical experiments indicate that the new method proposed in this paper yields good results.
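
The abstract does not spell out the spline form, but one standard rational cubic with quadratic denominator is the Delbourgo-Gregory interpolant, sketched below as a plausible building block; the shape parameter r here plays the role of the abstract's shape parameters, and all names and defaults are illustrative assumptions.

```python
import numpy as np

def rational_cubic(t, ti, ti1, fi, fi1, di, di1, r=3.0):
    """Delbourgo-Gregory rational cubic segment on [ti, ti1]: cubic
    numerator over a quadratic denominator.  fi, fi1 are the values and
    di, di1 the derivatives at the knots; r > -1 keeps the denominator
    positive.  r = 3 recovers the ordinary cubic Hermite segment, and
    r -> inf pulls the segment toward the straight line between the
    knots, which is how the shape parameter "tightens" the curve.
    """
    h = ti1 - ti
    u = (t - ti) / h                        # local parameter in [0, 1]
    num = (fi * (1 - u)**3
           + (r * fi + h * di) * u * (1 - u)**2
           + (r * fi1 - h * di1) * u**2 * (1 - u)
           + fi1 * u**3)
    den = 1.0 + (r - 3.0) * u * (1 - u)     # quadratic denominator
    return num / den
```

Training such a network then means fitting the knot values and derivatives to the given patterns while tuning r to control the curve's shape, rather than adjusting weights of fixed sigmoidal units.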


2013 ◽  
Vol 25 (2) ◽  
pp. 473-509 ◽  
Author(s):  
Ioana Sporea ◽  
André Grüning

We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.
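
As a hedged illustration of the kind of spike-train error signal such multilayer rules build on, the NumPy sketch below computes a ReSuMe-style weight change for a single synapse: the difference between desired and actual output spike trains, correlated with an exponentially decaying trace of presynaptic activity. This is a common rule for the spike-train learning setting the abstract describes, not necessarily the paper's exact update; the constants and names are illustrative.

```python
import numpy as np

def resume_like_delta(s_desired, s_actual, s_pre, a=0.01, tau=5.0, dt=1.0):
    """ReSuMe-style weight change for one synapse over one trial.

    s_desired, s_actual, s_pre: binary spike trains (0/1 per time step)
    a:   non-Hebbian term applied to every error spike
    tau: time constant of the presynaptic eligibility trace
    """
    trace = 0.0          # exponential trace of presynaptic spikes
    dw = 0.0
    for t in range(len(s_desired)):
        trace = trace * np.exp(-dt / tau) + s_pre[t]
        err = s_desired[t] - s_actual[t]   # +1 missing spike, -1 extra spike
        dw += err * (a + trace) * dt       # correlate error with the trace
    return dw
```

A missing output spike (err = +1) strengthens synapses whose presynaptic neuron fired shortly before it, while an extra spike (err = -1) weakens them; extending this to hidden layers requires the linearizable neuron model mentioned in the abstract.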

