Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks

Author(s):  
Jianhao Ding ◽  
Zhaofei Yu ◽  
Yonghong Tian ◽  
Tiejun Huang

Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural networks, have attracted great attention from researchers and industry. The most efficient way to train deep SNNs is through ANN-SNN conversion. However, the conversion usually suffers from accuracy loss and long inference time, which impede the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for optimal conversion. To better correlate the ANN and the SNN and obtain higher accuracy, we propose the Rate Norm Layer to replace the ReLU activation function in source ANN training, enabling direct conversion from a trained ANN to an SNN. Moreover, we propose an optimal fit curve to quantify the fit between the activation values of the source ANN and the actual firing rates of the target SNN. We show that inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN, achieving fast inference. Our theory explains existing work on fast inference and achieves better results. The experimental results show that the proposed method achieves near loss-less conversion with VGG-16, PreActResNet-18, and deeper structures. Moreover, it reaches 8.6× faster inference at 0.265× the energy consumption of the typical method. The code is available at https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
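The rate-coding correspondence that ANN-SNN conversion builds on can be illustrated with a minimal sketch. This is not the paper's Rate Norm Layer itself, only the standard observation it refines: an integrate-and-fire neuron with reset-by-subtraction, driven by a constant input for T timesteps, fires at a rate that approximates the ReLU activation clipped at the threshold.

```python
def if_neuron_rate(x, threshold=1.0, T=100):
    """Simulate a reset-by-subtraction integrate-and-fire neuron
    driven by constant input x for T timesteps; return its firing rate."""
    v = 0.0
    spikes = 0
    for _ in range(T):
        v += x
        if v >= threshold:
            v -= threshold  # soft reset keeps the residual charge
            spikes += 1
    return spikes / T

# Firing rate tracks the clipped ReLU activation min(max(x, 0), threshold):
for a in [0.0, 0.3, 0.7, 1.2]:
    rate = if_neuron_rate(a)
    target = min(max(a, 0.0), 1.0)
```

The gap between `rate` and `target` shrinks as T grows, which is exactly the accuracy/latency trade-off the abstract addresses: shortening T without losing the fit.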

2019 ◽  
Vol 64 (5) ◽  
pp. 519-528
Author(s):  
Yunhua Chen ◽  
Jin Du ◽  
Qian Liu ◽  
Ling Zhang ◽  
Yanjun Zeng

Abstract To improve the robustness and reduce the energy consumption of facial expression recognition, this study proposes a facial expression recognition method based on improved deep residual networks (ResNets). Residual learning has solved the degradation problem of deep Convolutional Neural Networks (CNNs); therefore, in theory, a ResNet can consist of an arbitrarily large number of neural layers. On the one hand, ResNets achieve better performance on artificial intelligence (AI) tasks thanks to their deeper network structure; on the other hand, they face a severe energy-consumption problem, especially on mobile devices. Hence, this study employs a novel activation function, the Noisy Softplus (NSP), in place of rectified linear units (ReLU) to obtain improved ResNets. NSP is a biologically plausible activation function first proposed for training Spiking Neural Networks (SNNs); thus, NSP-trained models can be directly implemented on ultra-low-power neuromorphic hardware. We built an 18-layer ResNet using NSP to perform facial expression recognition across the Cohn-Kanade (CK+), Karolinska Directed Emotional Faces (KDEF) and GENKI-4K datasets. The resulting models achieved better noise robustness than ReLU-based ResNets and showed low energy consumption when running on neuromorphic hardware. This study not only contributes a solution for robust facial expression recognition, but also consolidates the low energy cost of its implementation on neuromorphic devices, which could pave the way for high-performance, noise-robust and energy-efficient vision applications on mobile hardware.


2021 ◽  
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular, mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results which show that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers, and can even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing-rate regime, where they could not simply emulate artificial neural networks by encoding continuous values through firing rates. Thus, we propose here a new paradigm for symbolic computation in neural networks that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are the basis for drastically more energy-efficient computer hardware, namely neuromorphic hardware. Hence, our results can be seen as creating a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.


2021 ◽  
pp. 182-194
Author(s):  
Yihao Luo ◽  
Min Xu ◽  
Caihong Yuan ◽  
Xiang Cao ◽  
Liangqi Zhang ◽  
...  
