POSTER: Bridge the Gap Between Neural Networks and Neuromorphic Hardware

Author(s):  
Yu Ji ◽  
YouHui Zhang ◽  
WenGuang Chen ◽  
Yuan Xie

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3240
Author(s):  
Tehreem Syed ◽  
Vijay Kakani ◽  
Xuenan Cui ◽  
Hakil Kim

In recent times, the use of modern neuromorphic hardware for brain-inspired SNNs has grown rapidly. For sparse input data, event-based neuromorphic hardware offers low power consumption, particularly in the deeper layers. However, training deep spiking models is still considered a tedious task. Various ANN-to-SNN conversion methods have been proposed in the literature to train deep SNN models; nevertheless, these methods require hundreds to thousands of time-steps and still cannot attain good SNN performance. This work proposes customized model architectures (VGG, ResNet) for training deep convolutional spiking neural networks. In this study, training is carried out with surrogate gradient descent backpropagation in a customized layer architecture similar to that of deep artificial neural networks. Moreover, this work also proposes training SNNs with fewer time-steps under surrogate gradient descent. Overfitting problems were encountered during training with surrogate gradient descent backpropagation; to overcome them, this work refines an SNN-based dropout technique for surrogate gradient descent. The proposed customized SNN models achieve good classification results on both private and public datasets. Several experiments were carried out on an embedded platform (NVIDIA Jetson TX2 board), where the deployment of the customized SNN models was extensively evaluated. Performance was validated in terms of processing time and inference accuracy between PC and embedded platforms, showing that the proposed customized models and training techniques are feasible for achieving better performance on various datasets such as CIFAR-10, MNIST, SVHN, and the private KITTI and Korean license-plate datasets.
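
As a rough illustration of surrogate-gradient training of the kind described above, the following PyTorch sketch replaces the non-differentiable spike function with a fast-sigmoid surrogate in the backward pass. The class names, the slope `alpha`, and the soft-reset LIF dynamics are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of surrogate-gradient spiking activation (assumed names).
import torch


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate
    derivative in the backward pass."""

    alpha = 10.0  # assumed surrogate slope

    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0).float()  # spike where membrane exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Derivative of a fast sigmoid: 1 / (alpha * |v| + 1)^2
        surrogate = 1.0 / (SurrogateSpike.alpha * membrane.abs() + 1.0) ** 2
        return grad_output * surrogate


def lif_forward(inputs, weight, beta=0.9, threshold=1.0):
    """Unroll a leaky integrate-and-fire layer over time.

    inputs: (time_steps, batch, in_features); returns stacked binary spikes.
    """
    v = torch.zeros(inputs.shape[1], weight.shape[0])
    spikes = []
    for x_t in inputs:
        v = beta * v + x_t @ weight.t()       # leaky integration of input current
        s = SurrogateSpike.apply(v - threshold)
        v = v - s * threshold                 # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)
```

Because every operation in `lif_forward` is differentiable (with the surrogate standing in for the spike), standard backpropagation through time can train the weights end-to-end.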


2021 ◽  
Vol 11 (2) ◽  
pp. 23
Author(s):  
Duy-Anh Nguyen ◽  
Xuan-Tu Tran ◽  
Francesca Iacopi

Deep Learning (DL) has contributed to the success of many applications in recent years, ranging from simple tasks such as recognizing tiny images or simple speech patterns to highly complex ones such as playing the game of Go. However, this superior performance comes at a high computational cost, which makes porting DL applications to conventional hardware platforms a challenging task. Many approaches have been investigated, and the Spiking Neural Network (SNN) is one of the promising candidates. SNNs are the third generation of Artificial Neural Networks (ANNs), in which each neuron in the network uses discrete spikes to communicate in an event-based manner. SNNs have the potential to achieve better energy efficiency than their ANN counterparts. While SNN models generally incur a loss of accuracy, new algorithms have helped to close the gap. For hardware implementations, SNNs have attracted much attention in the neuromorphic hardware research community. In this work, we review the basic background of SNNs, the current state and challenges of SNN training algorithms, and current implementations of SNNs on various hardware platforms.
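
To make the event-based communication concrete, here is a toy Python sketch of one spiking layer that performs synaptic work only when an input spike arrives; the function name, leak constant, and reset rule are assumptions for illustration, and the sparsity of events is the source of the potential energy advantage the review discusses.

```python
# Toy event-driven layer: synaptic updates happen only on incoming spikes.
import numpy as np


def event_driven_layer(spike_events, weights, threshold=1.0, leak=0.95, steps=100):
    """Propagate (time_step, presynaptic_index) spike events through one layer.

    weights: (n_pre, n_out) synaptic matrix; returns output spike events.
    """
    v = np.zeros(weights.shape[1])
    out_events = []
    events_by_t = {}
    for t, pre in spike_events:
        events_by_t.setdefault(t, []).append(pre)
    for t in range(steps):
        v *= leak                              # passive membrane decay
        for pre in events_by_t.get(t, []):     # accumulate only on events
            v += weights[pre]                  # one row of synaptic weights
        fired = np.nonzero(v >= threshold)[0]
        for j in fired:
            out_events.append((t, int(j)))
        v[fired] = 0.0                         # reset neurons that fired
    return out_events


# Example: two input neurons, three outputs, a handful of input spikes.
w = np.array([[0.6, 0.2, 1.1], [0.5, 0.9, 0.0]])
print(event_driven_layer([(2, 0), (3, 1), (3, 0)], w))
```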


2021 ◽  
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular, mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results showing that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers, and can even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing-rate regime, where they cannot simply emulate artificial neural networks by encoding continuous values through firing rates. Thus, we propose a new paradigm for symbolic computation in neural networks that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are the basis for drastically more energy-efficient computer hardware: neuromorphic hardware. Hence, our results can be seen as building a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.
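
As a back-of-the-envelope illustration (not from the paper) of why a low firing-rate regime precludes simple rate coding, the following toy Python snippet decodes a continuous value from a Bernoulli spike train at a high and a low rate; the window length and rate scales are arbitrary assumptions.

```python
# Toy demonstration: rate-decoding a continuous value degrades at low rates.
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.73          # continuous activation to be communicated
window = 50                # time-steps available for decoding

for rate_scale in (1.0, 0.1):                # high- vs. low-rate regime
    p = true_value * rate_scale              # per-step spike probability
    spikes = rng.random(window) < p          # Bernoulli spike train
    estimate = spikes.mean() / rate_scale    # decoded value from the rate
    print(f"rate_scale={rate_scale}: estimate={estimate:.2f}")

# In the low-rate regime the window contains only a handful of spikes, so
# the decoded value has high variance; such networks must therefore rely
# on mechanisms other than rate codes for reliable symbolic computation.
```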


2015 ◽  
Vol 59 (2) ◽  
pp. 1-5 ◽  
Author(s):  
Juncheng Shen ◽  
De Ma ◽  
Zonghua Gu ◽  
Ming Zhang ◽  
Xiaolei Zhu ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Youngeun Kim ◽  
Priyadarshini Panda

Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning owing to their sparse, asynchronous, and binary event- (or spike-) driven processing, which can yield huge energy-efficiency benefits on neuromorphic hardware. However, SNNs convey temporally varying spike activations over time, which is likely to induce large variations in forward activations and backward gradients, resulting in unstable training. To address this training issue in SNNs, we revisit Batch Normalization (BN) and propose a temporal Batch Normalization Through Time (BNTT) technique. Different from previous BN techniques for SNNs, we find that varying the BN parameters at every time-step allows the model to learn the time-varying input distribution better. Specifically, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes. We demonstrate BNTT on CIFAR-10, CIFAR-100, Tiny-ImageNet, the event-driven DVS-CIFAR10 dataset, and Sequential MNIST, and show near state-of-the-art performance. We conduct a comprehensive analysis of the temporal characteristics of BNTT and showcase interesting benefits for robustness against random and adversarial noise. Further, by monitoring the learnt parameters of BNTT, we find that temporal early exit is possible: we can reduce the inference latency by ~5-20 time-steps relative to the original training latency. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/BNTT-Batch-Normalization-Through-Time.
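
A minimal sketch of the BNTT idea, assuming a PyTorch implementation: one BatchNorm2d instance per time-step, so that normalization statistics and affine parameters are decoupled along the time axis. Module and parameter names here are assumptions; the released code linked above is the reference implementation.

```python
# Sketch of Batch Normalization Through Time: per-time-step BN parameters.
import torch
import torch.nn as nn


class BNTT(nn.Module):
    def __init__(self, num_features, time_steps):
        super().__init__()
        # Decouple BN along the time axis: independent running statistics
        # and learnable scale/shift for each time-step.
        self.bn = nn.ModuleList(
            [nn.BatchNorm2d(num_features) for _ in range(time_steps)]
        )

    def forward(self, x, t):
        # x: (batch, channels, height, width) activation at time-step t
        return self.bn[t](x)


# Usage inside an unrolled SNN forward pass (sketch):
layer = BNTT(num_features=16, time_steps=25)
x = torch.randn(8, 16, 32, 32)
for t in range(25):
    y = layer(x, t)  # each step normalizes with its own parameters
```

Monitoring the learnt per-step scale parameters is what enables the temporal early exit described in the abstract: once the scales for later steps become negligible, inference can stop early.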


2021 ◽  
Vol 15 ◽  
Author(s):  
Chenglong Zou ◽  
Xiaoxin Cui ◽  
Yisong Kuang ◽  
Kefei Liu ◽  
Yuan Wang ◽  
...  

Artificial neural networks (ANNs), such as convolutional neural networks (CNNs), have achieved state-of-the-art results for many machine learning tasks. However, inference with large-scale full-precision CNNs incurs substantial energy consumption and memory occupation, which seriously hinders their deployment on mobile and embedded systems. Inspired by the biological brain, spiking neural networks (SNNs) are emerging as a new solution because of their natural aptitude for brain-like learning and their great energy efficiency with event-driven communication and computation. Nevertheless, training a deep SNN remains a major challenge, and there is usually a large accuracy gap between ANNs and SNNs. In this paper, we introduce a hardware-friendly conversion algorithm called "scatter-and-gather" to convert quantized ANNs to lossless SNNs, where neurons are connected with ternary {−1, 0, 1} synaptic weights. Each spiking neuron is stateless and closer to the original McCulloch-Pitts model, because it fires at most one spike and is reset at each time step. Furthermore, we develop an incremental mapping framework to demonstrate efficient network deployments on a reconfigurable neuromorphic chip. Experimental results show that our spiking LeNet on MNIST and VGG-Net on CIFAR-10 obtain 99.37% and 91.91% classification accuracy, respectively. Besides, the presented mapping algorithm manages network deployment on our neuromorphic chip with maximum resource efficiency and excellent flexibility. Our four-spike LeNet and VGG-Net on chip achieve real-time inference speeds of 0.38 ms/image and 3.24 ms/image, with average energy consumptions of 0.28 mJ/image and 2.3 mJ/image at 0.9 V and 252 MHz, which is nearly two orders of magnitude more efficient than traditional GPUs.
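
The stateless, at-most-one-spike neuron with ternary synapses can be sketched as follows (PyTorch, illustrative only); the `ternarize` helper and its dead-zone `delta` are assumptions for exposition, not the authors' scatter-and-gather conversion algorithm itself.

```python
# Sketch of a stateless single-spike layer with ternary {-1, 0, +1} weights.
import torch


def ternarize(weight, delta=0.05):
    """Map real-valued weights to {-1, 0, +1} using a dead-zone delta
    (an assumed quantizer, standing in for the paper's quantized ANN)."""
    return torch.sign(weight) * (weight.abs() > delta).float()


def single_spike_layer(spikes_in, ternary_weight, threshold):
    """One time-step of a stateless spiking layer: integrate incoming
    binary spikes through ternary synapses, fire at most one spike per
    neuron, and carry no state to the next step, much like the original
    McCulloch-Pitts unit. In hardware this needs only +/- additions."""
    current = spikes_in @ ternary_weight.t()
    return (current >= threshold).float()


w = ternarize(torch.randn(32, 64))
spikes = (torch.rand(1, 64) > 0.8).float()    # sparse binary input spikes
out = single_spike_layer(spikes, w, threshold=2.0)
```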


2020 ◽  
Vol 92 (11) ◽  
pp. 1293-1302
Author(s):  
Adarsha Balaji ◽  
Thibaut Marty ◽  
Anup Das ◽  
Francky Catthoor

2021 ◽  
Vol 23 (6) ◽  
pp. 285-294
Author(s):  
N.V. Andreeva ◽  
◽  
V.V. Luchinin ◽  
E.A. Ryndin ◽  
M.G. Anchkov ◽  
...  

Memristive neuromorphic chips exploit a prospective class of novel functional materials (memristors) to deploy a new spiking-neural-network architecture for developing the basic blocks of brain-like systems. Memristor-based neuromorphic hardware solutions for multi-agent systems are considered challenges in frontier areas of chip design for fast and energy-efficient computing. As functional materials, metal-oxide thin films with resistive switching and memory effects (memristive structures) are recognized as a potential elemental base for new components of neuromorphic engineering, enabling the combination of data storage and processing in a single unit. A key design issue in this case is the hardware-defined functionality of neural networks. The gradual change of the resistive properties of memristive elements and their non-volatile memory behavior enable the organization of spiking neural networks with unsupervised learning through hardware implementation of basic synaptic mechanisms, such as Hebbian learning rules including spike-timing-dependent plasticity, long-term potentiation, and depression. This paper provides an overview of research carried out at Saint Petersburg Electrotechnical University "LETI" since 2014 in the field of novel electronic components for neuromorphic hardware solutions in brain-like chip design. Among the most promising concepts developed by ETU "LETI" are: the design of metal-insulator-metal structures exhibiting multilevel resistive switching (gradient tuning of resistive properties combined with bipolar resistive switching in a single memristive element) for use as artificial synaptic devices in neuromorphic chips; computing schemes for spatio-temporal pattern recognition based on a spiking neural network architecture; and breadboard models of analogue circuits for the hardware implementation of neuromorphic blocks for developing brain-like systems.
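
For concreteness, here is a minimal pair-based STDP rule of the kind such memristive synapses are intended to implement in hardware; all constants and the exponential timing window are textbook assumptions, not ETU "LETI" device parameters.

```python
# Pair-based spike-timing-dependent plasticity (STDP), textbook form.
import numpy as np


def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Potentiate when the presynaptic spike precedes the postsynaptic
    one (t_post > t_pre), depress otherwise; the magnitude decays
    exponentially with the timing difference. Clipping to [w_min, w_max]
    mirrors the bounded, gradually tunable conductance of a multilevel
    memristive synapse."""
    dt = t_post - t_pre
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # long-term potentiation
    else:
        dw = -a_minus * np.exp(dt / tau)   # long-term depression
    return float(np.clip(w + dw, w_min, w_max))


print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # pre before post -> LTP
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # post before pre -> LTD
```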


Author(s):  
Adarsha Balaji ◽  
Francky Catthoor ◽  
Anup Das ◽  
Yuefeng Wu ◽  
Khanh Huynh ◽  
...  
