Spiking Neural Networks for Cortical Neuronal Spike Train Decoding

2010 ◽  
Vol 22 (4) ◽  
pp. 1060-1085 ◽  
Author(s):  
Huijuan Fang ◽  
Yongji Wang ◽  
Jiping He

Recent investigations of cortical coding and computation indicate that temporal coding is probably a more biologically plausible scheme used by neurons than the rate coding commonly used in most published work. We propose and demonstrate in this letter that spiking neural networks (SNN), consisting of spiking neurons that propagate information through the timing of spikes, are a better alternative to coding schemes based on spike frequency (histogram) alone. The SNN model analyzes cortical neural spike trains directly, without losing temporal information, to generate more reliable motor commands for cortically controlled prosthetics. We compare the temporal pattern classification results of the SNN approach with results from firing-rate-based approaches: conventional artificial neural networks, support vector machines, and linear regression. The results show that the SNN algorithm achieves higher classification accuracy and identifies the spiking activity related to movement control earlier than the other methods. Both are desirable characteristics for fast neural information processing and reliable control command pattern recognition in neuroprosthetic applications.
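As a hedged illustration of the rate-versus-temporal distinction the letter draws, the sketch below (function names and bin sizes are our assumptions, not the authors' method) shows two spike trains that a binned firing-rate histogram cannot tell apart, while a simple temporal feature such as first-spike latency can:

```python
# Illustrative sketch: rate coding discards spike timing inside each bin,
# while even a minimal temporal code (first-spike latency) preserves it.

def rate_code(spike_times, window_ms=100.0, bin_ms=25.0):
    """Bin spike counts over the window: timing within a bin is lost."""
    n_bins = int(window_ms / bin_ms)
    counts = [0] * n_bins
    for t in spike_times:
        if 0.0 <= t < window_ms:
            counts[int(t / bin_ms)] += 1
    return counts

def first_spike_latency(spike_times):
    """A minimal temporal code: time of the earliest spike."""
    return min(spike_times) if spike_times else None

# Two trains with identical per-bin rates but different timing:
a = [3.0, 30.0, 55.0, 80.0]
b = [20.0, 45.0, 70.0, 95.0]
print(rate_code(a), rate_code(b))                       # both [1, 1, 1, 1]
print(first_spike_latency(a), first_spike_latency(b))   # 3.0 vs 20.0
```

The rate histograms are identical, so any rate-based classifier must treat `a` and `b` as the same input; a timing-aware model can separate them.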

Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1599
Author(s):  
Ali A. Al-Hamid ◽  
HyungWon Kim

Spiking neural networks (SNN) increasingly attract attention for their similarity to the biological neural system. Hardware implementation of spiking neural networks, however, remains a great challenge due to their excessive complexity and circuit size. This work introduces a novel optimization method for a hardware-friendly SNN architecture based on a modified rate coding scheme called Binary Streamed Rate Coding (BSRC). BSRC combines the features of both rate and temporal coding. In addition, by employing a built-in randomizer, the BSRC SNN model provides higher accuracy and faster training. We also present SNN optimization methods, including structure optimization and weight quantization. Extensive evaluations with MNIST SNNs demonstrate that the structure-optimized SNN (81-30-20-10) provides a 183.19-fold reduction in hardware compared with the SNN (784-800-10), while achieving an accuracy of 95.25%, a small loss compared with the 98.89% and 98.93% reported in previous works. Our weight quantization reduces 32-bit weights to 4-bit integers, leading to a further 4-fold hardware reduction with only 0.56% accuracy loss. Overall, the SNN model (81-30-20-10) optimized by our method shrinks the SNN's circuit area from 3089.49 mm2 for the SNN (784-800-10) to 4.04 mm2, a 765-fold reduction.
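The 32-bit-to-4-bit weight quantization can be sketched as uniform symmetric quantization with a shared scale; this scheme is an assumption for illustration, not necessarily the paper's exact method:

```python
# Hypothetical sketch of uniform 4-bit weight quantization: map float
# weights to signed integers in [-8, 7] using one shared scale factor.

def quantize_4bit(weights):
    """Quantize floats to signed 4-bit integers plus a shared scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 7.0   # 7 = largest positive signed 4-bit value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

w = [0.31, -0.07, 0.9, -0.4]
q, s = quantize_4bit(w)
print(q)            # [2, -1, 7, -3]
print(dequantize(q, s))
```

Each stored weight then needs only 4 bits instead of 32, which is the source of the 4-fold hardware reduction cited above; the dequantized values show the rounding error responsible for the small accuracy loss.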


Author(s):  
Ansgar Rössig ◽  
Milena Petkovic

Abstract We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behaviour of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. Building on these, we implement a solver for verification of ReLU neural networks which combines mixed integer programming with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study, using test instances based on the ACAS Xu system and the MNIST handwritten digit data set. The results indicate that our approach is very competitive with others, i.e. it outperforms the solvers of Bunel et al. (in: Bengio, Wallach, Larochelle, Grauman, Cesa-Bianchi, Garnett (eds) Advances in neural information processing systems (NIPS 2018), 2018) and Reluplex (Katz et al. in: Computer aided verification—29th international conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, 2017). In comparison to the solvers ReluVal (Wang et al. in: 27th USENIX security symposium (USENIX Security 18), USENIX Association, Baltimore, 2018a) and Neurify (Wang et al. in: 32nd Conference on neural information processing systems (NIPS), Montreal, 2018b), the number of necessary branchings is much smaller. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
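One of the approximation techniques that such verifiers combine with mixed integer programming is sound bound propagation. The sketch below is our illustration, not the authors' solver: it propagates independent per-neuron input intervals through a single ReLU layer, yielding bounds that can tighten the big-M constraints of the MIP encoding:

```python
# Illustrative interval bound propagation for y = relu(W x + b).
# Given an interval [lo_i, hi_i] for each input neuron, derive sound
# bounds for each output neuron by choosing, per weight, the interval
# endpoint that minimizes (resp. maximizes) the contribution.

def relu_layer_bounds(lo, hi, W, b):
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_sum = bias + sum(w * (l if w >= 0 else h)
                            for w, l, h in zip(row, lo, hi))
        hi_sum = bias + sum(w * (h if w >= 0 else l)
                            for w, l, h in zip(row, lo, hi))
        # ReLU clips both bounds at zero from below.
        out_lo.append(max(0.0, lo_sum))
        out_hi.append(max(0.0, hi_sum))
    return out_lo, out_hi

W = [[1.0, -2.0], [0.5, 1.0]]
b = [0.0, -1.0]
lo, hi = relu_layer_bounds([0.0, 0.0], [1.0, 1.0], W, b)
print(lo, hi)   # [0.0, 0.0] [1.0, 0.5]
```

If such bounds prove a ReLU is always inactive (upper bound ≤ 0) or always active (lower bound ≥ 0), its binary variable can be fixed, which is one way approximation reduces the number of necessary branchings.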


Author(s):  
Lei Zhang ◽  
Shengyuan Zhou ◽  
Tian Zhi ◽  
Zidong Du ◽  
Yunji Chen

Continuous-valued deep convolutional neural networks (DNNs) can be converted into accurate rate-coding-based spiking neural networks (SNNs). However, the substantial computational and energy costs caused by multiple spikes limit their use in mobile and embedded applications. Recent work has shown that the newly emerged temporal-coding-based SNNs converted from DNNs can effectively reduce the computational load. In this paper, we propose a novel method to convert DNNs to temporal-coding SNNs, called TDSNN. Building on the characteristics of the leaky integrate-and-fire (LIF) neuron model, we put forward a new coding principle, Reverse Coding, and design a novel Ticking Neuron mechanism. According to our evaluation, the proposed method achieves a 42% reduction in total operations on average in large networks compared with DNNs, with no more than 0.5% accuracy loss. The evaluation shows that TDSNN may prove to be one of the key enablers for the widespread adoption of SNNs.
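A minimal sketch of the leaky integrate-and-fire dynamics that such conversion methods build on (the leak factor, threshold, and reset rule below are illustrative assumptions, not TDSNN's parameters):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays by a leak factor each step, integrates the input current, and
# emits a spike (then resets) when it crosses the threshold.

def lif_spike_times(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = leak * v + current
        if v >= threshold:
            spikes.append(t)
            v = 0.0   # reset after spiking
    return spikes

print(lif_spike_times([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))   # [2, 5]
```

Note how stronger input (0.6 per step) drives a spike after fewer steps than weaker input (0.4 per step); temporal-coding conversions exploit exactly this input-to-latency relationship to carry a value in a single spike time rather than in a spike count.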


Athenea ◽  
2021 ◽  
Vol 2 (5) ◽  
pp. 29-34
Author(s):  
Alexander Caicedo ◽  
Anthony Caicedo

The era of technological revolution increasingly encourages the development of technologies that facilitate people's daily activities in one way or another, generating great advances in information processing. The purpose of this work is to implement a neural network that classifies a person's emotional states based on different human gestures. A database with information on students from the PUCE-E School of Computer Science and Engineering is used; it consists of images expressing the students' gestures, against which the comparative analysis of the input data is carried out. This work implements the project as a multilayer neural network. Multilayer feedforward neural networks possess a number of properties that make them particularly suitable for complex pattern classification problems [8]. Back-propagation [4], an error-backpropagation algorithm applied to a feedforward neural network, was used to solve the emotion classification task.

Keywords: image processing, neural networks, gestures, back-propagation, feedforward, classification, emotions.

References
[1] S. Gangwar, S. Shukla, D. Arora. "Human Emotion Recognition by Using Pattern Recognition Network", Journal of Engineering Research and Applications, Vol. 3, Issue 5, pp. 535-539, 2013.
[2] K. Rohit. "Back Propagation Neural Network based Emotion Recognition System", International Journal of Engineering Trends and Technology (IJETT), Vol. 22, No. 4, 2015.
[3] S. Eishu, K. Ranju, S. Malika. "Speech Emotion Recognition using BFO and BPNN", International Journal of Advances in Science and Technology (IJAST), ISSN 2348-5426, Vol. 2, Issue 3, 2014.
[4] A. Fiszelew, R. García-Martínez and T. de Buenos Aires. "Generación automática de redes neuronales con ajuste de parámetros basado en algoritmos genéticos", Revista del Instituto Tecnológico de Buenos Aires, 26, 76-101, 2002.
[5] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. "Handwritten digit recognition with a back-propagation network", in Advances in Neural Information Processing Systems, pp. 396-404, 1990.
[6] G. Bebis and M. Georgiopoulos. "Feed-forward neural networks", IEEE Potentials, 13(4), 27-31, 1994.
[7] G. Huang, Q. Zhu and C. Siew. "Extreme learning machine: a new learning scheme of feedforward neural networks", in Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, Vol. 2, pp. 985-990, IEEE, 2004.
[8] D. Montana and L. Davis. "Training Feedforward Neural Networks Using Genetic Algorithms", in IJCAI, Vol. 89, pp. 762-767, 1989.
[9] I. Sutskever, O. Vinyals and Q. Le. "Sequence to sequence learning with neural networks", in Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
[10] J. Schmidhuber. "Deep learning in neural networks: An overview", Neural Networks, 61, 85-117, 2015.
[11] R. Santos, M. Ruppb, S. Bonzi and A. Filetia. "Comparación entre redes neuronales feedforward de múltiples capas y una red de función radial para detectar y localizar fugas en tuberías que transportan gas", Chem. Ing. Trans., 32 (1375), e1380, 2013.
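A minimal sketch of the back-propagation training the abstract describes, applied to a toy logic gate rather than gesture images; the network size (2-2-1), initial weights, and learning rate are illustrative assumptions, not the project's configuration:

```python
# Feedforward network trained by back-propagation (gradient descent on
# squared error): forward pass through one sigmoid hidden layer, then
# output delta and hidden deltas via the chain rule.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2-2-1 network with small deterministic toy initial weights
w1 = [[0.5, -0.4], [-0.3, 0.6]]   # hidden-layer weights
b1 = [0.1, -0.1]                  # hidden biases
w2 = [0.7, -0.5]                  # output weights
b2 = 0.0                          # output bias
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND gate
lr = 1.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # back-propagate the error: output delta, then hidden deltas
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        b2 -= lr * d_out
        for j in range(2):
            w2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_hid[j]
            for i in range(2):
                w1[j][i] -= lr * d_hid[j] * x[i]

print([round(forward(x)[1]) for x, _ in data])   # learned AND responses
```

The same loop generalizes to image inputs and multiple emotion classes by widening the input vector and output layer; only the data and layer sizes change, not the back-propagation updates.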

