Compact Hardware Synthesis of Stochastic Spiking Neural Networks

2019 ◽  
Vol 29 (08) ◽  
pp. 1950004 ◽  
Author(s):  
Fabio Galán-Prado ◽  
Alejandro Morán ◽  
Joan Font ◽  
Miquel Roca ◽  
Josep L. Rosselló

Spiking neural networks (SNNs) are able to emulate real neural behavior with high confidence due to their bio-inspired nature. Many designs have been proposed for the implementation of SNNs in hardware, although the realization of high-density, biologically inspired SNNs remains a complex challenge of high scientific and technical interest. In this work, we propose a compact digital design for the implementation of high-volume SNNs that considers the intrinsic stochastic processes present in biological neurons and enables high-density hardware implementation. The proposed stochastic SNN model (SSNN) is compared with previous SSNN models, achieving a higher processing speed. We also show how the proposed model can be scaled to high-volume neural networks trained by backpropagation and applied to a pattern classification task. The proposed model achieves better results than other recently published SNN models configured with unsupervised STDP learning.
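The abstract does not specify the paper's neuron model, but the core idea of a stochastic spiking neuron can be sketched as follows: the membrane potential integrates weighted input, and firing is a Bernoulli draw whose probability grows with the potential. All parameter names (`leak`, `threshold`, `beta`) are illustrative assumptions, not the paper's actual design.

```python
import math
import random

class StochasticLIFNeuron:
    """Minimal sketch of a stochastic leaky integrate-and-fire neuron."""

    def __init__(self, leak=0.9, threshold=1.0, beta=5.0, seed=42):
        self.v = 0.0              # membrane potential
        self.leak = leak          # multiplicative leak per time step
        self.threshold = threshold
        self.beta = beta          # steepness of the firing probability
        self.rng = random.Random(seed)

    def step(self, weighted_input):
        self.v = self.leak * self.v + weighted_input
        # Firing probability: sigmoid of the distance to threshold,
        # with the argument clamped to avoid overflow in exp().
        x = max(-60.0, min(60.0, self.beta * (self.v - self.threshold)))
        p_fire = 1.0 / (1.0 + math.exp(-x))
        fired = self.rng.random() < p_fire
        if fired:
            self.v = 0.0          # reset after a spike
        return fired

neuron = StochasticLIFNeuron()
spikes = [neuron.step(0.4) for _ in range(100)]
print(sum(spikes), "spikes in 100 steps")
```

In a hardware realization, the sigmoid draw would typically be replaced by a comparison against a pseudo-random number generator, which is what makes such designs compact.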

Author(s):  
Daniel Auge ◽  
Julian Hille ◽  
Etienne Mueller ◽  
Alois Knoll

Biologically inspired spiking neural networks are increasingly popular in the field of artificial intelligence due to their ability to solve complex problems while being power efficient. They do so by leveraging the timing of discrete spikes as the main information carrier. However, industrial applications are still lacking, partially because the question of how to encode incoming data into discrete spike events cannot be uniformly answered. In this paper, we summarise the signal encoding schemes presented in the literature and propose a uniform nomenclature to prevent the vague usage of ambiguous definitions. To that end, we survey both the theoretical foundations and the applications of the encoding schemes. This work provides a foundation in spiking signal encoding and gives an overview of different application-oriented implementations which utilise the schemes.
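Two of the encoding families such surveys typically cover can be illustrated with a small sketch: rate coding (spike count proportional to the value) and time-to-first-spike coding (stronger inputs spike earlier). The window length and scaling below are assumed parameters, not taken from the paper.

```python
import random

def rate_encode(value, n_steps=100, rng=random.Random(0)):
    """Encode value in [0, 1] as a Bernoulli spike train (rate coding)."""
    return [1 if rng.random() < value else 0 for _ in range(n_steps)]

def ttfs_encode(value, n_steps=100):
    """Encode value in (0, 1] as one spike; larger values spike earlier."""
    train = [0] * n_steps
    if value > 0:
        t = min(n_steps - 1, int((1.0 - value) * (n_steps - 1)))
        train[t] = 1
    return train

print(sum(rate_encode(0.3)))       # roughly 30 spikes out of 100
print(ttfs_encode(1.0).index(1))   # strongest input spikes at t = 0
```

Rate coding is robust but needs many time steps per value, while time-to-first-spike coding conveys the same value with a single spike at the cost of latency sensitivity; this trade-off is one reason no single encoding has become standard.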


2021 ◽  
Vol 18 (2) ◽  
pp. 40-55
Author(s):  
Lídio Mauro Lima Campos ◽  
Jherson Haryson Almeida Pereira ◽  
Danilo Souza Duarte ◽  
Roberto Célio Limão Oliveira ◽  
...  

The aim of this paper is to introduce a biologically inspired approach that can automatically generate deep neural networks with good prediction capacity, smaller error, and large tolerance to noise. To do this, three biological paradigms were used: Genetic Algorithms (GA), Lindenmayer Systems, and Deep Neural Networks (DNNs). The final sections of the paper present experiments aimed at investigating the possibilities of the method in forecasting the price of energy in the Brazilian market. The proposed model considers multi-step-ahead price prediction (12, 24, and 36 weeks ahead). The results for MLP and LSTM networks show a good ability to predict peaks and satisfactory accuracy according to error measures when compared with other methods.
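The core mechanism of a Lindenmayer system, one of the three paradigms named above, can be sketched in a few lines: rewrite rules repeatedly expand a start symbol into a string, which is then interpreted as a network topology. The rule set and the interpretation below are illustrative assumptions, not the paper's actual grammar.

```python
def expand(axiom, rules, iterations):
    """Apply L-system rewrite rules to the axiom a fixed number of times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical interpretation: each 'F' in the final string becomes one
# hidden layer, so the rule 'F' -> 'FF' doubles network depth per step.
rules = {"F": "FF"}
genotype = expand("F", rules, 3)        # 'FFFFFFFF'
layers = [16] * len(genotype)           # read each symbol as a 16-unit layer
print(genotype, "->", len(layers), "hidden layers")
```

In a GA-driven pipeline, the genetic algorithm would evolve the rule set (the genotype) while the expanded string determines the trained network (the phenotype), letting selection pressure shape the architecture indirectly.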


Complexus ◽  
2006 ◽  
Vol 3 (1-3) ◽  
pp. 32-47 ◽  
Author(s):  
J.Manuel Moreno ◽  
Yann Thoma ◽  
Eduardo Sanchez ◽  
Jan Eriksson ◽  
Javier Iglesias ◽  
...  

Webology ◽  
2021 ◽  
Vol 19 (1) ◽  
pp. 01-18
Author(s):  
Hayder Rahm Dakheel AL-Fayyadh ◽  
Salam Abdulabbas Ganim Ali ◽  
Dr. Basim Abood

The goal of this paper is to use artificial intelligence to build and evaluate an adaptive learning system where we adopt the basic approaches of spiking neural networks as well as artificial neural networks. Spiking neural networks receive increasing attention due to their advantages over traditional artificial neural networks. They have proven to be energy efficient, biologically plausible, and up to 10^5 times faster if they are simulated on analogue traditional learning systems. Artificial neural network libraries use computational graphs as a pervasive representation; however, spiking models remain heterogeneous and difficult to train. Using the artificial intelligence deductive method, the paper posits two hypotheses, examining whether 1) there exists a common representation for both neural network paradigms for tutorial mentoring, and whether 2) spiking and non-spiking models can learn a simple recognition task for learning activities for adaptive learning. The first hypothesis is confirmed by specifying and implementing a domain-specific language that generates semantically similar spiking and non-spiking neural networks for tutorial mentoring. Through three classification experiments, the second hypothesis is shown to hold for non-spiking models, but cannot be proven for the spiking models. The paper contributes three findings: 1) a domain-specific language for modelling neural network topologies in adaptive tutorial mentoring for students, 2) a preliminary model for generalizable learning through back-propagation in spiking neural networks for learning activities for students, as presented in the results section, and 3) a method for transferring optimised non-spiking parameters to spiking neural networks for adaptive learning systems. The latter contribution is promising because the vast machine learning literature can spill over to the emerging field of spiking neural networks and adaptive learning computing.
Future work includes improving the back-propagation model, exploring time-dependent models for learning, and adding support for adaptive learning systems.
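The third contribution, transferring optimised non-spiking parameters to a spiking network, is commonly based on the observation that the firing rate of an integrate-and-fire neuron approximates a ReLU activation. The following is a minimal sketch of that rate-equivalence idea under assumed constants; it is not the paper's actual transfer method.

```python
def relu(x):
    """Non-spiking activation whose output the spike rate should match."""
    return max(0.0, x)

def lif_rate(x, n_steps=200, threshold=1.0):
    """Spike rate of a non-leaky integrate-and-fire neuron driven by constant input x."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += x
        if v >= threshold:
            spikes += 1
            v -= threshold        # soft reset keeps the residual charge
    return spikes / n_steps

w = 0.7                            # weight "transferred" from the trained ANN
x = 0.5
print(relu(w * x), lif_rate(w * x))  # the spike rate approximates the ReLU output
```

Because the approximation holds per neuron, an entire trained ReLU network can in principle reuse its weights unchanged, with accuracy depending on the simulation window `n_steps`.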


Author(s):  
Tielin Zhang ◽  
Yi Zeng ◽  
Dongcheng Zhao ◽  
Bo Xu

Due to the nature of Spiking Neural Networks (SNNs), they are challenging to train with biologically plausible learning principles. Multi-layered SNNs have non-differentiable neurons and temporally-centered synapses, which make them nearly impossible to tune directly by backpropagation. Here we propose an alternative biologically inspired balanced tuning approach to train SNNs. The approach contains three main inspirations from the brain: firstly, a biological network is usually trained towards a state where the temporal updates of its variables (e.g. the membrane potential) reach equilibrium; secondly, specific proportions of excitatory and inhibitory neurons usually contribute to stable representations; thirdly, short-term plasticity (STP) is a general principle that keeps the input and output of synapses balanced, leading to better learning convergence. With these inspirations, we train SNNs in three steps: firstly, the SNN model is trained with the three brain-inspired principles; then weakly supervised learning is used to tune the membrane potential in the final layer for network classification; finally, the learned information is consolidated from the membrane potential into the synaptic weights by Spike-Timing Dependent Plasticity (STDP). The proposed approach is verified on the MNIST hand-written digit recognition dataset, and the performance (an accuracy of 98.64%) indicates that the idea of balancing states can indeed improve the learning ability of SNNs, which shows the power of the proposed brain-inspired approach for tuning biologically plausible SNNs.
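The STDP rule used in the final consolidation step is, in its standard pair-based form, a weight change that depends on the relative timing of pre- and postsynaptic spikes: causal pairings (pre before post) strengthen the synapse, anti-causal ones weaken it. The constants below are illustrative, not the paper's settings.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update for one spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre fires before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    if dt < 0:      # post fires before pre -> depression
        return -a_minus * math.exp(dt / tau)
    return 0.0      # simultaneous spikes: no change in this convention

print(stdp_dw(10.0, 15.0))   # positive: causal pairing strengthens the synapse
print(stdp_dw(15.0, 10.0))   # negative: anti-causal pairing weakens it
```

The exponential windows mean that only spike pairs close in time change the weight appreciably, which is what lets STDP consolidate the tuned membrane potentials into the weights without disturbing unrelated synapses.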

