Pattern Recognition of Spiking Neural Networks Based on Visual Mechanism and Supervised Synaptic Learning

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xiumin Li ◽  
Hao Yi ◽  
Shengyuan Luo

Electrophysiological studies have shown that neurons in the mammalian primary visual cortex are selective for the orientation of visual stimuli. Inspired by this mechanism, we propose a hierarchical spiking neural network (SNN) for image classification. Grayscale input images are fed through a feed-forward network of orientation-selective neurons, whose output is then projected to a layer of downstream classifier neurons trained with the spike-based supervised tempotron learning rule. Based on the orientation-selective mechanism of the visual cortex and the tempotron learning rule, the network can effectively classify images from the extensively studied MNIST database of handwritten digits, achieving 96% classification accuracy with only 2000 training samples (the traditional training set contains 60,000). Compared with other classification methods, our model not only preserves biological plausibility and the accuracy of image classification but also significantly reduces the number of training samples needed. Given that the most commonly used deep neural networks require big data samples and high power consumption for image recognition, this brain-inspired computational model, based on the layer-by-layer hierarchical image-processing mechanism of the visual cortex, may provide a basis for the wide application of spiking neural networks in intelligent computing.
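For readers unfamiliar with the tempotron, the following minimal NumPy sketch illustrates the supervised rule referenced above: the weight update is proportional to each synapse's kernel contribution at the time of maximal membrane potential. The time constants, threshold, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

TAU, TAU_S = 15.0, 3.75        # membrane / synaptic time constants (ms), assumed
T_WIN, DT = 100.0, 0.1         # simulation window and time step (ms), assumed

def psp_kernel(t):
    """Unnormalized tempotron PSP kernel; zero before the presynaptic spike."""
    return np.where(t > 0, np.exp(-t / TAU) - np.exp(-t / TAU_S), 0.0)

def membrane_trace(spike_times, weights):
    """Membrane potential over the window for one input pattern.
    spike_times: one array of presynaptic spike times per synapse."""
    ts = np.arange(0.0, T_WIN, DT)
    v = np.zeros_like(ts)
    for w, st in zip(weights, spike_times):
        for s in st:
            v += w * psp_kernel(ts - s)
    return ts, v

def tempotron_update(spike_times, weights, label, theta=1.0, lr=0.01):
    """One supervised step: if the fire/no-fire decision disagrees with the
    binary label, shift each weight by the kernel value its synapse
    contributed at the time of maximal potential."""
    ts, v = membrane_trace(spike_times, weights)
    if (v.max() >= theta) == bool(label):
        return weights                        # correct decision: no change
    t_star = ts[np.argmax(v)]                 # time of maximal potential
    sign = 1.0 if label else -1.0
    grads = np.array([psp_kernel(t_star - st).sum() for st in spike_times])
    return weights + sign * lr * grads
```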

2019 ◽  
Author(s):  
Faramarz Faghihi ◽  
Hossein Molhem ◽  
Ahmed A. Moustafa

Conventional deep neural networks capture essential information-processing stages in perception. Deep neural networks often require very large volumes of training examples, whereas children can learn concepts such as handwritten digits from few examples. The goal of this project is to develop a deep spiking neural network that can learn from few training trials. Using known neuronal mechanisms, a spiking neural network model is developed and trained to recognize handwritten digits, with one to four training examples per digit taken from the MNIST database. The model detects and learns geometric features of the MNIST images. In this work, a novel biological back-propagation-based learning rule is developed and used to train the network to detect basic features of different digits. For this purpose, randomly initialized synaptic weights between the layers are updated during training. Using a neuroscience-inspired mechanism named 'synaptic pruning' and a predefined threshold, some of the synapses are deleted during training. Hence, information channels that are highly specific for each digit are constructed as matrices of synaptic connections between two layers of the spiking neural network. These connection matrices, named 'information channels', are used in the test phase to assign a digit class to each test image. Similar to humans' ability to learn from few training trials, the developed spiking neural network needs a very small training dataset compared to conventional deep learning methods evaluated on the MNIST dataset.
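The 'synaptic pruning' step lends itself to a compact illustration. The sketch below is a loose interpretation, not the authors' implementation: it zeroes out sub-threshold synapses so that the surviving connection matrix acts as a digit-specific information channel. Layer sizes, the threshold value, and the test-phase scoring are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 784, 100        # layer sizes, illustrative only
PRUNE_THRESHOLD = 0.05        # the 'predefined threshold', assumed value

# Randomly initialized synaptic weights between two layers.
W = rng.uniform(0.0, 1.0, size=(N_IN, N_OUT))

def prune(W, threshold=PRUNE_THRESHOLD):
    """Synaptic pruning: delete (zero out) synapses whose weight fell below
    the threshold during training; the surviving non-zero entries form a
    digit-specific 'information channel' matrix."""
    W = W.copy()
    W[W < threshold] = 0.0
    return W

def channel_score(channel, activity):
    """Test phase (one simple scoring choice, not necessarily the authors'):
    mean input activity flowing through a channel's surviving synapses."""
    mask = channel.any(axis=1)               # input neurons still connected
    return activity[mask].mean() if mask.any() else 0.0
```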


Author(s):  
Xiumin Li ◽  
Qing Chen ◽  
Fangzheng Xue

In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state in which dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that this critical state, homeostatically balanced between stability and plasticity, may be optimal for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive in spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computation based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN operating at the critical state, in particular with spike-timing-dependent plasticity (STDP) for updating synaptic weights, is investigated. The network is found to show the best computational performance when it operates in critical dynamic states. Moreover, the active-neuron-dominant structure refined by synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications for the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'.
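For reference, the pair-based STDP rule mentioned above can be stated in a few lines; the amplitudes and time constants below are conventional illustrative values, not those used in the paper.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes, assumed
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms), assumed

def stdp_dw(t_pre, t_post):
    """Pair-based STDP: pre-before-post potentiates, post-before-pre
    (and the coincident case, by this convention) depresses."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)
```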


Author(s):  
Asma Elyounsi ◽  
Hatem Tlijani ◽  
Mohamed Salim Bouhlel

Traditional neural networks are very diverse and have been used over recent decades for data classification. Networks such as the multilayer perceptron (MLP), back-propagation neural networks (BPNN), and feed-forward networks have shown an inability to scale with problem size and suffer from slow convergence. To overcome these drawbacks, higher-order neural networks (HONNs) offer a solution: they add input units, together with stronger functional interactions among the other neural units in the network, and easily transform these input units into hidden layers. In this paper, a metaheuristic method, the Firefly Algorithm (FFA), is applied to compute the optimal weights of a Functional Link Artificial Neural Network (FLANN) by exploiting the flashing behavior of fireflies, in order to classify ISA-Radar targets. The average classification accuracy of FLANN-FFA, which reached 96%, shows the efficiency of the approach compared to the other methods tested.
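The sketch below shows the standard firefly algorithm (in Yang's formulation) applied to a generic weight-optimization objective, as a stand-in for the FLANN training described above; the loss function, population size, and hyperparameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_optimize(loss, dim, n_fireflies=20, n_iter=100,
                     alpha=0.2, beta0=1.0, gamma=1.0):
    """Standard firefly algorithm minimizing `loss` over weight vectors of
    length `dim`; for FLANN-FFA, `loss` would be the classification error of
    the FLANN under a candidate weight vector. Hyperparameters are assumed."""
    X = rng.uniform(-1.0, 1.0, size=(n_fireflies, dim))
    L = np.array([loss(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if L[j] < L[i]:              # firefly j is "brighter"
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    X[i] = X[i] + beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    L[i] = loss(X[i])
    best = int(np.argmin(L))
    return X[best], L[best]

# Example with a toy quadratic objective standing in for the FLANN loss:
w_best, err = firefly_optimize(lambda w: np.sum(w ** 2), dim=5)
```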


2021 ◽  
Vol 23 (6) ◽  
pp. 317-326
Author(s):  
E.A. Ryndin ◽  
N.V. Andreeva ◽  
V.V. Luchinin ◽  
K.S. Goncharov ◽  
...  

In the current era, the design and development of artificial neural networks (ANNs) exploiting the architecture of the human brain have evolved rapidly. Artificial neural networks effectively solve a wide range of tasks common to artificial intelligence, including data classification and recognition, prediction, forecasting, and adaptive control of object behavior. The biologically inspired principles underlying ANN operation have certain advantages over the conventional von Neumann architecture, including unsupervised learning, architectural flexibility, adaptability to environmental change, and high performance at significantly reduced power consumption due to highly parallel and asynchronous data processing. In this paper, we present the circuit design of the main functional blocks (neurons and synapses) intended for the hardware implementation of a perceptron-based feed-forward spiking neural network. As the third generation of artificial neural networks, spiking neural networks process data using spikes, which are discrete events (or functions) that occur at points in time. Neurons in spiking neural networks emit precisely timed spikes and communicate with each other via spikes transmitted through synaptic connections (synapses) with adaptable, scalable weights. One prospective approach to emulating synaptic behavior in hardware-implemented spiking neural networks is to use non-volatile memory devices with analog conductance modulation (memristive structures). Here we propose a circuit design for functional analogues of a memristive structure to mimic synaptic plasticity, together with pre- and postsynaptic neurons, which could be used to develop circuit designs for spiking neural network architectures with different training algorithms, including the spike-timing-dependent plasticity learning rule. Two different electronic synapse circuits were developed. The first is an analog synapse in which a photoresistive optocoupler provides the tunable conductivity needed to emulate synaptic plasticity. The second is a digital synapse, in which the synaptic weight is stored as a digital code and converted directly into conductivity (without a digital-to-analog converter or photoresistive optocoupler). The results of prototyping the developed circuits for electronic analogues of synapses and pre- and postsynaptic neurons, and of studying their transient processes, are presented. The developed approach could provide a basis for ASIC designs of spiking neural networks based on CMOS (complementary metal-oxide-semiconductor) technology.
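Although the paper's contribution is at the circuit level, the behavioral target of such neuron circuits can be summarized in software. The sketch below is a textbook leaky integrate-and-fire model, given only as a reference for the spiking dynamics the hardware emulates; all constants are illustrative.

```python
import numpy as np

DT = 0.1                                 # time step (ms), assumed
TAU_M, V_TH, V_RESET = 10.0, 1.0, 0.0    # membrane constant, threshold, reset

def lif_spike_times(input_current):
    """Leaky integrate-and-fire dynamics: integrate input, emit a precisely
    timed spike on threshold crossing, then reset. Returns spike times (ms)."""
    v, spikes = 0.0, []
    for k, i_in in enumerate(input_current):
        v += DT * (-v + i_in) / TAU_M    # leaky integration
        if v >= V_TH:                    # threshold crossing -> spike event
            spikes.append(k * DT)
            v = V_RESET
    return spikes
```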


Author(s):  
T.K. Biryukova

Classic neural networks assume that the trainable parameters include only the weights of neurons. This paper proposes parabolic integro-differential splines (ID-splines), developed by the author, as a new kind of activation function (AF) for neural networks, in which the ID-spline coefficients are also trainable parameters. The parameters of the ID-spline AF vary together with the weights of neurons during training in order to minimize the loss function, thus reducing the training time and increasing the operation speed of the neural network. The newly developed algorithm enables a software implementation of the ID-spline AF as a tool for neural network construction, training, and operation. It is proposed to use the same ID-spline AF for all neurons within a layer, but different AFs for different layers. In this case, the parameters of the ID-spline AF for a particular layer change during training independently of the activation functions (AFs) of the other network layers. To satisfy the continuity condition for the derivative of the parabolic ID-spline on the interval $(x_0, x_n)$, its parameters $f_i$ $(i = 0, \dots, n)$ must be calculated from a tridiagonal system of linear algebraic equations. To solve the system, two more equations arising from the boundary conditions of the specific problem are needed; for example, the values of the grid function (if they are known) at the points $x_0$ and $x_n$ may be used: $f_0 = f(x_0)$, $f_n = f(x_n)$. The parameters $I_i^{i+1}$ $(i = 0, \dots, n-1)$ are used as the trainable parameters of the neural network. The grid boundaries and the spacing of the nodes of the ID-spline AF are best chosen experimentally; an optimal selection of grid nodes improves the quality of the results produced by the neural network. The formula for a parabolic ID-spline is such that the complexity of the calculations does not depend on whether the grid of nodes is uniform or non-uniform. An experimental comparison was carried out between convolutional neural networks using ID-spline AFs and networks using the well-known ReLU AF, $\mathrm{ReLU}(x) = \begin{cases} 0, & x < 0 \\ x, & x \ge 0 \end{cases}$, on image classification with the popular FashionMNIST dataset. The results reveal that the ID-spline AFs provide better accuracy of neural network operation than the ReLU AF. The training time for a two-convolutional-layer network with two ID-spline AFs is only about 2 times longer than with two instances of the ReLU AF. Doubling the training time, owing to the complexity of the ID-spline formula, is an acceptable price for significantly better network accuracy, while the difference in operation speed between networks with ID-spline and ReLU AFs is negligible. The use of trainable ID-spline AFs makes it possible to simplify the architecture of neural networks without losing efficiency. Modifying well-known neural networks (ResNet, etc.) by replacing traditional AFs with ID-spline AFs is a promising approach to increasing neural network accuracy. In most cases, such a substitution does not require training the network from scratch, because it allows the use of neuron weights pre-trained on large datasets and supplied by standard neural network software libraries, thus substantially shortening training time.
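The exact parabolic ID-spline formula is the author's; as a generic illustration of the idea of an activation function with trainable coefficients on a node grid, the following PyTorch module learns the node values of a piecewise-linear AF jointly with the network weights. The grid bounds, node count, and ReLU-shaped initialization are assumptions.

```python
import torch
import torch.nn as nn

class TrainablePiecewiseAF(nn.Module):
    """A trainable activation on a fixed node grid: node values are learned
    jointly with the network weights. This is a piecewise-linear stand-in
    for the paper's parabolic ID-spline, not the ID-spline itself."""
    def __init__(self, n_nodes=16, lo=-3.0, hi=3.0):
        super().__init__()
        self.register_buffer("grid", torch.linspace(lo, hi, n_nodes))
        # Start near ReLU so training begins from a familiar AF shape.
        self.values = nn.Parameter(torch.relu(self.grid).clone())

    def forward(self, x):
        # Piecewise-linear interpolation of the learned node values.
        x = x.clamp(float(self.grid[0]), float(self.grid[-1]))
        step = float(self.grid[1] - self.grid[0])
        idx = ((x - self.grid[0]) / step).floor().long()
        idx = idx.clamp(0, self.grid.numel() - 2)
        t = (x - self.grid[idx]) / step
        return (1 - t) * self.values[idx] + t * self.values[idx + 1]
```

As proposed above, one such module would be shared by all neurons within a layer, with a separate instance (and separate trainable coefficients) per layer.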


This chapter presents an introductory overview of the application of computational intelligence in biometrics. Starting with the historical background of artificial intelligence, the chapter proceeds to evolutionary computing and neural networks. Evolutionary computing is the ability of a computer system to learn and evolve over time in a manner similar to humans. The chapter discusses swarm intelligence, which is an example of evolutionary computing, as well as chaotic neural networks, another aspect of intelligent computing. Finally, particular attention is given to one application of computational intelligence: biometric security.


2021 ◽  
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results showing that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers, and can even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing-rate regime, where they could not simply emulate artificial neural networks by encoding continuous values through firing rates. Thus, we propose a new paradigm for symbolic computation in neural networks that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are the basis for drastically more energy-efficient computer hardware: neuromorphic hardware. Hence, our results can be seen as building a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.


2020 ◽  
Vol 34 (02) ◽  
pp. 1316-1323
Author(s):  
Zuozhu Liu ◽  
Thiparat Chotibut ◽  
Christopher Hillar ◽  
Shaowei Lin

Motivated by the celebrated discrete-time model of nervous activity outlined by McCulloch and Pitts in 1943, we propose a novel continuous-time model, the McCulloch-Pitts network (MPN), for sequence learning in spiking neural networks. Our model has a local learning rule, such that the synaptic weight updates depend only on the information directly accessible by the synapse. By exploiting asymmetry in the connections between binary neurons, we show that MPN can be trained to robustly memorize multiple spatiotemporal patterns of binary vectors, generalizing the ability of the symmetric Hopfield network to memorize static spatial patterns. In addition, we demonstrate that the model can efficiently learn sequences of binary pictures as well as generative models for experimental neural spike-train data. Our learning rule is consistent with spike-timing-dependent plasticity (STDP), thus providing a theoretical ground for the systematic design of biologically inspired networks with large and robust long-range sequence storage capacity.
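The asymmetric storage idea that generalizes the Hopfield rule can be illustrated in a few lines of NumPy. This sketch stores a cyclic sequence of random bipolar patterns with an asymmetric Hebbian outer-product rule, so that synchronous sign updates step through the sequence; it is a discrete-time caricature of the idea, not the continuous-time MPN with its local learning rule.

```python
import numpy as np

rng = np.random.default_rng(2)

N, T = 100, 5                                 # neurons, sequence length (illustrative)
seq = rng.choice([-1.0, 1.0], size=(T, N))    # random bipolar patterns

# Asymmetric Hebbian storage: each pattern "points to" its successor,
# generalizing the symmetric Hopfield rule W = sum_t x_t x_t^T / N.
W = sum(np.outer(seq[(t + 1) % T], seq[t]) for t in range(T)) / N

x = seq[0].copy()
for step in range(1, T):
    x = np.sign(W @ x)                        # synchronous update advances the sequence
    print(step, "overlap with target:", (x @ seq[step]) / N)
```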

