Classification of Correlated Patterns with a Configurable Analog VLSI Neural Network of Spiking Neurons and Self-Regulating Plastic Synapses

2009 · Vol 21 (11) · pp. 3106-3129
Author(s): Massimiliano Giulioni, Mario Pannunzi, Davide Badoni, Vittorio Dante, Paolo Del Giudice

We describe the implementation and illustrate the learning performance of an analog VLSI network of 32 integrate-and-fire neurons with spike-frequency adaptation and 2016 Hebbian bistable spike-driven stochastic synapses, endowed with a self-regulating plasticity mechanism, which avoids unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and external connectivity with address-event representation compliant devices. We demonstrate a marked improvement in the efficiency of the network in classifying correlated patterns, owing to the self-regulating mechanism.
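The self-regulating (stop-learning) mechanism can be sketched in a few lines of Python. All constants, the internal-variable dynamics, and the gating band below are illustrative assumptions, not the values of the VLSI chip; the sketch only shows a stochastic, bistable synapse whose updates are gated off when the postsynaptic depolarization signals that no further change is needed.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumed, not the chip's values).
THETA_X = 0.5             # bistability threshold on the internal variable x
P_JUMP = 0.1              # probability that a presynaptic spike triggers a jump
V_LOW, V_HIGH = 0.3, 0.7  # "stop-learning" band on postsynaptic depolarization

def synapse_update(x, v_post, pre_spike):
    """One candidate update of the internal synaptic variable x in [0, 1].

    Self-regulation: changes are attempted only while the postsynaptic
    depolarization v_post lies inside a band where the classification
    outcome is still uncertain; outside it the synapse is left untouched.
    """
    if not pre_spike or not (V_LOW < v_post < V_HIGH):
        return x                         # no spike, or stop-learning engaged
    if rng.random() < P_JUMP:            # spike-driven stochastic transition
        x += 0.2 if v_post >= THETA_X else -0.2
    x += 0.05 if x > THETA_X else -0.05  # bistable drift to the nearer state
    return float(np.clip(x, 0.0, 1.0))

# Toy run: one synapse exposed to random presynaptic spikes and depolarizations.
x = 0.4
for _ in range(100):
    x = synapse_update(x, v_post=rng.random(), pre_spike=rng.random() < 0.3)
print("final binary efficacy:", 1 if x > THETA_X else 0)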

Author(s): Mario Antoine Aoun, Mounir Boukadoum

The authors implement a Liquid State Machine composed of a pool of chaotic spiking neurons. A synaptic plasticity mechanism operates on the connection weights between the neurons inside the pool. A special feature of the system's classification capability is that it can learn the class of a set of time-varying inputs when trained on positive examples only; it is thus a one-class classifier. To demonstrate the applicability of this novel neurocomputing architecture, the authors apply it to online signature verification.
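A minimal sketch of the one-class scheme follows. For brevity it replaces the chaotic spiking pool with a plain rate-based reservoir (an assumption made here, not the authors' model) and accepts a test signal when its liquid state falls inside a ball fitted to positive examples only.

import numpy as np

rng = np.random.default_rng(1)

N = 100                                    # reservoir size (assumed)
W_in = rng.normal(0, 1, N)                 # input weights
W = rng.normal(0, 1 / np.sqrt(N), (N, N))  # recurrent weights

def liquid_state(signal):
    """Drive the reservoir with a 1-D time series; return the final state."""
    x = np.zeros(N)
    for u in signal:
        x = np.tanh(W @ x + W_in * u)
    return x

# Positive training examples: noisy copies of one "genuine signature" trace.
template = np.sin(np.linspace(0, 4 * np.pi, 200))
pos = [liquid_state(template + 0.05 * rng.normal(size=200)) for _ in range(20)]
center = np.mean(pos, axis=0)
radius = max(np.linalg.norm(s - center) for s in pos)  # accept-region radius

def is_genuine(signal):
    # One-class decision: inside the ball learned from positives alone.
    return np.linalg.norm(liquid_state(signal) - center) <= radius

print(is_genuine(template + 0.05 * rng.normal(size=200)))  # expected: True
print(is_genuine(rng.normal(size=200)))                    # expected: False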


2005 · Vol 17 (10) · pp. 2139-2175
Author(s): Naoki Masuda, Brent Doiron, André Longtin, Kazuyuki Aihara

Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have often been discussed using purely feedforward networks or recurrent networks with constant inputs. Real recurrent neural networks, however, are abundant and continually receive information-rich inputs from the outside environment or from other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous, and more strongly correlated with and phase-locked to the stimulus, when the stimulus frequency is resonant with the inherent frequency of the neurons or with that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations and by analytical arguments based on frequency response and bifurcation theory. This distinction is similar to the class I versus class II classification of single neurons according to the bifurcation from quiescence to periodic firing, and the two modes depend differently on system parameters. These two mechanisms may be associated with different types of information processing.
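The resonance claim can be made concrete with a toy version of the architecture: uncoupled LIF neurons share a sinusoidal stimulus plus a delayed, global inhibitory feedback of the population rate, and sweeping the stimulus frequency while reading out the stimulus-rate correlation exposes the resonant peaks. All constants below are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(2)

N, T, dt = 200, 2000, 0.1        # neurons, time steps, ms per step (assumed)
tau, v_th, v_reset = 10.0, 1.0, 0.0
delay = int(20 / dt)             # 20 ms global feedback delay
g_fb = -0.3                      # inhibitory feedback gain
f_stim = 0.05                    # stimulus frequency in kHz; sweep to probe resonance

v = rng.random(N)
pop = np.zeros(T)                # population-rate history for the delayed loop
for t in range(T):
    stim = 1.2 + 0.3 * np.sin(2 * np.pi * f_stim * t * dt)
    fb = g_fb * pop[t - delay] if t >= delay else 0.0
    v += dt / tau * (-v + stim + fb) + 0.5 * np.sqrt(dt) * rng.normal(size=N)
    spiked = v >= v_th
    v[spiked] = v_reset
    pop[t] = spiked.mean() / dt  # instantaneous population rate

s = np.sin(2 * np.pi * f_stim * np.arange(T) * dt)
print("stimulus-rate correlation:", round(np.corrcoef(pop, s)[0, 1], 3))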


2011 · Vol 106 (1) · pp. 361-373
Author(s): Srdjan Ostojic

Interspike interval (ISI) distributions of cortical neurons exhibit a range of different shapes. Wide ISI distributions are believed to stem from a balance of excitatory and inhibitory inputs that leads to a strongly fluctuating total drive. An important question is whether the full range of experimentally observed ISI distributions can be reproduced by modulating this balance. To address this issue, we investigate the shape of the ISI distributions of spiking neuron models receiving fluctuating inputs. Using analytical tools to describe the ISI distribution of a leaky integrate-and-fire (LIF) neuron, we identify three key features: 1) the ISI distribution displays an exponential decay at long ISIs independently of the strength of the fluctuating input; 2) as the amplitude of the input fluctuations is increased, the ISI distribution evolves progressively between three types, a narrow distribution (suprathreshold input), an exponential with an effective refractory period (subthreshold but suprareset input), and a bursting exponential (subreset input); 3) the shape of the ISI distribution is approximately independent of the mean ISI and determined only by the coefficient of variation. Numerical simulations show that these features are not specific to the LIF model but are also present in the ISI distributions of the exponential integrate-and-fire model and a Hodgkin-Huxley-like model. Moreover, we observe that for a fixed mean and coefficient of variation of ISIs, the full ISI distributions of the three models are nearly identical. We conclude that the ISI distributions of spiking neurons in the presence of fluctuating inputs are well described by gamma distributions.
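The gamma-distribution conclusion is easy to reproduce in miniature: simulate an LIF neuron with fluctuating (white-noise) input, collect ISIs, and moment-match a gamma distribution, whose shape is 1/CV^2 and whose scale is mean x CV^2. The parameter values below are illustrative, not those of the study.

import numpy as np

rng = np.random.default_rng(3)

tau, v_th, v_reset, dt = 10.0, 1.0, 0.0, 0.05  # ms units (assumed)
mu, sigma = 0.8, 0.6                           # subthreshold mean drive plus noise

v, last_spike, t, isis = 0.0, 0.0, 0.0, []
while len(isis) < 1000:
    v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * rng.normal()
    t += dt
    if v >= v_th:                              # threshold crossing -> spike
        v = v_reset
        isis.append(t - last_spike)
        last_spike = t

isis = np.asarray(isis)
mean, cv = isis.mean(), isis.std() / isis.mean()
k, scale = 1.0 / cv**2, mean * cv**2           # moment-matched gamma parameters
print(f"mean ISI = {mean:.2f} ms, CV = {cv:.2f}, gamma shape k = {k:.2f}")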


2017 · Vol 27 (03) · pp. 1750002
Author(s): Lilin Guo, Zhenzhong Wang, Mercedes Cabrerizo, Malek Adjouadi

This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are treated as variables that are modulated together with the weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, the effects of its learning parameters, and classification performance. The results show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that, with proper encoding, the CCDS rule achieves good recognition performance.
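The distinguishing ingredient, delays treated as learned variables, can be caricatured as follows. This is emphatically not the published CCDS rule, whose exact form is given in the paper; the sketch only shows a weight and a delay being nudged jointly by spike-timing errors.

import numpy as np

ETA_W, ETA_D = 0.05, 0.1  # assumed learning rates for weight and delay

def update(w, d, t_pre, t_desired, t_actual):
    """Nudge the weight so the neuron fires at the right time, and the delay
    so the presynaptic spike arrives (at t_pre + d) near the desired time."""
    if t_actual is None:
        w += ETA_W                                 # no output spike: potentiate
    else:
        # a late spike calls for a stronger weight (earlier firing), and vice versa
        w += ETA_W * np.sign(t_actual - t_desired)
    d += ETA_D * np.sign(t_desired - (t_pre + d))  # align arrival with target
    return w, max(d, 0.0)

w, d = 0.5, 2.0
for _ in range(50):
    t_actual = 12.0 - w   # toy stand-in: stronger weight -> earlier output spike
    w, d = update(w, d, t_pre=5.0, t_desired=10.0, t_actual=t_actual)
print(f"learned weight {w:.2f}, learned delay {d:.2f} (arrival at {5.0 + d:.2f})")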


2010 · Vol 22 (1) · pp. 273-288
Author(s): Florian Landis, Thomas Ott, Ruedi Stoop

We propose a Hebbian learning-based data clustering algorithm using spiking neurons. The algorithm is capable of distinguishing between clusters and noisy background data and finds an arbitrary number of clusters of arbitrary shape. These properties render the approach particularly useful for segmenting visual scenes into arbitrarily shaped homogeneous regions. We present several application examples, and in order to highlight the advantages and weaknesses of our method, we systematically compare the results with those of standard methods such as k-means and Ward's linkage clustering. The analysis demonstrates that the proposed algorithm not only clusters more powerfully than the two competing methods but also has a more modest time complexity than its strongest widely used competitor.
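The flavor of the approach can be conveyed with pulse-coupled oscillators whose coupling strength decays with distance in data space, so that nearby points absorb one another into a common firing phase; clusters are then read off as synchrony groups. This is a toy in the spirit of the method, not its exact dynamics.

import numpy as np

rng = np.random.default_rng(4)

# Two well-separated Gaussian blobs as toy data.
pts = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(3, 0.2, (20, 2))])
n = len(pts)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
K = 0.02 * np.exp(-dist**2 / 0.5)          # similarity-dependent coupling
np.fill_diagonal(K, 0.0)

phase = rng.random(n)                      # oscillator phases in [0, 1)
last_reset = np.zeros(n, dtype=int)
for step in range(6000):
    phase += 0.001                         # common drift toward threshold
    fired = phase >= 1.0
    if fired.any():
        phase += K[:, fired].sum(axis=1)   # pulses pull similar points up
        crossed = phase >= 1.0             # absorbed units reset together
        phase[crossed] = 0.0
        last_reset[crossed] = step

# Points that reset on the same step belong to one synchrony group;
# for this toy data the count typically converges to one group per blob.
print("synchrony groups:", len(np.unique(last_reset)))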


2009 · Vol 21 (2) · pp. 340-352
Author(s): Robert Urbanczik, Walter Senn

We introduce a new supervised learning rule for the tempotron task: the binary classification of input spike trains by an integrate-and-fire neuron that encodes its decision by firing or not firing. The rule is based on the gradient of a cost function, is found to have enhanced performance, and does not rely on a specific reset mechanism in the integrate-and-fire neuron.
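For orientation, the sketch below uses a common simplified formulation of the tempotron setup: the membrane potential is a weighted sum of exponential input kernels, the decision is a threshold on its peak, and the weights follow the gradient of that peak, which at the maximum is just each input kernel evaluated at the peak time. The kernel, cost, and constants are generic choices and not necessarily those of the paper.

import numpy as np

rng = np.random.default_rng(5)

n_syn, T, tau, theta, eta = 50, 100, 10.0, 1.0, 0.01  # assumed constants
t = np.arange(T)
kernel = lambda s: np.exp(-np.maximum(t - s, 0) / tau) * (t >= s)

def peak_voltage(w, spike_times):
    """Membrane potential = weighted sum of kernels; return its peak and time."""
    v = sum(w[i] * kernel(s) for i, s in enumerate(spike_times))
    return v.max(), v.argmax()

# 40 random patterns (one spike per synapse), half labeled "fire", half "silent".
patterns = [(rng.integers(0, T, n_syn), label) for label in [1, 0] * 20]
w = rng.normal(0, 0.1, n_syn)
for _ in range(200):                        # gradient steps on misclassified patterns
    for spikes, label in patterns:
        vmax, tmax = peak_voltage(w, spikes)
        err = label - int(vmax > theta)     # +1: should fire, -1: should stay silent
        if err:                             # dV_peak/dw_i = kernel_i(t_peak)
            w += eta * err * np.array([kernel(s)[tmax] for s in spikes])

correct = sum((peak_voltage(w, s)[0] > theta) == bool(l) for s, l in patterns)
print(f"training accuracy: {correct}/{len(patterns)}")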


2020 · Vol 10 (23) · pp. 8481
Author(s): Cesar Federico Caiafa, Jordi Solé-Casals, Pere Marti-Puig, Sun Zhe, Toshihisa Tanaka

In many machine learning applications, measurements are sometimes incomplete or noisy, resulting in missing features. In other cases, and for different reasons, the datasets are originally small, so more data samples are required to derive useful supervised or unsupervised classification methods. Correctly handling incomplete, noisy, or small datasets in machine learning is a fundamental and classic challenge. In this article, we provide a unified review of recently proposed methods based on signal decomposition for missing-feature imputation (data completion), classification of noisy samples, and artificial generation of new data samples (data augmentation). We illustrate the application of these signal decomposition methods in diverse practical machine learning examples, including brain-computer interfaces, classification of epileptic intracranial electroencephalogram signals, face recognition/verification, and water-network data analysis. We show that a signal decomposition approach can provide valuable tools to improve machine learning performance with low-quality datasets.
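As one representative member of this family, the sketch below performs missing-feature imputation by iterated truncated-SVD (low-rank) reconstruction, alternating with reinsertion of the observed entries. The rank and iteration count are assumptions, and real data would typically be centered and scaled first.

import numpy as np

rng = np.random.default_rng(6)

def lowrank_impute(X, mask, rank=2, n_iter=50):
    """X: matrix with NaNs at missing entries; mask: True where observed."""
    Xf = np.where(mask, X, np.nanmean(X))          # initialize gaps with the mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r reconstruction
        Xf = np.where(mask, X, Xr)                 # keep observed values fixed
    return Xf

# Toy low-rank data with 20% of the entries removed.
A = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))
mask = rng.random(A.shape) > 0.2
X = np.where(mask, A, np.nan)
A_hat = lowrank_impute(X, mask)
print("RMSE on missing entries:",
      round(float(np.sqrt(np.mean((A_hat[~mask] - A[~mask]) ** 2))), 4))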

