Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier

Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 500 ◽  
Author(s):  
Sergey A. Lobov ◽  
Andrey V. Chernyshov ◽  
Nadia P. Krilova ◽  
Maxim O. Shamshin ◽  
Victor B. Kazantsev

One of the modern trends in the design of human–machine interfaces (HMI) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can simultaneously encode the input signal both in the spiking frequency rate and in the latency of spike generation. In the case of such mixed temporal-rate coding, the SNN should implement learning that works properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that Hebbian learning through pair-based and triplet-based spike-timing-dependent plasticity (STDP) rules is feasible for temporal coding, but not for rate coding. Synaptic competition that depresses poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that the combined use of triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographic (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition enforces the "winner takes all" principle among classifier neurons. The SNN also provides a graded output response that depends on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulating the target classifier neuron synchronously with the network input.
In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, close to the result demonstrated by a multi-layer perceptron trained by error backpropagation.

2010 ◽  
Vol 22 (8) ◽  
pp. 2059-2085 ◽  
Author(s):  
Daniel Bush ◽  
Andrew Philippides ◽  
Phil Husbands ◽  
Michael O'Shea

Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recently it has been demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has demonstrated that it can provide inherent stability or competition based on local synaptic variables. However, it has also been demonstrated that these properties rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights and their dependence on stochastic pre- or postsynaptic firing rates can be manipulated by adjusting the exact profile of the asymmetric learning window and temporal restrictions on spike pair interactions respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. 
However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
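The dependence of mean weight drift on uncorrelated firing rates can be made concrete with a standard asymmetric exponential STDP window (the parameter values below are common illustrative choices, not figures from this study). For uncorrelated Poisson pre- and postsynaptic spikes all pairings are equally likely, so the sign of the window's integral predicts whether random firing potentiates or depresses on average; tilting the window profile flips that sign:

```python
import numpy as np

def stdp_window(dt, A_plus=1.0, A_minus=1.05, tau_plus=17.0, tau_minus=34.0):
    """Asymmetric exponential STDP window; dt = t_post - t_pre in ms."""
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

# Numerically integrate the window over a wide range of spike-time lags.
h = 0.01
dts = np.arange(-300.0, 300.0, h)
drift = stdp_window(dts).sum() * h
print(drift)  # negative: this window profile depresses under random firing
```

Analytically the integral is A_plus·tau_plus − A_minus·tau_minus = 17 − 35.7 = −18.7, so with these constants depression dominates for stochastic spike pairs.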


2003 ◽  
Vol 15 (1) ◽  
pp. 103-125 ◽  
Author(s):  
Naoki Masuda ◽  
Kazuyuki Aihara

A functional role for precise spike timing has been proposed as an alternative hypothesis to rate coding. We show in this article that both the synchronous firing code and the population rate code can be used dually in a common framework of a single neural network model. Furthermore, these two coding mechanisms are bridged continuously by several modulatable model parameters, including shared connectivity, feedback strength, membrane leak rate, and neuron heterogeneity. The rates of change of these parameters are closely related to the response time and the timescale of learning.
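The duality of the two codes can be sketched by reading out one and the same spike raster in two ways: a rate readout (mean spike count) and a synchrony readout (variability of the population count across time bins). The construction below is a generic toy, not the authors' network model; the rates and event probabilities are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 100, 1000   # neurons, 1 ms time bins

# Two spike rasters with approximately the same mean rate:
# one asynchronous, one built from shared population events
async_s = rng.random((n, T)) < 0.02
events = rng.random(T) < 0.025
sync_s = events[None, :] & (rng.random((n, T)) < 0.8)

def rate(s):        # rate readout: mean spikes per neuron per ms
    return s.mean()

def synchrony(s):   # synchrony readout: Fano factor of the population count
    c = s.sum(axis=0)
    return c.var() / c.mean()   # ~1 for independent Poisson-like firing

print(rate(async_s), rate(sync_s))
print(synchrony(async_s), synchrony(sync_s))  # same rate, very different synchrony
```

A downstream population could thus switch between the two codes without any change in mean firing rate, consistent with the idea that the codes coexist in a single network.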


2016 ◽  
Author(s):  
Bryan C. Souza ◽  
Adriano B. L. Tort

Hippocampal place cells convey spatial information through spike frequency (“rate coding”) and spike timing relative to the theta phase (“temporal coding”). Whether rate and temporal coding are due to independent or related mechanisms has been the subject of wide debate. Here we show that the spike timing of place cells couples to theta phase before major increases in firing rate, anticipating the animal’s entrance into the classical, rate-based place field. In contrast, spikes rapidly decouple from theta as the animal leaves the place field and firing rate decreases. Therefore, temporal coding has strong asymmetry around the place field center. We further show that the dynamics of temporal coding along space evolves in three stages: phase coupling, phase precession and phase decoupling. These results suggest that place cells represent more future than past locations through their spike timing and that independent mechanisms govern rate and temporal coding.
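The degree of theta-phase coupling described here is conventionally quantified by the mean resultant length of spike phases. A minimal sketch, with an invented 8 Hz reference oscillation and synthetic spike trains rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 8.0   # theta frequency in Hz (illustrative)

def phase_coupling(spike_times):
    """Mean resultant length of spike theta phases (1 = perfect locking, 0 = none)."""
    phases = (2 * np.pi * theta * spike_times) % (2 * np.pi)
    return np.abs(np.exp(1j * phases).mean())

locked = np.arange(50) / theta + rng.normal(0, 0.004, 50)  # spikes near one theta phase
uniform = rng.uniform(0, 50 / theta, 50)                   # spikes at random phases
print(phase_coupling(locked), phase_coupling(uniform))
```

The asymmetry reported in the abstract corresponds to this coupling measure rising before the rate-based place field is entered and collapsing as the animal leaves it.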


2021 ◽  
Author(s):  
Sebastian H. Bitzenhofer ◽  
Elena A. Westeinde ◽  
Han-Xiong Bear Zhang ◽  
Jeffry S. Isaacson

Olfactory information is encoded in lateral entorhinal cortex (LEC) by two classes of layer 2 (L2) principal neurons: fan and pyramidal cells. However, the functional properties of L2 neurons are unclear. Here, we show in awake mice that L2 cells respond rapidly to odors during single sniffs and that LEC is essential for discrimination of odor identity and intensity. Population analyses of L2 ensembles reveal that while rate coding distinguishes odor identity, firing rates are weakly concentration-dependent and changes in spike timing represent odor intensity. L2 principal cells differ in afferent olfactory input and connectivity with local inhibitory circuits, and the relative timing of pyramidal and fan cell spikes underlies odor intensity coding. Downstream, intensity is encoded purely by spike timing in hippocampal CA1. Together, these results reveal the unique processing of odor information by parallel LEC subcircuits and highlight the importance of temporal coding in higher olfactory areas.
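The intensity-by-timing idea can be caricatured in a few lines. The latency functions below are invented purely for illustration; the direction and slopes of the concentration dependence are assumptions, not the paper's measurements. The point is only that a relative spike-timing difference between two cell classes can vary monotonically with stimulus intensity even if neither class changes its firing rate:

```python
import numpy as np

conc = np.array([0.1, 1.0, 10.0])          # odor concentrations, arbitrary units
# hypothetical first-spike latencies (ms) after sniff onset
fan_latency = 20.0 - 3.0 * np.log10(conc)  # fan cells: strongly concentration-dependent
pyr_latency = 30.0 - 1.0 * np.log10(conc)  # pyramidal cells: weakly dependent
lead = pyr_latency - fan_latency           # relative timing carries intensity
print(lead)  # monotone in concentration while rates could stay flat
```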


2004 ◽  
Vol 7 (1) ◽  
pp. 35-36 ◽  
Author(s):  
BRIAN MACWHINNEY

Truscott and Sharwood Smith (henceforth T&SS) attempt to show how second language acquisition can occur without any learning. In their APT model, change depends only on the tuning of innate principles through the normal course of processing of L2. There are some features of their model that I find attractive. Specifically, their acceptance of the concepts of competition and activation strength brings them in line with standard processing accounts like the Competition Model (Bates and MacWhinney, 1982; MacWhinney, 1987, in press). At the same time, their reliance on parameters as the core constructs guiding learning leaves this model squarely within the framework of Chomsky's theory of Principles and Parameters (P&P). As such, it stipulates that the specific functional categories of Universal Grammar serve as the fundamental guide to both first and second language acquisition. Like other accounts in the P&P framework, this model attempts to view second language acquisition as involving no real learning beyond the deductive process of parameter-setting based on the detection of certain triggers. The specific innovation of the APT model is that changes in activation strength during processing function as the trigger to the setting of parameters. Unlike other P&P models, APT does not set parameters in an absolute fashion, allowing their activation weight to change with the processing of new input over time. The use of the concept of activation in APT is far more restricted than its use in connectionist models that allow for Hebbian learning, self-organizing feature maps, or back-propagation.


2018 ◽  
Vol 32 (01) ◽  
pp. 1750274 ◽  
Author(s):  
Ying-Mei Qin ◽  
Cong Men ◽  
Jia Zhao ◽  
Chun-Xiao Han ◽  
Yan-Qiu Che

We focus on the role of heterogeneity in the propagation of firing patterns in a feedforward network (FFN). The effects of heterogeneity in both neuronal excitability parameters and synaptic delays are investigated systematically. Neuronal heterogeneity is found to modulate firing rates and spiking regularity by changing the excitability of the network. Synaptic delays are strongly related to desynchronized and synchronized firing patterns of the FFN, which indicates that synaptic delays may play a significant role in bridging rate coding and temporal coding. Furthermore, a quasi-coherence resonance (quasi-CR) phenomenon is observed in the parameter domain of connection probability and delay heterogeneity. Together, these phenomena enable a detailed characterization of neuronal heterogeneity in FFNs, which may play an indispensable role in reproducing important properties of in vivo experiments.
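How delay heterogeneity can bridge the two codes is easy to see in a toy calculation (not the paper's FFN model; the delay statistics are invented). A synchronized spike volley passing through a stage with identical synaptic delays stays synchronous and so remains a usable temporal code; heterogeneous delays smear its arrival times, pushing the signal toward an asynchronous, rate-like representation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
volley = 10.0   # a synchronized spike volley leaves one layer at t = 10 ms

def arrival_spread(delay_sd):
    """Temporal spread (std, ms) of the volley after one synaptic stage
    whose delays are drawn with the given heterogeneity."""
    delays = np.clip(2.0 + rng.normal(0.0, delay_sd, n), 0.1, None)
    return (volley + delays).std()

s_homogeneous = arrival_spread(0.0)
s_heterogeneous = arrival_spread(1.0)
print(s_homogeneous, s_heterogeneous)
```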


2020 ◽  
Author(s):  
Matthias Loidolt ◽  
Lucas Rudelt ◽  
Viola Priesemann

How does spontaneous activity during development prepare cortico-cortical connections for sensory input? We here analyse the development of sequence memory, an intrinsic feature of recurrent networks that supports temporal perception. We use a recurrent neural network model with homeostatic and spike-timing-dependent plasticity (STDP). This model has been shown to learn specific sequences from structured input. We show that development even under unstructured input increases unspecific sequence memory. Moreover, networks “pre-shaped” by such unstructured input subsequently learn specific sequences faster. The key structural substrate is the emergence of strong and directed synapses due to STDP and synaptic competition. These construct self-amplifying preferential paths of activity, which can quickly encode new input sequences. Our results suggest that memory traces are not printed on a tabula rasa, but instead harness building blocks already present in the brain.
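The emergence of strong directed synapses under unstructured input is a symmetry-breaking effect, which a toy caricature (not the paper's network model; the update rule and gains are invented) can reproduce with a single reciprocally connected pair. Whichever firing order happens by chance is reinforced by STDP, which in turn makes that order more likely, so one direction ends up strong and the other weak:

```python
import numpy as np

rng = np.random.default_rng(4)
w_ab, w_ba = 0.5, 0.5          # reciprocal weights between units A and B
eta, w_max = 0.05, 1.0

for _ in range(2000):
    # unstructured drive: which unit fires first is random, but biased by
    # the current weights -- the self-amplifying loop described in the text
    if rng.random() < w_ab / (w_ab + w_ba):   # A fired before B
        w_ab += eta * (w_max - w_ab)          # pre-before-post: potentiate A->B
        w_ba -= eta * w_ba                    # post-before-pre: depress B->A
    else:                                     # B fired before A
        w_ba += eta * (w_max - w_ba)
        w_ab -= eta * w_ab

print(w_ab, w_ba)   # one direction strong, the other weak
```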


Author(s):  
RONALD H. SILVERMAN

Neural networks differ from traditional approaches to image processing in terms of their ability to adapt to regularities in image structure and to self-organize so as to implement directed transformations. Biomedical ultrasonic images are often degraded in quality by noise and other factors, making enhancement techniques particularly important. This paper describes the use of back propagation and competitive learning for enhancement and segmentation of ultrasonic images of the eye. Of particular interest is the extension of these techniques to segmentation of three-dimensional data sets, where simple thresholding and gradient operations are not entirely successful.
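Competitive learning for segmentation can be sketched in its simplest winner-take-all form: intensity prototypes compete for each pixel and only the winner adapts, after which pixels are labeled by their winning prototype. This is a generic one-dimensional illustration with synthetic intensities, not the paper's network or data:

```python
import numpy as np

rng = np.random.default_rng(5)

def competitive_segment(pixels, n_classes=3, eta=0.05, epochs=20):
    """Winner-take-all competitive learning over pixel intensities:
    only the prototype closest to each presented pixel adapts."""
    protos = rng.choice(pixels, n_classes, replace=False).astype(float)
    for _ in range(epochs):
        for x in rng.permutation(pixels):
            k = np.argmin(np.abs(protos - x))   # competition: nearest prototype wins
            protos[k] += eta * (x - protos[k])  # only the winner moves
    labels = np.argmin(np.abs(pixels[:, None] - protos[None, :]), axis=1)
    return protos, labels

# toy "ultrasound" intensities: dark background, mid-gray tissue, bright echoes
pixels = np.concatenate([rng.normal(0.1, 0.02, 300),
                         rng.normal(0.5, 0.02, 300),
                         rng.normal(0.9, 0.02, 300)])
protos, labels = competitive_segment(pixels)
print(np.sort(protos))
```

Unlike simple thresholding, the class boundaries here are learned from the intensity distribution itself rather than fixed in advance, which is what makes the approach attractive for noisy three-dimensional data.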

