Complex spike clusters and false-positive rejection in a cerebellar supervised learning rule

2019 ◽  
Vol 597 (16) ◽  
pp. 4387-4406 ◽  
Author(s):  
Heather K. Titley ◽  
Mikhail Kislin ◽  
Dana H. Simmons ◽  
Samuel S.‐H. Wang ◽  
Christian Hansel
2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Falah Y. H. Ahmed ◽  
Siti Mariyam Shamsuddin ◽  
Siti Zaiton Mohd Hashim

A spiking neural network encodes information in the timing of individual spikes. A supervised learning rule for SpikeProp is derived to overcome the discontinuities introduced by spike thresholding. The algorithm is based on an error-backpropagation learning rule suited to supervised learning of spiking neurons that use exact spike-time coding; with it, SpikeProp demonstrates that spiking neurons can perform complex nonlinear classification with fast temporal coding. This study proposes enhancements to the SpikeProp learning algorithm for supervised training of spiking networks on complex patterns: SpikeProp with particle swarm optimization (PSO) and an angle-driven dependency learning rate. These methods are applied to the SpikeProp network to enhance multilayer learning and to optimize the weights. Input and output patterns are encoded as trains of precisely timed spikes, and the network learns to transform the input trains into target output trains. With these enhancements, the proposed methods outperform conventional neural network architectures.
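As a concrete illustration of the PSO enhancement, the sketch below uses global-best PSO to tune the input weights of a single simplified spike-response neuron so that its first output spike lands at a target time. Everything here is an illustrative assumption rather than the paper's setup: the alpha-shaped PSP kernel, the first-spike fitness function, and the PSO constants are invented for the example, and the angle-driven dependency learning rate is not shown.

```python
import numpy as np

# Minimal sketch: PSO over the weights of one simplified spike-response
# neuron. The alpha PSP kernel, first-spike fitness, and PSO constants
# are illustrative assumptions, not the paper's settings.

def first_spike_time(weights, input_times, threshold=1.0, tau=7.0,
                     t_max=50.0, dt=0.1):
    """Return the first time the weighted sum of PSPs crosses threshold,
    or t_max if the neuron stays silent."""
    for t in np.arange(0.0, t_max, dt):
        s = t - input_times                                  # time since inputs
        psp = np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)
        if np.dot(weights, psp) >= threshold:
            return t
    return t_max

def fitness(weights, input_times, target_time):
    return (first_spike_time(weights, input_times) - target_time) ** 2

rng = np.random.default_rng(0)
input_times, target = np.array([0.0, 2.0, 4.0]), 12.0
n_particles, n_dims = 20, len(input_times)
w_inertia, c1, c2 = 0.7, 1.5, 1.5                            # PSO constants
pos = rng.uniform(0.0, 1.0, (n_particles, n_dims))           # candidate weights
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p, input_times, target) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p, input_times, target) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(first_spike_time(gbest, input_times), "vs target", target)
```

Because PSO needs only fitness evaluations and no gradients, it sidesteps the thresholding discontinuity entirely, which is the appeal of pairing it with SpikeProp's gradient-based updates.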


1991 ◽  
Vol 3 (2) ◽  
pp. 201-212 ◽  
Author(s):  
Peter J. B. Hancock ◽  
Leslie S. Smith ◽  
William A. Phillips

We show that a form of synaptic plasticity recently discovered in slices of the rat visual cortex (Artola et al. 1990) can support an error-correcting learning rule. The rule increases weights when both pre- and postsynaptic units are highly active, and decreases them when presynaptic activity is high and postsynaptic activation is less than the threshold for weight increment but greater than a lower threshold. We show that this rule corrects false-positive outputs in a feedforward associative memory, that in an appropriate opponent-unit architecture it corrects misses, and that it performs better than the optimal Hebbian learning rule reported by Willshaw and Dayan (1990).
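A minimal sketch of the two-threshold rule as described above; the step sizes, thresholds, and rate-coded activities in [0, 1] are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the two-threshold plasticity rule. All constants are
# illustrative assumptions; activities are rate-coded in [0, 1].
ETA_PLUS, ETA_MINUS = 0.10, 0.05    # potentiation / depression step sizes
THETA_PLUS, THETA_MINUS = 0.8, 0.3  # upper and lower postsynaptic thresholds
PRE_HIGH = 0.5                      # "highly active" presynaptic criterion

def weight_update(pre, post, w):
    """pre: presynaptic activities (n_pre,); post: postsynaptic activations
    (n_post,); w: weight matrix (n_post, n_pre). Returns updated weights."""
    pre_active = pre > PRE_HIGH
    # increase: pre high AND post above the upper threshold
    potentiate = np.outer(post > THETA_PLUS, pre_active)
    # decrease: pre high AND post between the two thresholds
    depress = np.outer((post > THETA_MINUS) & (post <= THETA_PLUS), pre_active)
    return w + ETA_PLUS * potentiate - ETA_MINUS * depress
```

The depression zone is what rejects false positives: a unit pushed above the lower threshold without reaching the level that marks a correct response has its active inputs weakened, so spurious outputs fade over repeated presentations.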


2013 ◽  
Vol 25 (6) ◽  
pp. 1472-1511 ◽  
Author(s):  
Yan Xu ◽  
Xiaoqin Zeng ◽  
Shuiming Zhong

The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train defined by precise firing times. If only the running time is considered, supervised learning for a spiking neuron amounts to using synaptic weight adjustments to distinguish the desired output spike times from all other times in the neuron's run, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding: it first transforms the supervised learning task into a classification problem and then solves that problem with the perceptron learning rule. Experimental results show that the proposed method achieves higher learning accuracy and efficiency than existing learning methods, making it better suited to complex and real-time problems.
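A minimal sketch of this classification view, under assumed details (alpha PSP kernel, 1 ms time grid, perceptron constants): sample the neuron's run on a time grid, label the samples at desired spike times positive and all other samples negative, and apply the perceptron rule to the vector of PSP values at each sample.

```python
import numpy as np

# Perceptron-over-time-samples sketch. The alpha PSP kernel, time grid,
# and learning constants are illustrative assumptions.

def psp_matrix(input_times, t_grid, tau=7.0):
    """Rows: sample times; columns: afferents (alpha-kernel PSPs)."""
    s = t_grid[:, None] - input_times[None, :]
    return np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)

rng = np.random.default_rng(1)
input_times = rng.uniform(0.0, 50.0, 30)   # one input spike per afferent
t_grid = np.arange(0.0, 50.0, 1.0)
desired = np.array([15.0, 35.0])           # target output spike times
want_spike = np.isin(t_grid, desired)      # positive class at desired times

X = psp_matrix(input_times, t_grid)
w = np.zeros(X.shape[1])
eta, threshold = 0.05, 1.0
for _ in range(200):                       # perceptron epochs over all samples
    for x, positive in zip(X, want_spike):
        fires = x @ w >= threshold
        if fires and not positive:
            w -= eta * x                   # reject a spike at a wrong time
        elif positive and not fires:
            w += eta * x                   # create the missing desired spike
```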


2009 ◽  
Vol 21 (2) ◽  
pp. 340-352 ◽  
Author(s):  
Robert Urbanczik ◽  
Walter Senn

We introduce a new supervised learning rule for the tempotron task: the binary classification of input spike trains by an integrate-and-fire neuron that encodes its decision by firing or not firing. The rule is based on the gradient of a cost function, yields improved performance, and does not rely on a specific reset mechanism in the integrate-and-fire neuron.
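For orientation, here is the tempotron task with the classic max-potential update. Note that this baseline is not the paper's gradient rule, and the double-exponential kernel and constants are illustrative assumptions.

```python
import numpy as np

# Tempotron task baseline (classic max-potential update, not the
# gradient rule of the paper). Kernel and constants are assumptions.

def potential(weights, input_times, t_grid, tau=10.0, tau_s=2.5):
    """Membrane potential on a time grid plus the per-afferent PSP matrix."""
    s = t_grid[:, None] - input_times[None, :]
    k = np.where(s > 0, np.exp(-s / tau) - np.exp(-s / tau_s), 0.0)
    return k @ weights, k

def train_step(weights, input_times, label, t_grid, theta=1.0, eta=0.01):
    """label: True if the neuron should fire for this pattern."""
    v, k = potential(weights, input_times, t_grid)
    t_peak = v.argmax()                      # time of maximal potential
    fired = v[t_peak] >= theta
    if fired != label:                       # misclassified pattern
        weights += eta * (1.0 if label else -1.0) * k[t_peak]
    return weights
```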


2021 ◽  
Author(s):  
Pantelis Vafidis ◽  
David Owald ◽  
Tiziano D’Albis ◽  
Richard Kempter

Ring attractor models for angular path integration have recently received strong experimental support. To function as integrators, head-direction (HD) circuits require precisely tuned connectivity, but it is currently unknown how such tuning could be achieved. Here, we propose a network model in which a local, biologically plausible learning rule adjusts synaptic efficacies during development, guided by supervisory allothetic cues. Applied to the Drosophila HD system, the model learns to path-integrate accurately and develops a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Our model predicts that path integration requires supervised learning during a developmental phase. The model setting is general and also applies to architectures that lack the physical topography of a ring, like the mammalian HD system.
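A heavily simplified sketch of the idea, in which every modeling choice is an illustrative assumption rather than the paper's model: rate units instead of spiking neurons, a Gaussian-like target bump standing in for the allothetic cue, a crude roll-based shifter for the velocity input, and a delta-like local rule that tunes the recurrent weights so the activity bump tracks the supervisory heading.

```python
import numpy as np

# Toy ring of rate units learning its recurrent weights with a local,
# delta-like rule under a supervisory heading signal. Every choice here
# is an illustrative assumption, not the paper's model.
N = 32
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = np.zeros((N, N))                     # recurrent weights, to be learned
rng = np.random.default_rng(2)
eta, dt, tau = 0.02, 0.01, 0.05

def bump(theta, width=0.5):
    """Target activity profile centered on heading theta."""
    return np.exp((np.cos(angles - theta) - 1) / width**2)

theta, r = 0.0, bump(0.0)
for _ in range(20000):                   # developmental phase with supervision
    omega = rng.normal(0.0, 2.0)         # random angular velocity input
    theta = (theta + omega * dt) % (2 * np.pi)
    target = bump(theta)                 # supervisory allothetic cue
    shift = omega * (np.roll(r, 1) - np.roll(r, -1))   # crude velocity drive
    r += dt / tau * (-r + np.maximum(W @ r + shift, 0.0))
    W += eta * np.outer(target - r, r)   # local: post error times pre rate
```

After training, removing the supervisory term leaves the recurrent weights to sustain and move the bump on their own, which is the sense in which the mature network acts as an attractor that path-integrates.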


2010 ◽  
Vol 22 (2) ◽  
pp. 467-510 ◽  
Author(s):  
Filip Ponulak ◽  
Andrzej Kasiński

Learning from instructions or demonstrations is a fundamental property of our brain, necessary to acquire new knowledge and develop novel skills or behavioral patterns. This type of learning is thought to be involved in most of our daily routines. Although the concept of instruction-based learning has been studied for several decades, the exact neural mechanisms implementing this process remain unknown. One of the central questions in this regard is: how do neurons learn to reproduce template signals (instructions) encoded in precisely timed sequences of spikes? Here we present a model of supervised learning for biologically plausible neurons that addresses this question. In a set of experiments, we demonstrate that our approach enables us to train spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli, even in the presence of various sources of noise. We show that the learning rule can also be used for decision-making tasks. Neurons can be trained to classify categories of input signals based only on the temporal configuration of spikes. The decision is communicated by emitting precisely timed spike trains associated with the given input categories. Trained neurons can perform the classification task correctly even if stimuli and the corresponding decision times are temporally separated and the relevant information is consequently heavily overlapped by ongoing neural activity. Finally, we demonstrate that neurons can be trained to reproduce sequences of spikes with a controllable time shift with respect to the target templates. A reproduced signal can follow or even precede the targets. This surprising result suggests that spiking neurons could potentially be applied to forecast the behavior (firing times) of other reference neurons or networks.
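The rule presented here is ReSuMe (the Remote Supervised Method): weights are strengthened around desired spike times and weakened around actual output times, each contribution scaled by a filtered presynaptic trace plus a non-Hebbian term. Below is a minimal ReSuMe-style update on a discrete time grid; the exponential learning window, its time constant, and the constant `a` are illustrative assumptions.

```python
import numpy as np

# ReSuMe-style weight update, discretized. The exponential learning
# window and all constants are illustrative assumptions.

def presynaptic_trace(spikes, t_grid, tau=5.0):
    """Exponentially filtered spike train evaluated on t_grid."""
    s = t_grid[:, None] - spikes[None, :]
    return np.where(s >= 0, np.exp(-s / tau), 0.0).sum(axis=1)

def resume_update(w, pre_spike_lists, out_spikes, target_spikes, t_grid,
                  eta=0.01, a=0.05):
    """Potentiate each afferent around desired spike times and depress it
    around actual output times, scaled by its presynaptic trace plus a
    non-Hebbian term `a` that pulls the firing rate toward the target."""
    for i, pre in enumerate(pre_spike_lists):
        tr = presynaptic_trace(np.asarray(pre), t_grid)
        for t in target_spikes:              # desired spikes: strengthen
            j = min(np.searchsorted(t_grid, t), len(t_grid) - 1)
            w[i] += eta * (a + tr[j])
        for t in out_spikes:                 # actual (erroneous) spikes: weaken
            j = min(np.searchsorted(t_grid, t), len(t_grid) - 1)
            w[i] -= eta * (a + tr[j])
    return w
```

When an output spike coincides with a desired one, the two contributions cancel, so learning stops once the target train is reproduced.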

