Does High Firing Irregularity Enhance Learning?

2011 ◽ Vol 23 (3) ◽ pp. 656-663
Author(s): Chris Christodoulou, Aristodemos Cleanthous

In this note, we demonstrate that the high firing irregularity produced by the leaky integrate-and-fire neuron with the partial somatic reset mechanism enhances learning. This mechanism has been shown to be the most likely candidate for the mechanism used in the brain to reproduce the highly irregular firing of cortical neurons at high rates (Bugmann, Christodoulou, & Taylor, 1997; Christodoulou & Bugmann, 2001). More specifically, it enhances reward-modulated spike-timing-dependent plasticity with an eligibility trace when used in spiking neural networks, as shown by results on the simple XOR benchmark problem as well as on a complex multiagent task.
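As a rough illustration of the mechanism described above, the sketch below simulates a leaky integrate-and-fire neuron whose membrane potential is reset only partially after a spike, and compares the resulting firing irregularity (coefficient of variation of inter-spike intervals) against a full reset under the same noisy drive. All parameter names and values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lif_partial_reset(i_t, dt=1.0, tau=10.0, v_th=1.0, beta=0.91):
    """Leaky integrate-and-fire neuron with partial somatic reset.

    On a spike, the membrane potential is reset to beta * v_th rather
    than to rest; beta is an assumed reset-ratio parameter
    (beta = 0 recovers the standard full reset).
    """
    v = 0.0
    spike_times = []
    for step, i in enumerate(i_t):
        v += dt * (-v / tau + i)          # leaky integration of the input
        if v >= v_th:
            spike_times.append(step * dt)
            v = beta * v_th               # partial reset: stay near threshold
    return np.array(spike_times)

def cv_isi(spike_times):
    """Coefficient of variation of inter-spike intervals (irregularity)."""
    isi = np.diff(spike_times)
    return isi.std() / isi.mean()

# Noisy synaptic drive strong enough to sustain a high firing rate.
rng = np.random.default_rng(0)
drive = rng.poisson(0.3, 5000) * 0.5
cv_partial = cv_isi(lif_partial_reset(drive, beta=0.91))
cv_full = cv_isi(lif_partial_reset(drive, beta=0.0))
```

With the partial reset leaving the potential close to threshold, the neuron can refire after essentially a single input event, which keeps the inter-spike intervals irregular even at high rates, whereas the full reset forces a more regular integration-to-threshold cycle.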

2018 ◽ Vol 39 (4) ◽ pp. 484-487
Author(s): S. Lashkare, S. Chouhan, T. Chavan, A. Bhat, P. Kumbhare, ...

Author(s): Yu Qi, Jiangrong Shen, Yueming Wang, Huajin Tang, Hang Yu, ...

Spiking neural networks (SNNs) are considered biologically plausible and power-efficient on neuromorphic hardware. However, unlike the brain, most existing SNN algorithms use fixed network topologies and connection relationships. This paper proposes a method to learn network connections and link weights jointly. The connection structures are optimized by the spike-timing-dependent plasticity (STDP) rule with timing information, and the link weights are optimized by a supervised algorithm. The connection structures and the weights are learned alternately until a termination condition is satisfied. Experiments are carried out on four benchmark datasets. Our approach outperforms classical learning methods such as STDP, Tempotron, SpikeProp, and a state-of-the-art supervised algorithm. In addition, the learned structures reduce the number of connections by about 24%, thus improving the computational efficiency of the network.
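The structure-learning half of the alternating scheme might be sketched as follows. The pair-based STDP scoring and quantile-based pruning below are illustrative assumptions; the paper's actual rule, supervised weight update, and termination condition are not reproduced here:

```python
import numpy as np

def stdp_score(pre_spikes, post_spikes, a_plus=1.0, a_minus=1.05, tau=20.0):
    """Pair-based STDP score for one connection: positive when the
    presynaptic neuron tends to fire just before the postsynaptic one."""
    score = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                score += a_plus * np.exp(-dt / tau)   # causal pair: potentiate
            elif dt < 0:
                score -= a_minus * np.exp(dt / tau)   # anti-causal: depress
    return score

def prune_connections(scores, mask, keep_frac=0.76):
    """Drop the lowest-scoring active connections, keeping keep_frac of
    them (keep_frac = 0.76 loosely mirrors the ~24% reduction reported)."""
    thresh = np.quantile(scores[mask], 1.0 - keep_frac)
    return mask & (scores >= thresh)
```

In a full training loop, `prune_connections` (structure step) and a supervised update of the surviving weights would be applied alternately until the termination condition is met.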


2012 ◽ Vol 2012 ◽ pp. 1-16
Author(s): X. Zhang, G. Foderaro, C. Henriquez, A. M. J. VanDongen, S. Ferrari

This paper presents a deterministic and adaptive spike model derived from radial basis functions and a leaky integrate-and-fire sampler, developed for training spiking neural networks without direct weight manipulation. Several algorithms have been proposed for training spiking neural networks through biologically plausible learning mechanisms, such as spike-timing-dependent synaptic plasticity and Hebbian plasticity. These algorithms typically rely on the ability to update the synaptic strengths, or weights, directly, through a weight update rule in which the weight increment is computed from the training equations. However, in several potential applications of adaptive spiking neural networks, including neuroprosthetic devices and CMOS/memristor nanoscale neuromorphic chips, the weights cannot be manipulated directly and instead tend to change over time by virtue of the pre- and postsynaptic neural activity. This paper presents an indirect learning method that induces changes in the synaptic weights by modulating spike-timing-dependent plasticity through controlled input spike trains. In place of the weights, the algorithm manipulates the input spike trains used to stimulate the input neurons, determining a sequence of spike timings that minimizes a desired objective function and thereby indirectly induces the desired synaptic plasticity in the network.
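A minimal sketch of the indirect-learning idea, assuming a standard additive pair-based STDP rule (the helper names and the one-pair timing search are illustrative, not the paper's algorithm): since the controller cannot touch the weight itself, it picks the input spike timing whose induced STDP change best matches the desired weight increment.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight increment induced by one pre/post spike pair under
    additive pair-based STDP."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)   # pre before post: potentiation
    return -a_minus * np.exp(dt / tau)      # post before pre: depression

def best_input_time(t_post, target_dw, candidates):
    """Choose the controlled input spike time whose induced plasticity
    best matches the desired weight increment (a toy stand-in for the
    paper's objective-function minimization over spike timings)."""
    return min(candidates, key=lambda t: abs(stdp_dw(t, t_post) - target_dw))
```

For example, to potentiate a synapse whose postsynaptic neuron is expected to fire at t = 50 ms, the search favors an input spike placed shortly before 50 ms, since that pairing yields the largest positive STDP increment.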

