An Online Structural Plasticity Rule for Generating Better Reservoirs

2016 · Vol. 28 (11) · pp. 2557–2584
Author(s):  
Subhrajit Roy ◽  
Arindam Basu

In this letter, we propose a novel neuro-inspired low-resolution online unsupervised learning rule to train the reservoir, or liquid, of liquid state machines. The liquid is a large, sparsely interconnected recurrent network of spiking neurons. The proposed learning rule is inspired by structural plasticity and trains the liquid by forming and eliminating synaptic connections. Learning therefore involves rewiring of the reservoir connections, similar to the structural plasticity observed in biological neural networks. The network connections can be stored as a connection matrix and updated in memory using address event representation (AER) protocols, which are commonly employed in neuromorphic systems. On investigating the pairwise separation property, we find that trained liquids provide 1.36 ± 0.18 times more interclass separation while retaining similar intraclass separation compared to random liquids. Moreover, analysis of the linear separation property reveals that trained liquids are 2.05 ± 0.27 times better than random liquids. Furthermore, we show that our liquids retain the generalization ability and generality of random liquids. A memory analysis shows that trained liquids have 83.67 ± 5.79 ms longer fading memory than random liquids, which exhibit 92.8 ± 5.03 ms fading memory for a particular type of spike train input. We also shed some light on the dynamics of the evolution of recurrent connections within the liquid. Finally, compared to separation-driven synaptic modification, a recently proposed algorithm for iteratively refining reservoirs, our learning rule provides 9.30%, 15.21%, and 12.52% more liquid separation and 2.8%, 9.1%, and 7.9% better classification accuracy for 4-, 8-, and 12-class pattern recognition tasks, respectively.
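The abstract does not give the rule in closed form, but its core move, forming and eliminating entries of a stored connection matrix, can be sketched. Below is a minimal illustrative Python sketch in which each neuron periodically drops its least-correlated afferent and forms a connection elsewhere; the fan-in, the correlation criterion, and all names are assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fanin = 100, 8            # liquid size and per-neuron fan-in (illustrative)

# Binary connection matrix: C[i, j] = 1 if neuron j projects to neuron i.
C = np.zeros((n, n), dtype=np.uint8)
for i in range(n):
    others = np.delete(np.arange(n), i)
    C[i, rng.choice(others, size=fanin, replace=False)] = 1

def rewire_step(C, activity):
    """One structural-plasticity step: every neuron eliminates its least
    correlated afferent and forms a connection to a random non-afferent.
    `activity` is an (n, T) array of low-resolution spike counts."""
    m = C.shape[0]
    for i in range(m):
        pre = np.flatnonzero(C[i])
        corr = [np.corrcoef(activity[j], activity[i])[0, 1] for j in pre]
        C[i, pre[np.nanargmin(corr)]] = 0          # eliminate the weakest synapse
        free = np.flatnonzero((C[i] == 0) & (np.arange(m) != i))
        C[i, rng.choice(free)] = 1                 # form a new one elsewhere
    return C
```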

2006 · Vol. 18 (3) · pp. 591–613
Author(s):  
Peter Tiňo ◽  
Ashely J. S. Mills

We investigate the possibility of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. We show that temporal structures with unbounded input memory, specified by simple Moore machines (MMs), can be induced by recurrent spiking neuron networks (RSNNs). The networks are able to discover pulse-coded representations of abstract information-processing states that code potentially unbounded histories of processed inputs. We show that it is often possible to extract the target MM from a trained RSNN by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in an RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.
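The extraction step, grouping similar recurrent-layer spike trains into abstract states, can be illustrated with a short sketch. The clustering method, the spike-train encoding, and the function names below are assumptions; the paper's actual procedure may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_machine(recurrent_codes, inputs, n_states):
    """Group similar recurrent-layer spike patterns into abstract states and
    read off a Moore-machine-like transition table.
    recurrent_codes: (T, d) array, e.g. first-spike times per step (assumed).
    inputs: length-T sequence of input symbols."""
    states = KMeans(n_clusters=n_states, n_init=10).fit_predict(recurrent_codes)
    transitions = {}
    for t in range(len(inputs) - 1):
        # Input at step t+1 drives the transition from state[t] to state[t+1].
        transitions[(states[t], inputs[t + 1])] = states[t + 1]
    return states, transitions
```

Reading off `transitions` yields a candidate machine whose mismatches with the target MM expose what the RSNN failed to learn.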


1991 · Vol. 3 (2) · pp. 201–212
Author(s):  
Peter J. B. Hancock ◽  
Leslie S. Smith ◽  
William A. Phillips

We show that a form of synaptic plasticity recently discovered in slices of the rat visual cortex (Artola et al. 1990) can support an error-correcting learning rule. The rule increases weights when both pre- and postsynaptic units are highly active, and decreases them when presynaptic activity is high and postsynaptic activation is below the threshold for weight increment but above a lower threshold. We show that this rule corrects false-positive outputs in feedforward associative memory, that in an appropriate opponent-unit architecture it corrects misses, and that it performs better than the optimal Hebbian learning rule reported by Willshaw and Dayan (1990).
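The two-threshold rule as described maps directly onto code. Here is a minimal sketch, assuming scalar activations in [0, 1]; the threshold and learning-rate values are illustrative, and reusing the same "high" threshold for the presynaptic unit is an assumption.

```python
def abs_update(w, pre, post, lr=0.01, theta_plus=0.8, theta_minus=0.3):
    """Two-threshold plasticity rule sketched from the abstract.
    `pre` and `post` are unit activations in [0, 1]; thresholds illustrative."""
    if pre > theta_plus and post > theta_plus:
        return w + lr          # both units highly active: potentiate
    if pre > theta_plus and theta_minus < post <= theta_plus:
        return w - lr          # pre high, post in the intermediate band: depress
    return w                   # otherwise leave the weight unchanged
```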


2020 · Vol. 3 (2) · p. 28
Author(s):  
Ran Cheng ◽  
Khalid B. Mirza ◽  
Konstantin Nikolic

This paper describes the design and mode of operation of a neuromorphic robotic platform based on SpiNNaker and its implementation on the goalkeeper task. The robotic system uses an address event representation (AER) camera (a dynamic vision sensor, DVS) to capture features of a moving ball, and a servo motor to position the goalkeeper to intercept the incoming ball. At the backbone of the system is a microcontroller (Arduino Due) that facilitates communication and control between the different robot parts. A spiking neuronal network (SNN) running on SpiNNaker predicts the arrival location of the moving ball and decides where to place the goalkeeper. In our setup, the maximum data transmission speed of the closed-loop system is approximately 3000 packets per second for both uplink and downlink, and the robot can intercept balls moving at up to 1 m/s from a distance of about 0.8 m. The interception accuracy is up to 85%, the response latency is 6.5 ms, and the maximum power consumption is 7.15 W, improving on previous PC-based implementations. Here, a simplified SNN has been developed for the 'interception of a moving object' task in order to demonstrate the platform; a generalised SNN for this task remains a nontrivial problem. A demo video of the robot goalie is available on YouTube.
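The closed loop, sense AER events, predict the arrival point with the SNN, move the servo, can be outlined as follows. All three functions are hypothetical stubs standing in for the DVS, the SpiNNaker-hosted SNN, and the Arduino-driven servo; none of this is the authors' code.

```python
import time

def read_dvs_events():
    """Hypothetical stub for the AER/DVS camera feed: (x, y, timestamp) events."""
    return [(64, 40, time.time())]

def snn_predict_intercept(events):
    """Stand-in for the SpiNNaker-hosted SNN: toy estimate of arrival position."""
    xs = [x for x, _, _ in events]
    return sum(xs) / len(xs)

def move_servo(position):
    """Hypothetical stub for the Arduino-driven servo command."""
    print(f"goalkeeper -> {position:.1f}")

# Closed loop: sense events, predict arrival, act. The real system streams
# AER packets (~3000/s) between the DVS, SpiNNaker and the Arduino Due.
for _ in range(3):
    move_servo(snn_predict_intercept(read_dvs_events()))
```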


2002 · Vol. 12 (03n04) · pp. 247–262
Author(s):  
Roelof K. Brouwer

This paper defines the truncated normalized max product operation for the transformation of states of a network and provides a method for solving a set of equations based on this operation. The operation serves as the transformation for a set of fully connected units in a recurrent network that might otherwise consist of linear threshold units. Component values of the state vector and the outputs of the units take values in the set {0, 0.1, …, 0.9, 1}. The result is a much larger state space, for a given number of units and size of connection matrix, than for a network based on threshold units. Since the operation defined here can form the basis of transformations in a recurrent network with a finite number of states, fixed points or cycles are possible, and a network using this operation for its transformations can serve as an associative memory or pattern classifier, with fixed points taking on the role of prototypes. Discrete fully recurrent networks have proven very useful as associative memories and classifiers. However, they are often based on units with binary states, so data consisting of vectors in ℝ^n must be converted to vectors in {0, 1}^m with m much larger than n, since binary encoding based on positional notation is not feasible. This implies a large increase in the number of components. The effect can be lessened by allowing more states per unit, as in our network. As the simulations show, the proposed network exhibits the properties desirable in an associative memory very well.
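The abstract does not spell out the operation's formula, but one plausible reading can be sketched: each unit takes the maximum of its weighted inputs, the state vector is normalized, and values are snapped to the 11 admissible levels. The details below (nearest-level quantization in particular) are assumptions.

```python
import numpy as np

LEVELS = np.round(np.arange(0, 1.1, 0.1), 1)   # unit states {0, 0.1, ..., 1}

def trunc_norm_max_product(W, s):
    """One possible reading of the truncated normalized max product
    transform: y_i = max_j W[i, j] * s[j], normalized into [0, 1],
    then truncated to the nearest admissible level."""
    y = (W * s).max(axis=1)          # max product over afferents
    if y.max() > 0:
        y = y / y.max()              # normalize into [0, 1]
    # Snap each component to the closest of the 11 levels.
    return LEVELS[np.abs(LEVELS[None, :] - y[:, None]).argmin(axis=1)]
```

Iterating this map from an initial state then either reaches a fixed point (a stored prototype) or enters a cycle, since the state space is finite.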


2020
Author(s):  
Toviah Moldwin ◽  
Menachem Kalmenson ◽  
Idan Segev

Synaptic clustering on neuronal dendrites has been hypothesized to play an important role in implementing pattern recognition. Neighboring synapses on a dendritic branch can interact in a synergistic, cooperative manner via the nonlinear voltage dependence of NMDA receptors. Inspired by the NMDA receptor, the single-branch clusteron learning algorithm (Mel 1991) takes advantage of location-dependent multiplicative nonlinearities to solve classification tasks by randomly shuffling the locations of "under-performing" synapses on a model dendrite during learning ("structural plasticity"), eventually resulting in synapses with correlated activity being placed next to each other on the dendrite. We propose an alternative model, the gradient clusteron, or G-clusteron, which uses an analytically derived gradient descent rule in which synapses are "attracted to" or "repelled from" each other in an input- and location-dependent manner. We demonstrate the classification ability of this algorithm by testing it on the MNIST handwritten digit dataset and show that, when using a softmax activation function, the accuracy of the G-clusteron on the All-vs-All MNIST task (85.9%) approaches that of logistic regression (92.6%). In addition to the synaptic location update rule, we also derive a learning rule for the synaptic weights of the G-clusteron ("functional plasticity") and show that the G-clusteron with both plasticity rules can achieve 89.5% accuracy on the MNIST task and can learn to solve the XOR problem from arbitrary initial conditions.
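A toy version of the location-update idea can be sketched by assuming a Gaussian proximity kernel between synapse locations and pairwise multiplicative interactions; the paper's actual activation function and derived gradient are richer than this.

```python
import numpy as np

def gclusteron_output(x, loc, sigma=1.0):
    """Toy clusteron-style activation: pairwise multiplicative interactions
    weighted by a Gaussian proximity kernel over synapse locations `loc`.
    (The kernel and activation are assumptions, not the paper's model.)"""
    K = np.exp(-((loc[:, None] - loc[None, :]) ** 2) / (2 * sigma ** 2))
    return x @ K @ x

def location_gradient(x, loc, sigma=1.0):
    """d(output)/d(loc): co-active nearby synapses attract, mirroring the
    input- and location-dependent attraction/repulsion in the abstract."""
    d = loc[:, None] - loc[None, :]
    K = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return -2 * (x[:, None] * x[None, :] * K * d / sigma ** 2).sum(axis=1)
```

Gradient ascent on the locations, e.g. `loc += lr * location_gradient(x, loc)`, then pulls synapses with correlated inputs together and pushes others apart.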


1991 · Vol. 3 (3) · pp. 375–385
Author(s):  
A. D. Back ◽  
A. C. Tsoi

A new neural network architecture is proposed, involving local-feedforward global-feedforward and/or local-recurrent global-feedforward structures. A learning rule minimizing a mean squared error criterion is derived. The performance of this algorithm (for the local-recurrent global-feedforward architecture) is compared with that of a local-feedforward global-feedforward architecture. It is shown that the local-recurrent global-feedforward model performs better than the local-feedforward global-feedforward model.
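The "local recurrent, global feedforward" idea is commonly realized by giving each connection its own linear filter, so the feedback stays inside the synapse while the network as a whole remains feedforward. The sketch below assumes synapses modeled as IIR filters; the names and the tanh squashing are illustrative.

```python
import numpy as np

def iir_synapse(u, b, a):
    """Local-recurrent synapse: a linear IIR filter applied to the input
    sequence `u`, with feedforward taps `b` and local feedback taps `a`."""
    y = np.zeros(len(u), dtype=float)
    for t in range(len(u)):
        y[t] = sum(b[k] * u[t - k] for k in range(len(b)) if t - k >= 0)
        y[t] += sum(a[k] * y[t - 1 - k] for k in range(len(a)) if t - 1 - k >= 0)
    return y

def neuron(inputs, filters):
    """Globally feedforward neuron: squash the sum of locally filtered inputs.
    `inputs` is a list of equal-length sequences, `filters` a list of (b, a)."""
    total = sum(iir_synapse(u, b, a) for u, (b, a) in zip(inputs, filters))
    return np.tanh(total)
```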


1996 · Vol. 8 (5) · pp. 895–938
Author(s):  
Randall C. O'Reilly

The error backpropagation learning algorithm (BP) is generally considered biologically implausible because it does not use locally available, activation-based variables. A version of BP that can be computed locally using bidirectional activation recirculation (Hinton and McClelland 1988) instead of backpropagated error derivatives is more biologically plausible. This paper presents a generalized version of the recirculation algorithm (GeneRec), which overcomes several limitations of the earlier algorithm by using a generic recurrent network with sigmoidal units that can learn arbitrary input/output mappings. However, the contrastive Hebbian learning algorithm (CHL, also known as DBM or mean field learning) also uses local variables to perform error-driven learning in a sigmoidal recurrent network. CHL was derived in a stochastic framework (the Boltzmann machine), but has been extended to the deterministic case in various ways, all of which rely on problematic approximations and assumptions, leading some to conclude that it is fundamentally flawed. This paper shows that CHL can be derived instead from within the BP framework via the GeneRec algorithm. CHL is a symmetry-preserving version of GeneRec that uses a simple approximation to the midpoint or second-order accurate Runge-Kutta method of numerical integration, which explains the generally faster learning speed of CHL compared to BP. Thus, all known fully general error-driven learning algorithms that use local activation-based variables in deterministic networks can be considered variations of the GeneRec algorithm (and indirectly, of the backpropagation algorithm). GeneRec therefore provides a promising framework for thinking about how the brain might perform error-driven learning. To further this goal, an explicit biological mechanism is proposed that would be capable of implementing GeneRec-style learning. This mechanism is consistent with available evidence regarding synaptic modification in neurons in the neocortex and hippocampus, and makes further predictions.
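The two update rules at issue are standard and can be stated compactly. In the sketch below, the minus phase holds the network's own settled activations (the expectation) and the plus phase the activations with the target clamped (the outcome); `x` denotes sending-unit and `y` receiving-unit activations.

```python
import numpy as np

def generec_update(W, x_minus, y_minus, y_plus, lr=0.1):
    """GeneRec: the error term is the difference between plus-phase
    (clamped/outcome) and minus-phase (expectation) receiving activations."""
    return W + lr * np.outer(x_minus, y_plus - y_minus)

def chl_update(W, x_minus, y_minus, x_plus, y_plus, lr=0.1):
    """CHL: Hebbian term from the plus phase minus anti-Hebbian term from the
    minus phase; a symmetry-preserving, midpoint-style variant of GeneRec."""
    return W + lr * (np.outer(x_plus, y_plus) - np.outer(x_minus, y_minus))
```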


2009 · Vol. 21 (12) · pp. 3408–3428
Author(s):  
Christian Leibold ◽  
Michael H. K. Bendels

Short-term synaptic plasticity is modulated by long-term synaptic changes. There is, however, no general agreement on the computational role of this interaction. Here, we derive a learning rule for the release probability and the maximal synaptic conductance in a circuit model with combined recurrent and feedforward connections that allows learning to discriminate among natural inputs. Short-term synaptic plasticity thereby provides a nonlinear expansion of the input space of a linear classifier, whereas the random recurrent network serves to decorrelate the expanded input space. Computer simulations reveal that the twofold increase in the number of input dimensions through short-term synaptic plasticity improves the performance of a standard perceptron by up to 100%. The distributions of release probabilities and maximal synaptic conductances at the capacity limit strongly depend on the balance between excitation and inhibition. The model also suggests a new computational interpretation of spikes evoked by stimuli outside the classical receptive field. These neuronal activities may reflect decorrelation of the expanded stimulus space by intracortical synaptic connections.
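The nonlinear expansion can be sketched by viewing each input through one depressing and one facilitating synapse, doubling the feature count for the downstream linear classifier. The steady-state expressions below follow common Tsodyks-Markram-style mean-field forms with illustrative parameters; this is not the paper's derived learning rule for release probability and conductance.

```python
import numpy as np

def stp_expand(rates, U=0.5, tau_rec=0.5, tau_fac=0.5):
    """Map each input firing rate to two features: the steady-state drive of a
    depressing synapse and of a facilitating synapse (parameters illustrative)."""
    r = np.asarray(rates, dtype=float)
    depressing = U * r / (1.0 + U * r * tau_rec)          # saturates with rate
    u = (U + U * r * tau_fac) / (1.0 + U * r * tau_fac)   # facilitated release prob.
    facilitating = u * r / (1.0 + u * r * tau_rec)
    return np.concatenate([depressing, facilitating])     # doubled dimensionality
```

The concatenated feature vector would then be fed to a standard perceptron, as in the simulations described above.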


1999 · Vol. 11 (1) · pp. 117–137
Author(s):  
Bruce Graham ◽  
David Willshaw

The associative net model of heteroassociative memory with binary-valued synapses has been extended to include recent experimental data indicating that in the hippocampus, one form of synaptic modification is a change in the probability of synaptic transmission. Pattern pairs are stored in the net by a version of the Hebbian learning rule that changes the probability of transmission at synapses where the presynaptic and postsynaptic units are simultaneously active from a low, base value to a high, modified value. Numerical calculations of the expected recall response of this stochastic associative net have been used to assess performance for different values of the base and modified probabilities. If there is a cost to generating the difference between these probabilities, then a difference of about 0.4 is optimal. This corresponds to the magnitude of change seen experimentally. Performance can be greatly enhanced by using multiple cue presentations during recall.
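The storage and recall scheme maps onto a short sketch: co-activity during storage raises a synapse's transmission probability, and recall averages stochastic dendritic sums over several cue presentations. The probability values, the recall threshold `theta`, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def store(pairs, n_in, n_out, p_base=0.1, p_mod=0.5):
    """Willshaw-style storage with stochastic synapses: co-activity raises the
    transmission probability from p_base to p_mod (the paper finds a difference
    of about 0.4 to be optimal when the difference is costly)."""
    P = np.full((n_out, n_in), p_base)
    for a, b in pairs:                       # a: binary input, b: binary output
        P[np.outer(b, a).astype(bool)] = p_mod
    return P

def recall(P, cue, n_presentations=10, theta=0.4):
    """Average the Bernoulli dendritic sums over repeated cue presentations
    (multiple presentations improve recall), then threshold the result."""
    mean_sum = np.mean([(rng.random(P.shape) < P).astype(float) @ cue
                        for _ in range(n_presentations)], axis=0)
    return (mean_sum >= theta * cue.sum()).astype(int)
```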

