EFFECTS OF DILATION AND TRANSLATION ON A PERCEPTRON-TYPE LEARNING RULE FOR HIGHER ORDER HOPFIELD NEURAL NETWORKS

2002, Vol. 12 (02), pp. 83-93
Author(s): BURKHARD LENZE, JÖRG RADDATZ

In this paper, we will take a further look at a generalized perceptron-like learning rule that uses dilation and translation parameters to enhance the recall performance of higher order Hopfield neural networks without significantly increasing their complexity. We will study the practical influence of these parameters on the perceptron learning and recall process, using a generalized version of the Hebbian learning rule for initialization. Our analysis will be based on a pattern recognition problem with random patterns. We will see that, in the case of a highly correlated set of patterns, some improvement in learning and recall performance can be gained. On the other hand, we will show that the dilation and translation parameters have to be chosen carefully to obtain a positive result.
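To make the setting concrete, the following Python sketch shows a second-order (quadratic) Hopfield network with a Hebbian outer-product initialization and a recall threshold parameterized by a dilation a and a translation b, followed by perceptron-style corrections. The function names, the placement of a and b inside the threshold, and the correction step are illustrative assumptions, not the exact rule analyzed by Lenze and Raddatz.

```python
import numpy as np

def hebbian_init(patterns):
    """Generalized Hebbian (outer-product) initialization of the second-order
    weight tensor W[i, j, k] from bipolar {-1, +1} patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n, n))
    for x in patterns:
        W += np.einsum('i,j,k->ijk', x, x, x)
    return W / len(patterns)

def recall_step(W, s, a=1.0, b=0.0):
    """One synchronous recall sweep; the second-order local field is passed
    through a threshold with dilation a and translation b (assumed placement)."""
    h = np.einsum('ijk,j,k->i', W, s, s)
    return np.sign(a * h - b)

def perceptron_correction(W, patterns, a=1.0, b=0.0, eta=0.1, epochs=20):
    """Perceptron-like corrections applied only where a stored pattern is not
    recalled correctly (illustrative, not the published rule)."""
    for _ in range(epochs):
        for x in patterns:
            err = x - recall_step(W, x, a, b)   # zero on already-stable bits
            W += eta * np.einsum('i,j,k->ijk', err, x, x)
    return W
```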

2017, Vol. 7 (4), pp. 257-264
Author(s): Toshifumi Minemoto, Teijiro Isokawa, Haruhiko Nishimura, Nobuyuki Matsui

The Hebbian learning rule is well known as a memory-storing scheme for associative memory models. This scheme is simple and fast; however, its performance degrades when the memory patterns are not orthogonal to each other. Pseudo-orthogonalization is a decorrelating method for memory patterns that uses XNOR masking between the memory patterns and randomly generated patterns. By combining this method with the Hebbian learning rule, the storage capacity of associative memories for non-orthogonal patterns is improved without high computational cost. The memory patterns can also be retrieved by a simulated annealing method using an external stimulus pattern. By utilizing complex numbers and quaternions, pseudo-orthogonalization can be extended to complex-valued and quaternionic Hopfield neural networks. In this paper, the extended pseudo-orthogonalization methods for associative memories based on complex numbers and quaternions are examined from the viewpoint of correlations in the memory patterns. We show that the method has more stable recall performance on highly correlated memory patterns than the conventional real-valued method.
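For bipolar {-1, +1} coding, XNOR of two components equals their product, so the masking step can be sketched as below. The mask ratio and function names are assumptions for illustration; the simulated-annealing retrieval stage and the complex-valued/quaternionic extensions are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_orthogonalize(patterns, mask_ratio=0.5):
    """Decorrelate bipolar {-1, +1} memory patterns by XNOR-masking a random
    subset of positions with freshly generated random values; for bipolar
    coding, XNOR of two components equals their product."""
    out = patterns.astype(float)
    n = out.shape[1]
    for p in out:                                  # rows are views: edits persist
        idx = rng.random(n) < mask_ratio           # positions to be masked
        p[idx] *= rng.choice([-1.0, 1.0], size=int(idx.sum()))
    return out

def hebbian_weights(stored):
    """Standard Hebbian (outer-product) storage with zero self-coupling."""
    W = stored.T @ stored / stored.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

# Usage: decorrelate first, then store with the plain Hebbian rule.
patterns = rng.choice([-1.0, 1.0], size=(10, 100))
W = hebbian_weights(pseudo_orthogonalize(patterns))
```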


1989, Vol. 03 (07), pp. 555-560
Author(s): M.V. TSODYKS

We consider the Hopfield model with the simplest form of the Hebbian learning rule, in which only simultaneous activity of the pre- and post-synaptic neurons leads to modification of the synapse. An extra inhibition proportional to the full network activity is needed. Both symmetric nondiluted and asymmetric diluted networks are considered. The model performs well at an extremely low level of activity, p < K^{-1/2}, where K is the mean number of synapses per neuron.
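A minimal sketch of such a network for 0/1 neurons is given below: the couplings grow only through coactivation, and the update subtracts an inhibition proportional to the current total activity. The inhibition strength and threshold are placeholders, not the paper's parameters.

```python
import numpy as np

def store(patterns):
    """Coactivation-only Hebbian rule for 0/1 patterns: a coupling J[i, j]
    grows only when neurons i and j fire together in a stored pattern."""
    J = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(J, 0.0)
    return J

def update(J, s, inhibition=0.5, theta=0.0):
    """Parallel update with an extra inhibition proportional to the total
    network activity (strength and threshold are illustrative)."""
    h = J @ s - inhibition * s.sum() - theta
    return (h > 0).astype(int)
```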


1992, Vol. 03 (01), pp. 83-101
Author(s): D. Saad

The Minimal Trajectory (MINT) algorithm for training recurrent neural networks with a stable end point is based on an algorithmic search for the system's representations in the neighbourhood of the minimal trajectory connecting the input-output representations. These representations appear to be the most probable set for solving the global perceptron problem related to the common weight matrix connecting all representations of successive time steps in a recurrent discrete neural network. The search for a proper set of system representations is aided by representation modification rules similar to those presented in our former paper [1], aimed at supporting contributing hidden and non-end-point representations while suppressing non-contributing ones. Similar representation modification rules were used in other training methods for feed-forward networks [2-4], based on modification of the internal representations. A feed-forward version of the MINT algorithm will be presented in another paper [5]. Once a proper set of system representations is chosen, the weight matrix is modified accordingly via the Perceptron Learning Rule (PLR) to obtain the proper input-output relation. Computer simulations carried out for the restricted cases of parity and teacher-net problems show rapid convergence of the algorithm in comparison with other existing algorithms, together with modest memory requirements.
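The final stage described above, fitting the common weight matrix to a chosen set of successive representations via the Perceptron Learning Rule, can be sketched as follows for bipolar units. The representation search and the modification rules of MINT are not reproduced; function and parameter names are illustrative.

```python
import numpy as np

def perceptron_learning_rule(W, reps_in, reps_out, eta=0.1, epochs=100):
    """Fit a common weight matrix W so that each bipolar representation at
    time t is mapped onto the desired representation at time t + 1; every
    output unit is a perceptron trained on the same (input, target) pairs."""
    for _ in range(epochs):
        converged = True
        for x, t in zip(reps_in, reps_out):
            y = np.sign(W @ x)
            wrong = (y != t)                          # units missing their target
            if wrong.any():
                converged = False
                W[wrong] += eta * np.outer(t[wrong], x)   # classic PLR correction
        if converged:                                 # stop once all pairs are reproduced
            break
    return W
```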


2010, Vol. 22 (6), pp. 1399-1444
Author(s): Michael Pfeiffer, Bernhard Nessler, Rodney J. Douglas, Wolfgang Maass

We introduce a framework for decision making in which the learning of decision making is reduced to its simplest and biologically most plausible form: Hebbian learning on a linear neuron. We cast our Bayesian-Hebb learning rule as reinforcement learning in which certain decisions are rewarded and prove that each synaptic weight will on average converge exponentially fast to the log odds of receiving a reward when its pre- and postsynaptic neurons are active. In our simple architecture, a particular action is selected from the set of candidate actions by a winner-take-all operation. The global reward assigned to this action then modulates the update of each synapse. Apart from this global reward signal, our reward-modulated Bayesian Hebb rule is a pure Hebb update that depends only on the coactivation of the pre- and postsynaptic neurons, not on the weighted sum of all presynaptic inputs to the postsynaptic neuron as in the perceptron learning rule or the Rescorla-Wagner rule. This simple approach to action-selection learning requires that information about sensory inputs be presented to the Bayesian decision stage in a suitably preprocessed form resulting from other adaptive processes (acting on a larger timescale) that detect salient dependencies among input features. Hence our proposed framework for fast learning of decisions also provides interesting new hypotheses regarding neural nodes and computational goals of cortical areas that provide input to the final decision stage.
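One simple update with the stated fixed-point property is sketched below: a purely local rule, applied only when the pre- and postsynaptic units are coactive and modulated by the global reward, whose average equilibrium is the log odds of reward. The particular parameterization is an assumed illustrative form, not necessarily the authors' exact Bayesian Hebb rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def bayesian_hebb_update(w, pre, post, reward, eta=0.01):
    """Purely local, reward-modulated Hebbian update of one synaptic weight.
    It fires only when pre- and postsynaptic units are coactive; its average
    fixed point is w = log(p / (1 - p)), the log odds of reward given
    coactivation (assumed parameterization for illustration)."""
    if pre and post:
        w += eta * (reward * (1.0 + np.exp(-w)) - 1.0)
    return w

# Sanity check of the fixed point: with reward probability p = 0.8 the weight
# should drift toward log(0.8 / 0.2), roughly 1.386.
w, p = 0.0, 0.8
for _ in range(50_000):
    w = bayesian_hebb_update(w, pre=1, post=1, reward=(rng.random() < p))
print(round(w, 3), round(np.log(p / (1 - p)), 3))
```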


2005, Vol. 17 (10), pp. 2106-2138
Author(s): Walter Senn, Stefano Fusi

Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performances with unbounded weights are regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.
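The constraints listed above can be illustrated with a short sketch: bounded excitatory weights, a global inhibitory term, a small threshold and learning rate, and plasticity gated by a mismatch between the actual and desired output. The bound handling and the specific constants are assumptions, not the values used in the paper.

```python
import numpy as np

def train_step(w, x, target, w_max=1.0, inhibition=0.5, theta=0.01, eta=0.005):
    """One error-gated Hebbian step for bounded excitatory weights.
    x is a 0/1 input pattern, target the desired binary output; the clipping,
    inhibition strength and threshold below are illustrative assumptions."""
    drive = w @ x - inhibition * x.sum()          # excitation vs. global inhibition
    y = int(drive > theta)
    if y != target:                               # plasticity only on a mismatch
        if target == 1:
            w = np.minimum(w + eta * x, w_max)    # potentiate active synapses, bounded above
        else:
            w = np.maximum(w - eta * x, 0.0)      # depress active synapses, bounded below
    return w, y
```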

