PSEUDO-RELAXATION LEARNING ALGORITHM FOR COMPLEX-VALUED ASSOCIATIVE MEMORY

2008 ◽  
Vol 18 (02) ◽  
pp. 147-156 ◽  
Author(s):  
MASAKI KOBAYASHI

HAM (Hopfield Associative Memory) and BAM (Bidirectional Associative Memory) are representative neural-network associative memories. The storage capacity under the commonly used Hebb rule is extremely low. To improve it, several learning methods have been introduced, for example pseudo-inverse matrix learning and gradient descent learning. Oh introduced the pseudo-relaxation learning algorithm to HAM and BAM. To accelerate it, Hattori proposed quick learning. Noest proposed CAM (Complex-valued Associative Memory), a complex-valued HAM. The storage capacity of CAM under the Hebb rule is also extremely low. Pseudo-inverse matrix learning and gradient descent learning have already been generalized to CAM. In this paper, we apply the pseudo-relaxation learning algorithm to CAM in order to improve its storage capacity.
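A complex-valued associative memory of the kind described can be sketched with multistate phasor neurons and Hebb-rule storage. This is a minimal illustration of the baseline the paper improves on; the pseudo-relaxation step itself is not shown, and the state count `K`, network size `N`, and function names are assumptions for the sketch, not the paper's notation.

```python
import numpy as np

def phasor(k, K):
    """Map state indices k in {0, ..., K-1} to points on the unit circle."""
    return np.exp(2j * np.pi * np.asarray(k) / K)

def hebb_weights(patterns):
    """Hebbian weights W = sum_mu x_mu x_mu^H (conjugate outer products)."""
    N = patterns.shape[1]
    W = np.zeros((N, N), dtype=complex)
    for x in patterns:
        W += np.outer(x, np.conj(x))
    return W

def recall(W, x, K):
    """One synchronous update: quantize each local field to the nearest phasor state."""
    h = W @ x
    k = np.round((np.angle(h) % (2 * np.pi)) / (2 * np.pi / K)).astype(int) % K
    return phasor(k, K)

K, N = 4, 8
rng = np.random.default_rng(0)
x = phasor(rng.integers(0, K, size=N), K)   # one random K-state pattern
W = hebb_weights(x[None, :])
y = recall(W, x, K)                         # stored pattern is a fixed point
print(np.allclose(y, x))
```

With a single stored pattern the local field is `N * x`, so quantization returns the pattern exactly; capacity collapses as more patterns are added, which is what the learning algorithms above address.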

Author(s):  
TAO WANG ◽  
XIAOLIANG XING ◽  
XINHUA ZHUANG

In this paper, we describe an optimal learning algorithm for designing one-layer neural networks by means of global minimization. Taking the properties of a well-defined neural network into account, we derive a cost function that quantitatively measures the goodness of the network. The connection weights are determined by a gradient descent rule that minimizes this cost function. The optimal learning algorithm is formulated as either an unconstrained or a constrained minimization problem. It ensures that each desired associative mapping is realized with the best noise-reduction ability in the sense of optimization. We also analytically investigate the storage capacity of the neural network, the degree of noise reduction for a desired associative mapping, and the convergence of the learning algorithm. Finally, extensive computer experimental results are presented.
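The cost-minimization design of a one-layer associative net can be sketched as follows. The paper's exact cost function is not reproduced here; a generic squared-hinge margin cost is used as a stand-in, and the margin `kappa`, learning rate, and pattern sizes are illustrative assumptions.

```python
import numpy as np

def train(X, Y, kappa=1.0, lr=0.05, epochs=200):
    """Gradient descent on C = sum_{mu,i} max(0, kappa - y_i h_i)^2, h = W x."""
    n_out, n_in = Y.shape[1], X.shape[1]
    W = np.zeros((n_out, n_in))
    for _ in range(epochs):
        H = X @ W.T                        # local fields, one row per pattern
        slack = np.maximum(0.0, kappa - Y * H)
        # dC/dW_ij = -2 * sum_mu slack_i^mu * y_i^mu * x_j^mu
        W += lr * 2.0 * (slack * Y).T @ X
        if not slack.any():                # every mapping stored with margin
            break
    return W

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(4, 16))  # 4 random bipolar key patterns
Y = rng.choice([-1.0, 1.0], size=(4, 16))  # desired associations
W = train(X, Y)
print(np.all(np.sign(X @ W.T) == Y))
```

Driving the margin slack to zero is what guarantees each desired mapping is realized; the margin parameter is one simple proxy for the noise-reduction ability the abstract optimizes.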


Author(s):  
TAO WANG

In this paper, a learning algorithm for Hopfield associative memories (HAMs) is presented. Based on a cost function that measures the goodness of the HAM, we determine the connection matrix by global minimization, solved with a gradient descent rule. This optimal learning method guarantees the storage of all training patterns with basins of attraction that are as large as possible. We also study the storage capacity of the HAM, the asymptotic stability of each training pattern, and its basin of attraction. A large number of computer simulations have been conducted to demonstrate its performance.
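A basin of attraction can be probed directly: corrupt a stored pattern and check whether recall dynamics restore it. The sketch below uses the simple Hebb rule for the weights; the paper's optimal learning would replace that step to enlarge the basins. Network size and the number of flipped bits are illustrative choices.

```python
import numpy as np

def hebb(patterns):
    """Hebb-rule Hopfield weights with zero self-connections."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def run(W, s, steps=20):
    """Synchronous recall to a fixed point (ties keep the previous state)."""
    for _ in range(steps):
        h = W @ s
        s_new = np.where(h == 0, s, np.sign(h))
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(2)
N = 64
x = rng.choice([-1.0, 1.0], size=N)   # one stored bipolar pattern
W = hebb(x[None, :])
probe = x.copy()
probe[:5] *= -1.0                     # corrupt 5 of the 64 bits
print(np.array_equal(run(W, probe), x))
```

Repeating this probe over many corruption levels estimates the basin radius, which is the quantity the optimal learning method tries to maximize.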


2009 ◽  
Vol 72 (16-18) ◽  
pp. 3771-3781 ◽  
Author(s):  
R. Savitha ◽  
S. Suresh ◽  
N. Sundararajan ◽  
P. Saratchandran

1994 ◽  
Vol 05 (01) ◽  
pp. 67-75 ◽  
Author(s):  
BYOUNG-TAK ZHANG

Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm using more sophisticated weight modification rules, whereby all the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative approach in which the learning proceeds on an increasing number of selected training examples, starting with a small training set. We derive a measure of criticality of examples and present an incremental learning algorithm that uses this measure to select a critical subset of given examples for solving the particular task. Our experimental results suggest that the method can significantly improve training speed and generalization performance in many real applications of neural networks. This method can be used in conjunction with other variations of gradient descent algorithms.
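The selection loop can be sketched on a toy task with a single logistic unit. The criticality measure used here, the absolute prediction error, is a stand-in for the measure the paper derives, and the data set, subset sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy separable task

def predict(w, X):
    """Single logistic unit."""
    return 1.0 / (1.0 + np.exp(-(X @ w)))

w = np.zeros(2)
selected = list(range(10))                    # start with a small training set
for _ in range(5):
    for _ in range(100):                      # gradient descent on the subset
        S = X[selected]
        w += 0.1 * S.T @ (y[selected] - predict(w, S))
    crit = np.abs(y - predict(w, X))          # criticality of every example
    worst = np.argsort(-crit)[:10]            # select the most critical ones
    selected = sorted(set(selected) | set(worst.tolist()))

acc = np.mean((predict(w, X) > 0.5) == (y > 0.5))
print(acc)
```

Because only the currently critical examples join the training set, each round spends its gradient steps where the error is concentrated, which is the source of the speedup the abstract reports.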

