LEARNING OF THE HOPFIELD ASSOCIATIVE MEMORY BY GLOBAL MINIMIZATION

Author(s):  
TAO WANG

In this paper, a learning algorithm for Hopfield associative memories (HAMs) is presented. Based on a cost function that measures the goodness of the HAM, we determine the connection matrix by global minimization, solved with a gradient descent rule. This optimal learning method guarantees the storage of all training patterns with basins of attraction that are as large as possible. We also study the storage capacity of the HAM, the asymptotic stability of each training pattern, and its basin of attraction. A large number of computer simulations have been conducted to demonstrate its performance.
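
The abstract does not reproduce the cost function itself, so the following is only a minimal sketch of the general idea: gradient descent on a hinge-style stability cost (an assumed stand-in for the paper's cost function) that drives every component of every training pattern above a stability margin, which both stores the patterns and tends to enlarge their basins of attraction.

```python
# Hedged sketch: learn a Hopfield connection matrix W by gradient descent on
# an assumed margin-based cost. A pattern component is "unstable" when its
# alignment h_i = x_i * (W x)_i falls below the margin kappa; pushing every
# alignment above kappa makes each stored pattern a stable fixed point.
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 8                                 # neurons, training patterns
X = rng.choice([-1.0, 1.0], size=(P, N))     # bipolar training patterns

W = np.zeros((N, N))
kappa, lr = 1.0, 0.01                        # stability margin, learning rate

for _ in range(500):
    grad = np.zeros_like(W)
    for x in X:
        h = x * (W @ x)                      # per-neuron alignment
        unstable = h < kappa                 # components below the margin
        # gradient of sum_i max(0, kappa - h_i) w.r.t. W is -x_i * x_j
        grad -= np.outer(x * unstable, x)
    W -= lr * grad
    np.fill_diagonal(W, 0.0)                 # no self-connections

print("all patterns stable:",
      all(np.array_equal(np.sign(W @ x), x) for x in X))
```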

Author(s):  
TAO WANG ◽  
XIAOLIANG XING ◽  
XINHUA ZHUANG

In this paper, we describe an optimal learning algorithm for designing one-layer neural networks by means of global minimization. Taking the properties of a well-defined neural network into account, we derive a cost function that quantitatively measures the goodness of the network. The connection weights are determined by a gradient descent rule that minimizes the cost function. The optimal learning algorithm is formulated as either an unconstrained or a constrained minimization problem. It ensures the realization of each desired associative mapping with the best noise-reduction ability in the sense of optimization. We also analytically investigate the storage capacity of the neural network, the degree of noise reduction for a desired associative mapping, and the convergence of the learning algorithm. Finally, a large number of computer experimental results are presented.
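
As a hedged illustration of the unconstrained formulation (the paper's precise cost function is not given in the abstract), the sketch below trains a one-layer network to realize a set of associative mappings by plain gradient descent on a squared-error cost.

```python
# Hedged sketch: a one-layer network realizing associative mappings
# y = f(W x), trained by gradient descent on an assumed squared-error cost.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, P = 32, 16, 10
X = rng.choice([-1.0, 1.0], size=(P, n_in))   # input key patterns
Y = rng.choice([-1.0, 1.0], size=(P, n_out))  # desired recollections

W = rng.normal(scale=0.01, size=(n_out, n_in))
lr = 0.05

for _ in range(2000):
    out = np.tanh(X @ W.T)                    # smooth stand-in for sign()
    err = out - Y                             # residual of each mapping
    # gradient of E = 0.5 * sum ||Y - tanh(W x)||^2 w.r.t. W
    W -= lr * ((err * (1.0 - out**2)).T @ X) / P

recalled = np.sign(X @ W.T)
print("mappings realized:", int((recalled == Y).all(axis=1).sum()), "/", P)
```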


2008 ◽  
Vol 18 (02) ◽  
pp. 147-156 ◽  
Author(s):  
MASAKI KOBAYASHI

HAM (Hopfield Associative Memory) and BAM (Bidirectional Associative Memory) are representative neural-network associative memories. Their storage capacity under the commonly used Hebb rule is extremely low. To improve it, several learning methods, such as pseudo-inverse matrix learning and gradient descent learning, have been introduced. Oh introduced the pseudo-relaxation learning algorithm for HAM and BAM, and Hattori proposed quick learning to accelerate it. Noest proposed CAM (Complex-valued Associative Memory), a complex-valued HAM. The storage capacity of CAM under the Hebb rule is also extremely low. Pseudo-inverse matrix learning and gradient descent learning have already been generalized to CAM. In this paper, we apply the pseudo-relaxation learning algorithm to CAM in order to improve its capacity.
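
For orientation, here is a minimal sketch of the complex-valued Hebb rule for a CAM in Noest's style; the pseudo-relaxation update that the paper actually applies is not reproduced, so this only illustrates the baseline whose low capacity the paper sets out to improve.

```python
# Hedged sketch of a complex-valued associative memory (CAM): neuron states
# are K-th roots of unity, the connection matrix is the Hebbian
# conjugate-outer-product sum, and recall quantizes each field to the
# nearest unit phasor.
import numpy as np

rng = np.random.default_rng(2)
N, P, K = 50, 3, 4                            # neurons, patterns, phase levels
phases = np.exp(2j * np.pi * np.arange(K) / K)
X = phases[rng.integers(0, K, size=(P, N))]   # stored phasor patterns

W = X.T @ X.conj() / N                        # complex Hebb rule
np.fill_diagonal(W, 0.0)                      # no self-connections

def quantize(h):
    # snap each complex field to the nearest of the K unit phasors
    return phases[np.argmax((h[:, None] * phases.conj()[None, :]).real, axis=1)]

x = X[0].copy()
for _ in range(10):                           # synchronous recall iterations
    x = quantize(W @ x)
print("pattern 0 recalled:", np.allclose(x, X[0]))
```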


2001 ◽  
Vol 11 (01) ◽  
pp. 79-88 ◽  
Author(s):  
JOHN A. BULLINARIA ◽  
PATRICIA M. RIDDELL

Setting up a neural network with a learning algorithm that determines how it can best operate is an efficient way to formulate control systems for many engineering applications, and is often much more feasible than direct programming. This paper examines three important aspects of this approach: the details of the cost function used with the gradient descent learning algorithm, how the resulting system depends on the initial pre-learning connection weights, and how it depends on the pattern of learning rates chosen for the different components of the system. We explore these issues through explicit simulations of a toy model that is a simplified abstraction of part of the human oculomotor control system. This allows us to compare our system with the one produced by human evolution and development. We can then consider how we might improve on the human system and apply what we have learnt to control systems that have no human analogue.
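
The idea of a "pattern of learning rates" can be illustrated with a toy sketch (unrelated to the paper's oculomotor model): ordinary gradient descent in which each component of the system is assigned its own learning rate.

```python
# Toy illustration: gradient descent on a mean-squared cost where each
# weight component gets its own learning rate, so different parts of the
# system adapt at different speeds.
import numpy as np

rng = np.random.default_rng(3)
inputs = rng.normal(size=(200, 2))
targets = inputs @ np.array([1.5, -0.8])     # hypothetical desired control law

w = np.zeros(2)                              # initial pre-learning weights
rates = np.array([0.1, 0.01])                # unequal per-component rates

for _ in range(1000):
    err = inputs @ w - targets
    grad = inputs.T @ err / len(inputs)      # gradient of mean squared cost
    w -= rates * grad                        # componentwise learning rates
print("learned weights:", w)                 # the slow component lags behind
```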


1995 ◽  
Vol 06 (04) ◽  
pp. 455-462 ◽  
Author(s):  
DONQ-LIANG LEE ◽  
WEN-JUNE WANG

A new concept called correlation significance is introduced for expanding the attraction regions around all the stored vectors (attractors) of an asynchronous auto-associative memory. Because the well-known outer-product rule adopts an equally weighted correlation matrix for the neuron connections, the attraction region around each attractor is not maximized. To maximize these attraction regions, we devise a rule under which the correlations between two different components of two different stored patterns are unequally weighted. Under this formalism, the connection matrix T of the asynchronous neural network is designed using a gradient descent approach. Additionally, an exponential-type error function is constructed so that the number of successfully stored vectors can be examined directly throughout the learning process. Finally, computer simulations demonstrate the efficiency and capability of this scheme.
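
The exact weighting scheme is not given in the abstract, but the following sketch illustrates the flavour of the approach: gradient descent on an assumed exponential-type error, with the number of successfully stored vectors examined at every step.

```python
# Hedged sketch: train the connection matrix T by gradient descent on an
# assumed exponential-type error E = sum_mu sum_i exp(-beta * x_i (T x)_i),
# which effectively weights the pattern correlations unequally, and count
# the successfully stored vectors during learning.
import numpy as np

rng = np.random.default_rng(4)
N, P, beta, lr = 40, 6, 1.0, 0.05
X = rng.choice([-1.0, 1.0], size=(P, N))

T = X.T @ X / N                          # outer-product rule as starting point
np.fill_diagonal(T, 0.0)

for step in range(200):
    H = X * (X @ T.T)                    # alignments h_i^mu = x_i^mu (T x^mu)_i
    E = np.exp(-beta * H)                # exponential per-component error
    # dE/dT_ij = -beta * sum_mu exp(-beta h_i) x_i x_j, so descend:
    T += lr * beta * (E * X).T @ X
    np.fill_diagonal(T, 0.0)
    stored = int((H > 0).all(axis=1).sum())  # examined directly at each step
print("stored vectors:", stored, "/", P)
```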


2021 ◽  
Vol 15 (3) ◽  
pp. 1-28 ◽  
Author(s):  
Xueyan Liu ◽  
Bo Yang ◽  
Hechang Chen ◽  
Katarzyna Musial ◽  
Hongxu Chen ◽  
...  

The stochastic blockmodel (SBM) is a widely used statistical network representation model with good interpretability, expressiveness, generalization, and flexibility, and it has become prevalent and important in network science in recent years. However, learning an optimal SBM for a given network is an NP-hard problem. This severely limits the application of SBMs to large-scale networks, because of the computational overhead of existing SBM models and their learning methods. Reducing the cost of SBM learning and making it scalable to large-scale networks, while maintaining the good theoretical properties of the SBM, remains an unresolved problem. In this work, we address this challenging task from the novel perspective of model redefinition. We propose a redefined SBM with a Poisson distribution, together with a block-wise learning algorithm that can efficiently analyse large-scale networks. Extensive validation on both artificial and real-world data shows that our method significantly outperforms state-of-the-art methods in terms of a reasonable trade-off between accuracy and scalability.
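
As a hedged sketch of the modelling idea (not the authors' block-wise learning algorithm), the snippet below evaluates the profile log-likelihood of a partition under a Poisson SBM, where the edge count between two nodes is Poisson with a rate that depends only on their blocks.

```python
# Hedged sketch: in a Poisson SBM, A_ij ~ Poisson(omega[z_i, z_j]), so the
# log-likelihood of a partition z reduces to block-level edge/pair counts.
import numpy as np

def poisson_sbm_loglik(A, z, K):
    """Profile log-likelihood of partition z under a Poisson SBM."""
    sizes = np.bincount(z, minlength=K).astype(float)
    edges = np.zeros((K, K))            # edge totals between block pairs
    pairs = np.zeros((K, K))            # node pairs between block pairs
    for r in range(K):
        for s in range(K):
            edges[r, s] = A[np.ix_(z == r, z == s)].sum()
            pairs[r, s] = sizes[r] * sizes[s]
    with np.errstate(divide="ignore", invalid="ignore"):
        rate = np.where(pairs > 0, edges / pairs, 0.0)  # MLE of omega
        ll = np.where(edges > 0, edges * np.log(rate), 0.0) - edges
    return ll.sum()

# toy usage: two dense blocks joined by few cross-block edges
rng = np.random.default_rng(5)
z = np.repeat([0, 1], 20)
rates = np.array([[0.5, 0.05], [0.05, 0.5]])
A = rng.poisson(rates[z][:, z])
print(poisson_sbm_loglik(A, z, K=2))
```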


2021 ◽  
Vol 10 (1) ◽  
pp. 21 ◽  
Author(s):  
Omar Nassef ◽  
Toktam Mahmoodi ◽  
Foivos Michelinakis ◽  
Kashif Mahmood ◽  
Ahmed Elmokashfi

This paper presents a data-driven framework for performance optimisation of Narrow-Band IoT user equipment. The proposed framework is an edge micro-service that suggests one-time configurations to user equipment communicating with a base station. Suggested configurations are delivered by a Configuration Advocate to improve energy consumption, delay, throughput, or a combination of these metrics, depending on the user-end device and the application. Reinforcement learning utilising gradient descent and a genetic algorithm is adopted synchronously with machine learning and deep learning algorithms to predict the environmental states and suggest an optimal configuration. The results highlight the adaptability of the deep neural network in predicting intermediary environmental states; they also show the superior performance of the genetic reinforcement learning algorithm in performance optimisation.
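
A hedged sketch of the genetic-search component follows; the configuration parameters and the reward function are hypothetical stand-ins, since the paper's actual NB-IoT parameter space and learned predictors are not specified in the abstract.

```python
# Hedged sketch: a genetic algorithm evolving candidate configurations
# against a predicted reward that trades off energy, delay and throughput.
# The search space and reward below are hypothetical, not the paper's.
import random

random.seed(6)
TX_POWER = [0, 5, 10, 15, 20, 23]        # dBm levels (assumed search space)
REPETITIONS = [1, 2, 4, 8, 16]           # assumed repetition settings

def predicted_reward(cfg):
    """Stand-in for the framework's learned state/reward predictor."""
    power, reps = cfg
    throughput = power * 0.4 + reps * 0.2
    energy = power * 0.5 + reps * 0.6
    return throughput - energy           # weighted combination of metrics

def mutate(cfg):
    # flip one gene: either the power level or the repetition count
    if random.random() < 0.5:
        return (random.choice(TX_POWER), cfg[1])
    return (cfg[0], random.choice(REPETITIONS))

pop = [(random.choice(TX_POWER), random.choice(REPETITIONS)) for _ in range(20)]
for _ in range(30):                      # generations
    pop.sort(key=predicted_reward, reverse=True)
    elite = pop[:10]                     # selection
    pop = elite + [mutate(random.choice(elite)) for _ in range(10)]
print("suggested configuration:", max(pop, key=predicted_reward))
```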


1989 ◽  
Vol 36 (5) ◽  
pp. 762-766 ◽  
Author(s):  
M. Verleysen ◽  
B. Sirletti ◽  
A. Vandemeulebroecke ◽  
P.G.A. Jespers
