ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH

2006 · Vol 16 (07) · pp. 1929-1950
Author(s): GEORGE D. MAGOULAS, MICHAEL N. VRAHATIS

Networks of neurons can perform computations that even modern computers find very difficult to simulate. Although most existing artificial neurons and artificial neural networks are considered biologically unrealistic, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, emphasizing first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
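As an illustration of the kind of first-order adaptive scheme this survey covers, the sketch below trains a small feedforward network with batch backpropagation and a "bold driver" step-size rule, which adapts the learning rate deterministically from the observed error. The network size, data, and constants are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch (not the authors' method): batch backpropagation with a
# "bold driver" adaptive learning rate, one simple first-order scheme of the
# kind surveyed in the paper. Sizes and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

eta, prev_loss = 0.5, np.inf
for epoch in range(5000):
    h, out = forward(X)
    loss = 0.5 * np.mean((out - y) ** 2)
    # Deterministic step-size adaptation from the observed batch error:
    # grow eta after a successful step, shrink it after an increase.
    eta = eta * 1.05 if loss < prev_loss else eta * 0.5
    prev_loss = loss
    # Standard backpropagation gradients for the squared error.
    d_out = (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_out;  b2 -= eta * d_out.sum(0)
    W1 -= eta * X.T @ d_h;    b1 -= eta * d_h.sum(0)

print(forward(X)[1].round(2))   # should approach [[0], [1], [1], [0]]
```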

1994 · Vol 6 (2) · pp. 319-333
Author(s): Michel Benaim

Feedforward neural networks with a single hidden layer of normalized Gaussian units are studied. It is proved that such neural networks are capable of universal approximation in a satisfactory sense. Then, a hybrid learning rule in the style of Moody and Darken, which combines unsupervised learning of the hidden units and supervised learning of the output units, is considered. Using the method of ordinary differential equations for adaptive algorithms (the ODE method), it is shown that the asymptotic properties of the learning rule can be studied in terms of an autonomous cascade of dynamical systems. Recent results due to Hirsch about cascades are then used to show the asymptotic stability of the learning rule.
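A minimal sketch of a hybrid rule of this kind, under illustrative assumptions (the unit count, Gaussian width, step sizes, and toy regression target are not from the paper): the centers of the normalized Gaussian units move by an unsupervised online k-means-style update, while the linear output weights follow a supervised LMS rule.

```python
# Hedged sketch of a Moody-Darken-style hybrid rule for a single hidden layer
# of normalized Gaussian units: centers adapt by unsupervised (online k-means)
# updates while the linear output weights follow a supervised LMS rule.
import numpy as np

rng = np.random.default_rng(1)
K, sigma = 10, 0.3                    # hidden units and Gaussian width (assumed)
centers = rng.uniform(0, 1, size=(K, 1))
w = np.zeros(K)                       # linear output weights

def activations(x):
    g = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))
    return g / g.sum()                # normalized Gaussian units

alpha, beta = 0.05, 0.2               # unsupervised / supervised step sizes
for t in range(20000):
    x = rng.uniform(0, 1, size=(1,))
    target = np.sin(2 * np.pi * x[0]) # toy regression target
    phi = activations(x)
    # Unsupervised step: nudge the winning center toward the input.
    k = np.argmax(phi)
    centers[k] += alpha * (x - centers[k])
    # Supervised step: LMS update of the output layer.
    err = target - w @ phi
    w += beta * err * phi

xs = np.linspace(0, 1, 5).reshape(-1, 1)
print([round(float(w @ activations(x)), 2) for x in xs])
```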


2014 · pp. 35-39
Author(s): Viktor Lokazyuk, Viktor Cheshun, Vitaliy Chornenkiy

The basic principles of a technique for applying a three-layer, fully connected feedforward artificial neural network to the execution of adaptive algorithms for testing digital microprocessor devices are considered. A method for changing the weight coefficients and thresholds of the artificial neurons while a hardware-implemented artificial neural network is in operation is presented. Applying this method makes it possible to implement adaptive testing algorithms of high complexity with the limited hardware resources of an artificial neural network.
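The following is a loose illustrative sketch, not the authors' hardware design: a three-layer, fully connected network of hard-threshold neurons whose weight coefficients and thresholds can be reloaded at run time, so a single fixed structure can realize different testing behaviors. All names and sizes here are assumptions.

```python
# Illustrative sketch only (the paper targets a hardware realization):
# a three-layer fully connected network whose weights and per-neuron
# thresholds can be rewritten in operation mode.
import numpy as np

class ThreeLayerNet:
    def __init__(self, sizes=(8, 6, 4), rng=np.random.default_rng(2)):
        self.weights = [rng.normal(size=(a, b))
                        for a, b in zip(sizes[:-1], sizes[1:])]
        self.thresholds = [np.zeros(b) for b in sizes[1:]]

    def load(self, weights, thresholds):
        """Rewrite weights/thresholds at run time, much as a hardware
        network would reload its coefficient memory."""
        self.weights, self.thresholds = weights, thresholds

    def forward(self, x):
        for W, theta in zip(self.weights, self.thresholds):
            x = (x @ W >= theta).astype(float)   # hard-threshold neurons
        return x

net = ThreeLayerNet()
print(net.forward(np.ones(8)))   # binary response of the output layer
```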


2002 · Vol 12 (01) · pp. 45-67
Author(s): M. R. MEYBODI, H. BEIGY

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes the parameters learning rate (η), momentum factor (α), and steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. In this paper, we present several classes of learning automata based solutions to the problem of adapting the BP algorithm's parameters. By interconnecting learning automata with the feedforward neural network, we use a learning automata scheme to adjust the parameters η, α, and λ based on observation of the random response of the network. One important aspect of the proposed schemes is their ability to escape from local minima with high probability during the training period. The feasibility of the proposed methods is shown through simulations on several problems.
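As a hedged sketch of the general idea (not the authors' exact scheme), the code below lets a linear reward-inaction (L_R-I) automaton choose η from a small action set while the network trains by BP with momentum α and sigmoid steepness λ; the action set, reward step, and network are illustrative assumptions.

```python
# Sketch: a linear reward-inaction learning automaton picks the learning
# rate eta from a finite action set; its action probabilities are reinforced
# whenever the chosen eta reduces the network's error. Momentum alpha and
# steepness lambda appear as the other BP parameters the paper adapts.
import numpy as np

rng = np.random.default_rng(3)
etas = np.array([0.05, 0.1, 0.5, 1.0])   # automaton's action set (assumed)
p = np.full(len(etas), 0.25)             # action probabilities
a_reward = 0.1                           # L_R-I reward step
alpha, lam = 0.9, 1.0                    # momentum and steepness

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
sig = lambda z: 1 / (1 + np.exp(-lam * z))   # steepness enters the sigmoid

prev_err = np.inf
for epoch in range(5000):
    i = rng.choice(len(etas), p=p)           # automaton selects an action
    eta = etas[i]
    h = sig(X @ W1 + b1); out = sig(h @ W2 + b2)
    err = 0.5 * np.mean((out - y) ** 2)
    # Backpropagation gradients; lam appears via the sigmoid's derivative.
    d_out = (out - y) * lam * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * lam * h * (1 - h)
    vW2 = alpha * vW2 - eta * h.T @ d_out; vb2 = alpha * vb2 - eta * d_out.sum(0)
    vW1 = alpha * vW1 - eta * X.T @ d_h;   vb1 = alpha * vb1 - eta * d_h.sum(0)
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1   # BP step with momentum
    if err < prev_err:                       # reward: reinforce action i
        p += a_reward * (np.eye(len(etas))[i] - p)
    prev_err = err                           # inaction on penalty (L_R-I)

print(out.round(2))
```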

