Neighborhood based modified backpropagation algorithm using adaptive learning parameters for training feedforward neural networks

2009 ◽  
Vol 72 (16-18) ◽  
pp. 3915-3921 ◽  
Author(s):  
T. Kathirvalavakumar ◽  
S. Jeyaseeli Subavathi

2006 ◽
Vol 16 (07) ◽  
pp. 1929-1950 ◽  
Author(s):  
GEORGE D. MAGOULAS ◽  
MICHAEL N. VRAHATIS

Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, emphasizing first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
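As an illustrative sketch of the first-order adaptive-learning-rate idea surveyed here, consider the classic "bold driver" heuristic (an assumption for illustration, not any specific algorithm from this paper): grow the step size after a step that reduces the error, shrink it and reject the step otherwise.

```python
import numpy as np

# Illustrative sketch only: plain gradient descent with an adaptive
# learning rate ("bold driver" heuristic). The rate grows after a step
# that reduces the error and shrinks (with the step rejected) otherwise.
def adaptive_gd(f, grad, w0, eta=0.1, inc=1.1, dec=0.5, steps=100):
    w = np.asarray(w0, dtype=float)
    err = f(w)
    for _ in range(steps):
        w_new = w - eta * grad(w)
        new_err = f(w_new)
        if new_err <= err:
            w, err = w_new, new_err  # accept the step
            eta *= inc               # and try a bolder rate next time
        else:
            eta *= dec               # reject the step, retry more cautiously
    return w

# Toy quadratic error surface E(w) = ||w||^2, gradient 2w.
w_min = adaptive_gd(lambda w: float(np.sum(w**2)), lambda w: 2.0 * w, [1.0, -2.0])
```

Rejecting unsuccessful steps, rather than merely shrinking the rate after them, keeps the error monotonically non-increasing, which is what makes this a deterministic-optimization view of adaptive training.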


2002 ◽  
Vol 12 (01) ◽  
pp. 45-67 ◽  
Author(s):  
M. R. MEYBODI ◽  
H. BEIGY

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes three parameters: the learning rate (η), the momentum factor (α), and the steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. In this paper, we present several classes of learning automata based solutions to the problem of adapting the BP algorithm parameters. By interconnecting learning automata with the feedforward neural network, we use a learning automata scheme to adjust the parameters η, α, and λ based on the observed random response of the neural network. One important aspect of the proposed schemes is their ability to escape from local minima with high probability during the training period. The feasibility of the proposed methods is shown through simulations on several problems.
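A minimal sketch of where the three parameters enter a BP update, assuming a single sigmoid neuron trained on a squared-error cost (the learning-automata scheme that adapts η, α, and λ during training is not reproduced here; all names and values are illustrative):

```python
import numpy as np

# Sketch: one sigmoid neuron, batch gradient descent with momentum.
# eta = learning rate, alpha = momentum factor, lam = sigmoid steepness.
def train_neuron(X, y, eta=0.5, alpha=0.8, lam=1.0, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    dw_prev = np.zeros_like(w)
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-lam * (X @ w)))    # steepness parameter lambda
        err = out - y
        grad = X.T @ (err * out * (1.0 - out) * lam)  # dE/dw for E = 0.5*sum(err^2)
        dw = -eta * grad + alpha * dw_prev            # learning rate eta, momentum alpha
        w += dw
        dw_prev = dw
    return w
```

Training a two-input OR gate (with the bias folded in as a constant third input) illustrates the update converging to a separating weight vector.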


Athenea ◽  
2021 ◽  
Vol 2 (5) ◽  
pp. 29-34
Author(s):  
Alexander Caicedo ◽  
Anthony Caicedo

The era of the technological revolution increasingly encourages the development of technologies that facilitate people's daily activities in one way or another, driving great advances in information processing. The purpose of this work is to implement a neural network that classifies a person's emotional states based on different human gestures. A database is used with information on students from the PUCE-E School of Computer Science and Engineering. This information consists of images that express the students' gestures, against which the comparative analysis with the input data is carried out. The setting of this work proposes that the project be implemented as a multilayer neural network. Multilayer feedforward neural networks possess a number of properties that make them particularly suitable for complex pattern classification problems [8]. The back-propagation algorithm [4], applied to a feedforward neural network, was used to solve the emotion classification task.

Keywords: image processing, neural networks, gestures, back-propagation, feedforward, classification, emotions.

References
[1] S. Gangwar, S. Shukla and D. Arora. "Human Emotion Recognition by Using Pattern Recognition Network". Journal of Engineering Research and Applications, Vol. 3, Issue 5, pp. 535-539, 2013.
[2] K. Rohit. "Back Propagation Neural Network based Emotion Recognition System". International Journal of Engineering Trends and Technology (IJETT), Vol. 22, No. 4, 2015.
[3] S. Eishu, K. Ranju and S. Malika. "Speech Emotion Recognition using BFO and BPNN". International Journal of Advances in Science and Technology (IJAST), ISSN 2348-5426, Vol. 2, Issue 3, 2014.
[4] A. Fiszelew and R. García-Martínez. "Generación automática de redes neuronales con ajuste de parámetros basado en algoritmos genéticos" [Automatic generation of neural networks with parameter tuning based on genetic algorithms]. Revista del Instituto Tecnológico de Buenos Aires, 26, pp. 76-101, 2002.
[5] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard and L. Jackel. "Handwritten digit recognition with a back-propagation network". In Advances in Neural Information Processing Systems, pp. 396-404, 1990.
[6] G. Bebis and M. Georgiopoulos. "Feed-forward neural networks". IEEE Potentials, 13(4), pp. 27-31, 1994.
[7] G. Huang, Q. Zhu and C. Siew. "Extreme learning machine: a new learning scheme of feedforward neural networks". In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, Vol. 2, pp. 985-990, 2004.
[8] D. Montana and L. Davis. "Training Feedforward Neural Networks Using Genetic Algorithms". In IJCAI, Vol. 89, pp. 762-767, 1989.
[9] I. Sutskever, O. Vinyals and Q. Le. "Sequence to sequence learning with neural networks". In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
[10] J. Schmidhuber. "Deep learning in neural networks: An overview". Neural Networks, 61, pp. 85-117, 2015.
[11] R. Santos, M. Rupp, S. Bonzi and A. Fileti. "Comparación entre redes neuronales feedforward de múltiples capas y una red de función radial para detectar y localizar fugas en tuberías que transportan gas" [Comparison between multilayer feedforward neural networks and a radial basis function network for detecting and locating leaks in gas pipelines]. Chemical Engineering Transactions, 32, pp. 1375-1380, 2013.
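As a generic illustration of the kind of model described, the following sketches a two-layer feedforward network trained with plain back-propagation. The toy XOR data stands in for the gesture images; layer sizes and hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Generic sketch: multilayer feedforward network, batch back-propagation,
# sigmoid units, squared-error cost. Not the PUCE-E system itself.
def train_mlp(X, y, hidden=8, eta=0.5, epochs=10000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, y.shape[1]))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1)                        # forward pass, hidden layer
        out = sig(h @ W2)                      # forward pass, output layer
        d_out = (out - y) * out * (1.0 - out)  # output-layer delta
        d_h = (d_out @ W2.T) * h * (1.0 - h)   # delta propagated back to hidden layer
        W2 -= eta * h.T @ d_out                # gradient steps on both weight matrices
        W1 -= eta * X.T @ d_h
    return W1, W2
```

A real emotion classifier would replace the toy inputs with image feature vectors and use one output unit per emotion class, but the backward pass has the same structure.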

