Bayesian Bidirectional Backpropagation Learning

Author(s):  
Olaoluwa Adigun ◽  
Bart Kosko


Author(s):  
Paul W. Hollis ◽  
John S. Harper ◽  
John J. Paulos

1990 ◽ Vol 2 (3) ◽ pp. 363-373

This paper presents a study of the precision constraints imposed by a hybrid chip architecture with analog neurons and digital backpropagation calculations. Conversions between the analog and digital domains, together with weight-storage restrictions, impose precision limits on both the analog and the digital calculations. Simulations show that a learning system of this kind can be implemented despite the limited resolution of the analog circuits and the use of fixed-point arithmetic to implement the backpropagation algorithm.
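
The sketch below illustrates the kind of simulation described: a small multilayer network trained by backpropagation, with quantized activations standing in for limited analog resolution and fixed-point weight storage standing in for the digital side of the hardware. The XOR task, the network size, and the bit widths are illustrative assumptions, not the chip architecture studied in the paper.

import numpy as np

def quantize(x, n_bits, x_max):
    """Round x to a signed fixed-point grid of n_bits over [-x_max, x_max]."""
    step = 2.0 * x_max / (2 ** n_bits)
    return np.clip(np.round(x / step) * step, -x_max, x_max)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: XOR with a 2-4-1 network (bias handled by an appended constant input).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
Xb = np.hstack([X, np.ones((4, 1))])          # inputs plus bias column

W1 = rng.uniform(-0.5, 0.5, (3, 4))           # input+bias -> hidden
W2 = rng.uniform(-0.5, 0.5, (5, 1))           # hidden+bias -> output
LR = 0.5
ANALOG_BITS = 6    # assumed resolution of the analog neuron outputs (illustrative)
WEIGHT_BITS = 12   # assumed fixed-point resolution of digital weight storage

for epoch in range(20000):
    # Forward pass: quantize activations to mimic limited analog resolution.
    h = quantize(sigmoid(Xb @ W1), ANALOG_BITS, x_max=1.0)
    hb = np.hstack([h, np.ones((4, 1))])
    out = quantize(sigmoid(hb @ W2), ANALOG_BITS, x_max=1.0)

    # Backward pass: plain backprop computed on the quantized activations.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1.0 - h)

    # Store updated weights in fixed point to mimic limited digital precision.
    W2 = quantize(W2 - LR * hb.T @ d_out, WEIGHT_BITS, x_max=8.0)
    W1 = quantize(W1 - LR * Xb.T @ d_h, WEIGHT_BITS, x_max=8.0)

    if epoch % 5000 == 0:
        print(f"epoch {epoch:5d}  SSE = {np.sum((out - y) ** 2):.4f}")

Monitoring the printed SSE shows whether learning still progresses once updates smaller than the fixed-point step size begin to round to zero, which is the kind of resolution effect the paper examines.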


Author(s):  
Ade chandra Saputra

One of the weaknesses of backpropagation in artificial neural networks (ANNs) is getting stuck in local minima. The learning rate is an important parameter because it determines how fast the ANN learns. This research develops a method for finding the value of the learning rate with a genetic algorithm when neural-network learning stalls and the error has neither reached the stopping criterion nor converged. The genetic algorithm determines the learning rate through a fitness function whose inputs are the ANN weights, the error gradient, and the bias. Evaluating the fitness function yields an error value for each candidate learning rate, where each candidate corresponds to an individual of the genetic algorithm. Each individual is scored by its sum of squared errors (SSE), and the individual with the smallest SSE is the best. The selected learning rate is then used to continue learning, lowering the error and speeding convergence. The final result of this study is a new solution to the common problem of choosing learning parameters in backpropagation. The results indicate that the genetic-algorithm method can reduce the SSE when ANN learning has stagnated at a large error or become stuck in a local minimum.
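
A minimal sketch of the genetic-algorithm idea follows: a population of candidate learning rates is evolved, each candidate's fitness is the SSE obtained after a few trial backpropagation updates from the current (stalled) weights, and the rate with the smallest SSE is returned so that training can resume with it. The train_step callback and its signature, the selection/crossover/mutation operators, and the logistic-regression demo are assumptions made for illustration; the paper's fitness function additionally takes the weights, gradient error, and bias as explicit inputs.

import numpy as np

def sse_after_trial(train_step, weights, lr, n_trials=5):
    """Fitness of a candidate learning rate: SSE after a few trial updates."""
    w = [w_.copy() for w_ in weights]
    sse = float("inf")
    for _ in range(n_trials):
        w, sse = train_step(w, lr)
    return sse

def ga_select_learning_rate(train_step, weights, pop_size=20, generations=10,
                            lr_min=1e-4, lr_max=1.0, mutation=0.1, rng=None):
    """Evolve a population of learning-rate candidates; lower SSE is fitter."""
    if rng is None:
        rng = np.random.default_rng(0)
    pop = rng.uniform(lr_min, lr_max, pop_size)
    for _ in range(generations):
        fitness = np.array([sse_after_trial(train_step, weights, lr) for lr in pop])
        order = np.argsort(fitness)                  # ascending SSE: best candidates first
        parents = pop[order[: pop_size // 2]]        # selection: keep the better half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(parents, size=2)       # crossover: average two parents
            child = 0.5 * (a + b) * np.exp(mutation * rng.standard_normal())  # mutate
            children.append(float(np.clip(child, lr_min, lr_max)))
        pop = np.concatenate([parents, children])
    fitness = np.array([sse_after_trial(train_step, weights, lr) for lr in pop])
    return float(pop[np.argmin(fitness)])            # learning rate with the smallest SSE

if __name__ == "__main__":
    # Tiny demo: logistic regression on synthetic data, used only to exercise the GA.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

    def train_step(weights, lr):
        (w,) = weights
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ ((p - y) * p * (1.0 - p))   # one gradient-descent update
        sse = float(np.sum((1.0 / (1.0 + np.exp(-(X @ w))) - y) ** 2))
        return [w], sse

    best_lr = ga_select_learning_rate(train_step, [np.zeros(3)])
    print("learning rate selected by the GA:", best_lr)

In the setting described by the abstract, a call like ga_select_learning_rate would be triggered only when the SSE has plateaued, and ordinary backpropagation would then continue from the same weights with the returned learning rate.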

