Improving Steepest Descent Method by Learning Rate Annealing and Momentum in Neural Network

Author(s): Udai Bhan Trivedi, Priti Mishra
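
As a rough illustration of the two modifications the title names, the sketch below adds momentum and a simple learning-rate annealing schedule to a plain steepest descent update. The quadratic objective, the 1/(1 + decay·t) schedule, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def annealed_momentum_descent(grad, x0, lr0=0.1, beta=0.9, decay=0.005, steps=500):
    """Steepest descent with momentum and a 1/(1 + decay*t) annealing schedule."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for t in range(steps):
        lr = lr0 / (1.0 + decay * t)   # annealing: step size shrinks over time
        v = beta * v - lr * grad(x)    # momentum: reuse the previous direction
        x = x + v
    return x

# Example: an ill-conditioned quadratic, where momentum helps most
A = np.diag([1.0, 25.0])
x_min = annealed_momentum_descent(lambda x: A @ x, x0=[5.0, 5.0])
print(x_min)  # approaches the minimizer [0, 0]
```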
1994, Vol. 05 (04), pp. 299-312
Author(s): Robert N. Sharpe, Mo-Yuen Chow

The neural network designer must take many factors into consideration when selecting an appropriate network configuration. The performance of a given configuration is influenced by several criteria: accuracy, training time, sensitivity, and the number of neurons used in the implementation. Using a cost function based on these four criteria, the various network paradigms can be evaluated relative to one another. If mathematical models of the evaluation criteria as functions of the network configuration were known, traditional techniques (such as the steepest descent method) could be used to determine the optimal configuration. The difficulty in selecting an appropriate configuration lies precisely in determining those mathematical models. This difficulty can be avoided by using fuzzy logic techniques, rather than traditional techniques, to perform the network optimization. Fuzzy logic avoids the need for a detailed mathematical description of the relationship between network performance and network configuration by using heuristic reasoning and linguistic variables. A comparison is made between the fuzzy logic approach and the steepest descent method for the optimization of the cost function. The fuzzy optimization procedure could be applied to other systems for which a priori information about their characteristics is available.
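To make the traditional route concrete, the sketch below builds a scalar cost from the four criteria and minimizes it by steepest descent with a finite-difference gradient. The closed-form expressions for accuracy, training time, and sensitivity are invented stand-ins, not models from the paper; the abstract's point is precisely that such models are hard to obtain in practice.

```python
import numpy as np

W = (1.0, 0.5, 0.3, 0.01)  # weights on the four criteria (assumed values)

def cost(n_neurons):
    """Composite cost over the four criteria; the closed forms are stand-ins."""
    accuracy_error = 1.0 / (1.0 + 0.2 * n_neurons)  # error falls as the net grows
    training_time = 0.05 * n_neurons ** 1.5         # training slows as it grows
    sensitivity = 0.02 * n_neurons                  # larger nets are more sensitive
    return (W[0] * accuracy_error + W[1] * training_time
            + W[2] * sensitivity + W[3] * n_neurons)

def steepest_descent(x0, lr=2.0, steps=300, eps=1e-4):
    """Minimize cost() with a finite-difference gradient, relaxing the
    neuron count to a continuous variable."""
    x = float(x0)
    for _ in range(steps):
        g = (cost(x + eps) - cost(x - eps)) / (2.0 * eps)
        x = max(1.0, x - lr * g)  # keep at least one neuron
    return x

best = steepest_descent(x0=50.0)
print(f"about {best:.1f} neurons, cost {cost(best):.3f}")
```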


2020, Vol. 10 (6), pp. 2036
Author(s): Israel Elias, José de Jesús Rubio, David Ricardo Cruz, Genaro Ochoa, Juan Francisco Novoa, ...

The steepest descent method is frequently used for neural network tuning, and mini-batches are commonly used to improve the tuning that steepest descent achieves. Nevertheless, steepest descent with mini-batches can be slow to reach a minimum. A Hessian-based update can reach a minimum more quickly than steepest descent, and this goal is easier to achieve when the Hessian is combined with mini-batches. In this article, the Hessian is combined with mini-batches for neural network tuning, and the discussed algorithm is applied to electrical demand prediction.
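A minimal sketch of the idea, assuming a linear least-squares model as a stand-in for the network: each mini-batch supplies both a gradient and a (damped) Hessian, and the update solves the Newton system instead of taking a plain gradient step. The data, damping value, and batch size are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for the network tuning problem
X = rng.normal(size=(1024, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1024)

def minibatches(X, y, size=64):
    idx = rng.permutation(len(y))
    for s in range(0, len(y), size):
        b = idx[s:s + size]
        yield X[b], y[b]

w = np.zeros(5)
damping = 1e-3  # keeps the mini-batch Hessian invertible
for epoch in range(5):
    for Xb, yb in minibatches(X, y):
        grad = Xb.T @ (Xb @ w - yb) / len(yb)          # mini-batch gradient
        H = Xb.T @ Xb / len(yb) + damping * np.eye(5)  # mini-batch Hessian
        w -= np.linalg.solve(H, grad)                  # Newton-type step
print("weight error:", np.linalg.norm(w - w_true))
```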


2016, Vol. 18 (2), pp. 111-121
Author(s): Vandana Sakhre, Sanjeev Jain, V. S. Sapkal, D. P. Agarwal

In this research work, neural-network-based single-loop and cascaded control strategies, based on a Feed Forward Neural Network trained with the Back Propagation (FBPNN) algorithm, are developed to control the product composition of reactive distillation. The FBPNN is modified using the steepest descent method; this modification is suggested for optimization of the error function. The weights connecting the input and hidden layers, and the hidden and output layers, are optimized using the steepest descent method, which minimizes the mean square error and hence improves the response of the system. The FBPNN serves as an inferential soft sensor for composition estimation in reactive distillation, using temperature as a secondary process variable. The optimized temperature profile of the reactive distillation column is selected as the input to the neural network. The reboiler heat duty is selected as the manipulated variable in the single-loop control strategy, while the bottom-stage temperature T9 is the manipulated variable in the cascaded control strategy. It has been observed that the modified FBPNN gives the minimum mean square error, and the results show that the cascaded control structure gives an improved dynamic response compared with the single-loop control strategy.
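A minimal sketch of the soft-sensor idea under stated assumptions: a one-hidden-layer feed-forward network maps a synthetic, made-up temperature profile to product composition, and both weight sets, input-to-hidden and hidden-to-output, are updated by steepest descent on the mean square error via backpropagation. Architecture sizes, the data-generating function, and the learning rate are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical soft-sensor data: tray temperatures as the secondary
# variable, product composition as the target (an invented mapping)
T = rng.uniform(350.0, 400.0, size=(200, 3))                  # temperature profile
x_comp = 1.0 / (1.0 + np.exp(-(T.mean(axis=1) - 375) / 5))    # composition
x_comp = x_comp[:, None]

Tn = (T - T.mean(axis=0)) / T.std(axis=0)  # normalize inputs

# The two weight sets the abstract optimizes by steepest descent
W1 = rng.normal(scale=0.5, size=(3, 8))    # input  -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1))    # hidden -> output
lr = 0.1

for epoch in range(2000):
    h = np.tanh(Tn @ W1)                   # hidden-layer activation
    y = h @ W2                             # predicted composition
    err = y - x_comp
    mse = np.mean(err ** 2)
    # Backpropagation: gradients of the MSE w.r.t. both weight sets
    dW2 = h.T @ err * (2 / len(Tn))
    dh = err @ W2.T * (1 - h ** 2)         # tanh derivative is 1 - h^2
    dW1 = Tn.T @ dh * (2 / len(Tn))
    W1 -= lr * dW1                         # steepest descent update
    W2 -= lr * dW2
print(f"final MSE: {mse:.5f}")
```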

