The Improved Training Algorithm of Deep Learning with Self-Adaptive Learning Rate

Author(s):  
Sutit Ongart ◽  
Kietikul Jearanaitanakij ◽  
Jirapat Sangthong

2008 ◽ 
Vol 2008 ◽  
pp. 1-8 ◽  
Author(s):  
Talel Korkobi ◽  
Mohamed Djemel ◽  
Mohamed Chtourou

This paper treats some problems related to nonlinear system identification. A stability-analyzed neural network model for identifying nonlinear dynamic systems is presented. A constrained adaptive stable backpropagation (CSBP) updating law is derived and used in the proposed identification approach. The backpropagation training algorithm is modified to obtain an adaptive learning rate that guarantees convergence stability: the learning rule is the backpropagation algorithm under the condition that the learning rate belongs to a specified range defining the stability domain. When this condition is satisfied, unstable phenomena during the learning process are avoided. A Lyapunov analysis yields the expression of a convenient adaptive learning rate verifying the convergence stability criteria. Finally, the elaborated training algorithm is applied in several simulations; the results confirm the effectiveness of the CSBP algorithm.
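The abstract does not give the paper's Lyapunov-derived learning-rate expression, but the core idea of clamping the learning rate into a stability range can be illustrated on a simple quadratic loss, where the stable range is known in closed form (for f(w) = 0.5·a·w², gradient descent diverges once the rate reaches 2/a). This is a minimal sketch of that idea, not the paper's identifier or update law:

```python
def train_with_stable_lr(w0, a=4.0, eta_nominal=0.6, steps=50):
    """Gradient descent on f(w) = 0.5 * a * w**2, with the learning rate
    clamped into the stability range (0, 2/a) at every step.

    Illustrative only: the paper derives its stability range for a neural
    identifier via a Lyapunov analysis; the quadratic loss simply makes the
    range explicit.
    """
    eta_max = 2.0 / a
    w = w0
    for _ in range(steps):
        grad = a * w
        # Keep a safety margin strictly inside the stability domain.
        eta = min(eta_nominal, 0.9 * eta_max)
        w = w - eta * grad
    return w
```

Here the nominal rate 0.6 exceeds the stability bound 2/a = 0.5 and would make the iteration diverge; the clamp keeps every step inside the stable range, so the weight contracts toward zero.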


2021 ◽  
Vol 11 (20) ◽  
pp. 9468 ◽ 
Author(s):  
Yunyun Sun ◽  
Yutong Liu ◽  
Haocheng Zhou ◽  
Huijuan Hu

Deep learning has proved promising in various domains, and the automatic identification of plant diseases with deep convolutional neural networks is currently attracting much attention. This article extends the stochastic gradient descent with momentum optimizer and presents a discount momentum (DM) deep learning optimizer for plant disease identification. To examine the recognition and generalization capability of the DM optimizer, we discuss hyper-parameter tuning and convolutional neural network models on the PlantVillage dataset, and further conduct comparison experiments against popular non-adaptive learning rate methods. The proposed approach achieves an average validation accuracy of no less than 97% for plant disease prediction on several state-of-the-art deep learning models and shows low sensitivity to hyper-parameter settings. Experimental results demonstrate that the DM method delivers higher identification performance while remaining competitive with other non-adaptive learning rate methods in both training speed and generalization.
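The abstract does not state the DM update rule itself. For reference, a plain SGD-with-momentum step, which DM extends, looks like the following; the `discount` parameter is a hypothetical placeholder (not from the paper) marking where a discount-momentum-style method could attenuate the accumulated velocity:

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9, discount=1.0):
    """One parameter update for SGD with momentum.

    `discount` is a hypothetical knob showing where a DM-style modification
    could scale the accumulated velocity; discount=1.0 recovers standard
    SGD with momentum.
    """
    velocity = discount * momentum * velocity - lr * grad
    return w + velocity, velocity
```

With `discount=1.0` this is exactly the non-adaptive momentum baseline the paper compares against; the actual DM rule should be taken from the article, not from this sketch.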


2007 ◽  
Vol 70 (16-18) ◽  
pp. 2687-2691 ◽  
Author(s):  
A. Nied ◽  
S.I. Seleme ◽  
G.G. Parma ◽  
B.R. Menezes

Author(s):  
Mahmoud Smaida ◽  
Serhii Yaroshchak ◽  
Ahmed Y. Ben Sasi

One of the most important hyper-parameters for model training and generalization is the learning rate. Many recent studies have shown that optimizing the learning rate schedule is very useful for training deep neural networks to obtain accurate and efficient results. In this paper, different learning rate schedules using several optimization techniques are compared in order to measure the accuracy of a convolutional neural network (CNN) model classifying four ophthalmic conditions. A deep learning CNN based on Keras and TensorFlow was deployed in Python on a database containing 1692 images of four types of ophthalmic cases: glaucoma, myopia, diabetic retinopathy, and normal eyes. The CNN model was trained on a Google Colab GPU with different learning rate schedules and adaptive learning algorithms: constant learning rate, time-based decay, step-based decay, exponential decay, and adaptive learning rate optimization techniques were all addressed. The Adam adaptive learning rate method outperformed the other techniques and achieved the best model accuracy of 92.58% on the training set and 80.49% on the validation set.
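The four non-adaptive schedules named above have standard closed forms; the sketch below shows one common parameterization of each (the decay constants are illustrative assumptions, since the abstract does not report the values used):

```python
import math

def constant(lr0, epoch):
    """Constant learning rate: the same lr0 at every epoch."""
    return lr0

def time_based_decay(lr0, epoch, decay=0.01):
    """Time-based decay: lr shrinks hyperbolically with the epoch count."""
    return lr0 / (1.0 + decay * epoch)

def step_decay(lr0, epoch, drop=0.5, every=10):
    """Step-based decay: halve the rate every `every` epochs (here drop=0.5)."""
    return lr0 * drop ** (epoch // every)

def exponential_decay(lr0, epoch, k=0.1):
    """Exponential decay: lr falls off as exp(-k * epoch)."""
    return lr0 * math.exp(-k * epoch)
```

In Keras these schedules are typically wired in via the `LearningRateScheduler` callback, which calls a function like the ones above once per epoch; Adam, by contrast, adapts per-parameter step sizes internally rather than following a fixed epoch schedule.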
