Performance of Levenberg-Marquardt Algorithm in Backpropagation Network Based on the Number of Neurons in Hidden Layers and Learning Rate

2020 ◽  
Vol 8 (1) ◽  
pp. 29
Author(s):  
Hindayati Mustafidah ◽  
Suwarsito Suwarsito

One of the supervised learning paradigms in artificial neural networks (ANN) that has seen great development is the backpropagation model. Backpropagation is a multi-layer perceptron learning algorithm that adjusts the weights connected to neurons in the hidden layers. The performance of the algorithm is influenced by several network parameters, including the number of neurons in the input layer, the maximum number of epochs, the learning rate (lr), the hidden layer configuration, and the resulting error (MSE). Tests conducted in previous studies found that the Levenberg-Marquardt training algorithm performs better than other algorithms in the backpropagation network, producing the smallest average error at a test level of α = 5% with 10 neurons in a hidden layer. The number of neurons in the hidden layers varies depending on the number of neurons in the input layer. In this study, the performance of the Levenberg-Marquardt training algorithm was analyzed with 5 neurons in the input layer, n neurons in the hidden layers (n = 2, 4, 5, 7, 9), and 1 neuron in the output layer. Performance analysis is based on the errors generated by the network. This study uses a mixed method, namely development research with quantitative and qualitative testing using ANOVA statistical tests. Based on the analysis, the Levenberg-Marquardt training algorithm produces the smallest error of 0.00014 ± 0.00018 with 9 neurons in the hidden layers and lr = 0.5. Keywords: hidden layer, backpropagation, MSE, learning rate, Levenberg-Marquardt.
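The Levenberg-Marquardt step that this abstract benchmarks is a damped Gauss-Newton update, δ = (JᵀJ + μI)⁻¹Jᵀr, where μ is adapted as training proceeds. As a minimal illustrative sketch (a one-parameter least-squares fit on toy data, not the paper's network):

```python
# Levenberg-Marquardt on a one-parameter model y = a*x (illustrative toy data).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

def mse(a):
    return sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

a, mu = 0.0, 1.0  # initial parameter and damping factor
for _ in range(50):
    # residuals r_i = y_i - a*x_i; Jacobian of r w.r.t. a is -x_i
    jtj = sum(x * x for x in xs)                          # J^T J (scalar here)
    jtr = sum(x * (y - a * x) for x, y in zip(xs, ys))    # J^T r
    step = jtr / (jtj + mu)                               # (J^T J + mu I)^-1 J^T r
    if mse(a + step) < mse(a):
        a, mu = a + step, mu * 0.5   # accept step, trust the local model more
    else:
        mu *= 2.0                    # reject step, damp harder

print(round(a, 3))  # converges to the least-squares slope, ~1.99
```

Small μ makes the step Gauss-Newton-like (fast near the minimum); large μ makes it a short gradient step (robust far away), which is why LM tends to reach small MSE in few epochs.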

2019 ◽  
Vol 8 (4) ◽  
pp. 2349-2353

Backpropagation, as a learning method in artificial neural networks, is widely used to solve problems in various fields of life, including education. In this field, backpropagation is used to predict the validity of exam questions, student achievement, and outcomes in new student admission systems. Whether a training algorithm's performance is optimal can be seen from the error (MSE) generated by the network: the smaller the error produced, the more optimal the performance of the algorithm. Based on previous studies, the most optimal training algorithm measured by smallest error was Levenberg-Marquardt, with an average MSE = 0.001 in the 5-10-1 model at a level of α = 5%. In this study, we test the Levenberg-Marquardt algorithm with 8, 12, 14, 16, and 19 neurons in the hidden layer, at learning rates (LR) of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1. This study uses a mixed method, namely development with quantitative and qualitative testing using ANOVA and correlation analysis. The research uses random data with ten neurons in the input layer and one neuron in the output layer. Based on ANOVA analysis of the five variations in the number of hidden-layer neurons, the results showed that at α = 5%, as in previous research, the Levenberg-Marquardt algorithm produced the smallest MSE of 0.00019584038 ± 0.000239300998. This MSE was reached with 16 hidden neurons at LR = 0.8.
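The ANOVA comparison of mean MSE across hidden-layer sizes reduces to a one-way F statistic. A self-contained sketch (the group values below are hypothetical MSE samples for illustration, not the study's data):

```python
# One-way ANOVA F statistic for comparing mean MSE across groups
# (e.g., repeated training runs with different hidden-layer sizes).
def anova_f(groups):
    k = len(groups)                            # number of groups
    n = sum(len(g) for g in groups)            # total observations
    grand = sum(sum(g) for g in groups) / n    # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[0.0010, 0.0012, 0.0011],   # hypothetical MSEs, 8 hidden neurons
          [0.0007, 0.0008, 0.0009],   # 12 hidden neurons
          [0.0002, 0.0003, 0.0002]]   # 16 hidden neurons
print(anova_f(groups))  # large F -> group means differ significantly
```

The computed F is then compared against the F distribution's critical value at α = 5% with (k−1, n−k) degrees of freedom to decide whether hidden-layer size affects MSE.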


2019 ◽  
Vol 16 (1) ◽  
pp. 0116
Author(s):  
Al-Saif et al.

In this paper, we focus on designing a feed-forward neural network (FFNN) for solving Mixed Volterra-Fredholm Integral Equations (MVFIEs) of the second kind in two dimensions. In our method, we present a multi-layer model consisting of a hidden layer with five hidden units (neurons) and one linear output unit. The log-sigmoid transfer function is used as the activation of each hidden unit, and the Levenberg-Marquardt algorithm is used for training. A comparison between the results of numerical experiments and the analytic solutions of some examples has been carried out in order to justify the efficiency and the accuracy of our method.
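The described topology (two inputs for the 2-D domain, five log-sigmoid hidden units, one linear output) has a straightforward forward pass. A sketch with arbitrary placeholder weights, since the paper's trained values are not given here:

```python
import math

def logsig(z):
    """Log-sigmoid transfer function, 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def ffnn(x, y, W, b, v, c):
    """2 inputs -> 5 log-sigmoid hidden units -> 1 linear output."""
    hidden = [logsig(W[j][0] * x + W[j][1] * y + b[j]) for j in range(5)]
    return sum(v[j] * hidden[j] for j in range(5)) + c

# Placeholder parameters (illustrative only; training would fit these
# so the network output satisfies the integral equation).
W = [[0.5, -0.3], [0.1, 0.8], [-0.6, 0.2], [0.9, 0.4], [-0.2, -0.7]]
b = [0.1, -0.2, 0.3, 0.0, 0.05]
v = [1.0, -0.5, 0.7, 0.2, -0.9]
c = 0.1

print(ffnn(0.5, 0.5, W, b, v, c))
```

In the ANN-for-integral-equations approach, training minimizes the residual of the integral equation at collocation points over the 2-D domain, so the network itself becomes the approximate solution.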


2001 ◽  
Vol 11 (06) ◽  
pp. 573-583
Author(s):  
AKITO SAKURAI

We propose a stochastic learning algorithm for multilayer perceptrons of linear-threshold function units, which theoretically converges with probability one and experimentally exhibits a 100% convergence rate and remarkable speed on parity and classification problems with typical generalization accuracy. For learning the n-bit parity function with n hidden units, the algorithm converged on all the trials we tested (n = 2 to 12) after 5.8·4.1^n presentations, taking 0.23·4.0^(n-6) seconds on a 533 MHz Alpha 21164A chip on average, which is five to ten times faster than the Levenberg-Marquardt algorithm with restarts. For a medium-size classification problem known as Thyroid in the UCI repository, the algorithm is faster than, and comparable in generalization accuracy to, the standard backpropagation and Levenberg-Marquardt algorithms.
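That n-bit parity is representable with exactly n linear-threshold hidden units follows from a classic textbook construction (shown below for illustration; these are not the paper's learned weights): hidden unit k fires when at least k inputs are 1, and alternating ±1 output weights then detect whether the count is odd.

```python
# n-bit parity with n linear-threshold hidden units (classic construction).
def parity_net(bits):
    s = sum(bits)
    # Hidden unit k (threshold k) fires when at least k inputs are on.
    hidden = [1 if s >= k else 0 for k in range(1, len(bits) + 1)]
    # Output unit: weights +1, -1, +1, ... with threshold between 0 and 1.
    out = sum((-1) ** k * h for k, h in enumerate(hidden))
    return 1 if out >= 1 else 0

print(parity_net([1, 0, 1, 1]))  # three 1s -> odd -> 1
```

If s inputs are on, the first s hidden units fire and the alternating sum telescopes to 1 when s is odd and 0 when s is even, which is exactly the parity function.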


Author(s):  
Untari Novia Wisesty

Eye state detection is one of several tasks toward a Brain-Computer Interface system, since the eye state can be read from brain signals. This paper uses the EEG Eye State dataset (Rosler, 2013) from the UCI Machine Learning Repository. The dataset consists of 14 continuous EEG measurements over 117 seconds. The eye states were marked as "1" or "0": "1" indicates the eye-closed state and "0" the eye-open state. The proposed scheme uses a multi-layer neural network with the Levenberg-Marquardt optimization learning algorithm as the classification method. The Levenberg-Marquardt method is used to optimize the learning algorithm of the neural network because the standard algorithm has a weak convergence rate and needs many iterations to reach a minimum error. Based on the analysis of the experiments on the EEG dataset, it can be concluded that the proposed scheme can be implemented to detect the eye state. The best accuracy was gained from the combination of the sigmoid function, data normalization, and 31 neurons (95.71%) for one hidden layer, and 98.912% for two hidden layers with 39 and 47 neurons and a linear function.
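Data normalization, one of the variables tested above, is commonly done per EEG channel with min-max scaling so every channel lies in [0, 1] before training. A sketch with toy values (illustrative; the exact scaling used in the paper may differ):

```python
# Per-channel min-max normalization to [0, 1] before network training.
def normalize(channels):
    normed = []
    for col in channels:
        lo, hi = min(col), max(col)
        # Guard against a constant channel (zero range).
        normed.append([(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col])
    return normed

# Toy samples from two EEG channels (raw values are in the thousands,
# which is why scaling matters for sigmoid units).
channels = [[4320.0, 4350.0, 4290.0],
            [4055.0, 4070.0, 4060.0]]
print(normalize(channels))
```

Without scaling, raw EEG magnitudes drive sigmoid units deep into saturation, where gradients vanish and training stalls; normalization keeps activations in the responsive range.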


2012 ◽  
Vol 6-7 ◽  
pp. 1098-1102 ◽  
Author(s):  
Dan Dan Cui ◽  
Fei Liu

The BP algorithm is a typical artificial neural network learning algorithm. The main structure consists of an input layer, one or more hidden layers, and an output layer, each with a number of neurons; the output value of each node is determined by its input values, connection weights, activation function, and threshold. The Internet of Things builds on the information carrier of the traditional telecommunications network so that ordinary physical objects can be individually addressed and interoperate over the network. The paper puts forward the application of a BP neural network in the Internet of Things. The experiment shows that BP is superior to RFID in the Internet of Things.
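The node rule described above (output determined by inputs, weights, activation function, and threshold) can be sketched for a single sigmoid node; the weights and threshold below are arbitrary examples:

```python
import math

# Output of one BP-network node: weighted sum of inputs, minus the
# node's threshold (bias), passed through a sigmoid activation.
def node_output(inputs, weights, threshold):
    z = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-z))

print(node_output([1.0, 0.5], [0.4, -0.2], 0.1))  # sigmoid(0.2) ~ 0.5498
```

A full layer applies this rule to every node, and backpropagation adjusts the weights and thresholds layer by layer from the output error.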


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Sara Nanvakenari ◽  
Mitra Ghasemi ◽  
Kamyar Movagharnejad

Abstract: In this study, the viscosity of hydrocarbon binary mixtures has been predicted with an artificial neural network combined with a group contribution method (ANN-GCM), utilizing various training algorithms including Scaled Conjugate Gradient (SCG), Levenberg-Marquardt (LM), Resilient Backpropagation (RP), and Gradient Descent with variable learning rate backpropagation (GDX). Moreover, different transfer functions such as tan-sigmoid (tansig), log-sigmoid (logsig), and purelin were investigated in the hidden and output layers, and their effects on network precision were estimated. Accordingly, 796 experimental viscosity data points of hydrocarbon binary mixtures were collected from the literature for a wide range of operating parameters. The temperature, pressure, mole fraction, molecular weight, and structural groups of the system were selected as the independent input parameters. The statistical analysis, with R² = 0.99, revealed a small average absolute relative deviation (AARD) of 1.288 and a mean square error (MSE) of 0.001018 when comparing the ANN-predicted data with the experimental data. The neural network configuration was also optimized. Based on the results, the network with one hidden layer of 27 neurons, the Levenberg-Marquardt training algorithm, the tansig transfer function for the hidden layer, and the purelin transfer function for the output layer constituted the best network structure. Further, the weights and biases were optimized to minimize the error. The results of the present study were then compared with data from some previous methods, suggesting that this work predicts the viscosity of hydrocarbon binary mixtures with a better AARD. In general, the results indicate that the combined ANN-GCM model is capable of predicting the viscosity of hydrocarbon binary mixtures with good accuracy.
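The two error metrics reported above, AARD (in percent) and MSE, are standard and easy to state explicitly. A sketch with toy predicted/experimental viscosity values (not the study's 796-point dataset):

```python
# AARD (%) and MSE between predicted and experimental values.
def aard(pred, exp):
    return 100.0 / len(exp) * sum(abs((p - e) / e) for p, e in zip(pred, exp))

def mse(pred, exp):
    return sum((p - e) ** 2 for p, e in zip(pred, exp)) / len(exp)

pred = [0.52, 1.01, 2.05]   # toy predicted viscosities
exp  = [0.50, 1.00, 2.00]   # toy experimental viscosities
print(aard(pred, exp), mse(pred, exp))
```

AARD weights every point by its relative error, so it is not dominated by the high-viscosity mixtures the way raw MSE can be, which is why both are usually reported together.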


Author(s):  
Safae El Abkari ◽  
Jamal El Mhamdi ◽  
El Hassan El Abkari

Locating services have come under the spotlight in recent years in various applications. However, locating methods that use received signal strength (RSS) have low accuracy due to signal fluctuations. For this purpose, we present a Wi-Fi-based locating system using an artificial neural network to enhance the positioning performance. We optimized the Levenberg-Marquardt algorithm to find the best configuration of a multi-layer time-delay perceptron neural network. We achieved an average error of 10.3 centimeters with a grid of 0.4 meters in four tests. Yet, due to the instability of the received signal strength, RSS-based locating systems present a limitation in resolution finesse that depends on the grid size.
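The average positioning error quoted above is typically the mean Euclidean distance between estimated and true positions. A sketch with hypothetical coordinates on a 0.4 m grid (illustrative, not the paper's measurements):

```python
import math

# Mean 2-D positioning error, reported in centimeters.
def mean_error_cm(estimates, truths):
    errs = [math.hypot(ex - tx, ey - ty)
            for (ex, ey), (tx, ty) in zip(estimates, truths)]
    return 100.0 * sum(errs) / len(errs)  # meters -> centimeters

# Hypothetical estimated vs. true positions (meters) on a 0.4 m grid.
est  = [(0.42, 0.05), (0.78, 0.41), (1.25, 0.79), (0.03, 1.18)]
true = [(0.40, 0.00), (0.80, 0.40), (1.20, 0.80), (0.00, 1.20)]
print(mean_error_cm(est, true))
```

Because the network is trained on fingerprints collected at grid points, the achievable resolution is tied to the grid spacing, which is the limitation the abstract notes.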

