Fast Fading Channel Neural Equalization Using Levenberg-Marquardt Training Algorithm and Pulse Shaping Filters

Author(s):  
Tiago Mota ◽  
Jorgean Leal ◽  
Antônio Lima
2019 ◽  
Vol 16 (1) ◽  
pp. 01-16
Author(s):  
Al-Saif et al.

In this paper, we focus on designing a feed-forward neural network (FFNN) for solving Mixed Volterra-Fredholm Integral Equations (MVFIEs) of the second kind in two dimensions. In our method, we present a multi-layer model consisting of one hidden layer with five hidden units (neurons) and one linear output unit. The log-sigmoid transfer function is used as the activation of each hidden unit, and the Levenberg-Marquardt algorithm is used for training. A comparison between the numerical results and the analytic solutions of several examples is carried out to demonstrate the efficiency and accuracy of the method.
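
For concreteness, the sketch below (not the authors' code) implements the stated architecture in NumPy: two inputs, one hidden layer of five log-sigmoid units, a linear output, and a basic Levenberg-Marquardt training loop. The target function standing in for the exact MVFIE solution is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(theta):
    # theta = [W1 (5x2), b1 (5), w2 (5), b2 (1)] flattened, 21 parameters
    return theta[:10].reshape(5, 2), theta[10:15], theta[15:20], theta[20]

def forward(theta, X):
    W1, b1, w2, b2 = unpack(theta)
    H = sigmoid(X @ W1.T + b1)          # (n, 5) log-sigmoid hidden layer
    return H @ w2 + b2, H               # linear output unit

def jacobian(theta, X):
    # Analytic Jacobian of the network output w.r.t. all 21 parameters
    W1, b1, w2, b2 = unpack(theta)
    H = sigmoid(X @ W1.T + b1)
    dH = H * (1 - H)                    # log-sigmoid derivative
    J = np.zeros((X.shape[0], 21))
    for j in range(5):                  # weights into hidden unit j
        J[:, 2 * j:2 * j + 2] = (w2[j] * dH[:, j])[:, None] * X
    J[:, 10:15] = w2 * dH               # hidden biases
    J[:, 15:20] = H                     # output weights
    J[:, 20] = 1.0                      # output bias
    return J

# Training data: sample the 2-D domain; the target is a hypothetical
# stand-in for the analytic solution of an MVFIE.
X = rng.uniform(0, 1, size=(100, 2))
t = np.sin(X[:, 0]) + X[:, 1] ** 2

theta, mu = rng.normal(scale=0.5, size=21), 1e-2
for epoch in range(200):
    y, _ = forward(theta, X)
    r = y - t                           # residuals
    J = jacobian(theta, X)
    # LM step: solve (J^T J + mu I) delta = -J^T r
    step = np.linalg.solve(J.T @ J + mu * np.eye(21), -J.T @ r)
    if np.sum((forward(theta + step, X)[0] - t) ** 2) < np.sum(r ** 2):
        theta += step
        mu = max(mu / 10, 1e-12)        # accept: move toward Gauss-Newton
    else:
        mu *= 10                        # reject: move toward gradient descent

print("final MSE:", np.mean((forward(theta, X)[0] - t) ** 2))
```

The damping factor mu is what distinguishes Levenberg-Marquardt from plain Gauss-Newton: large mu makes the step behave like gradient descent, small mu like Gauss-Newton, and the accept/reject rule adapts it per iteration.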


2019 ◽  
Vol 8 (4) ◽  
pp. 2349-2353

Backpropagation, as a learning method in artificial neural networks, is widely used to solve problems in various fields, including education, where it is applied to predict the validity of test items, student achievement, and new student admissions. The performance of a training algorithm is judged by the error (MSE) the network produces: the smaller the error, the better the algorithm performs. Previous studies found that the most optimal training algorithm, measured by smallest error, was Levenberg-Marquardt, with an average MSE = 0.001 in a 5-10-1 model at significance level α = 5%. In this study, we test the Levenberg-Marquardt algorithm with 8, 12, 14, 16, and 19 neurons in the hidden layer, at learning rates (LR) of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1. The study uses a mixed method, namely development with quantitative and qualitative testing using ANOVA and correlation analysis. The experiments use random data with ten neurons in the input layer and one neuron in the output layer. Based on the ANOVA analysis of the five hidden-layer sizes, at α = 5% as in previous research, the Levenberg-Marquardt algorithm produced the smallest MSE of 0.00019584038 ± 0.000239300998, reached with 16 hidden neurons at LR = 0.8.
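
As a rough illustration of the experimental grid (not the authors' code), the sketch below sweeps the stated hidden-layer sizes and learning rates over random 10-input, 1-output data and records the MSE of each configuration. Classical Levenberg-Marquardt has no learning-rate parameter, so the assumption here that LR scales the accepted LM step is ours; the Jacobian is taken numerically to keep the sketch short.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(60, 10))   # ten neurons in the input layer
t = rng.uniform(-1, 1, size=60)         # one neuron in the output layer

def forward(theta, X, h):
    n_in = X.shape[1]
    W1 = theta[:h * n_in].reshape(h, n_in)
    b1 = theta[h * n_in:h * n_in + h]
    w2 = theta[h * n_in + h:h * n_in + 2 * h]
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))   # log-sigmoid hidden layer
    return H @ w2 + theta[-1]                    # linear output

def lm_train(X, t, h, lr, epochs=50, mu=1e-2, eps=1e-6):
    """Levenberg-Marquardt with the learning rate scaling the step
    (an assumption, as noted above); returns the final MSE."""
    n_par = h * X.shape[1] + 2 * h + 1
    theta = rng.normal(scale=0.3, size=n_par)
    for _ in range(epochs):
        r = forward(theta, X, h) - t
        # Central-difference Jacobian of the residuals
        J = np.empty((len(t), n_par))
        for k in range(n_par):
            d = np.zeros(n_par); d[k] = eps
            J[:, k] = (forward(theta + d, X, h)
                       - forward(theta - d, X, h)) / (2 * eps)
        step = np.linalg.solve(J.T @ J + mu * np.eye(n_par), -J.T @ r)
        new_theta = theta + lr * step   # LR-scaled step (assumption)
        if np.sum((forward(new_theta, X, h) - t) ** 2) < np.sum(r ** 2):
            theta, mu = new_theta, max(mu / 10, 1e-12)
        else:
            mu *= 10
    return np.mean((forward(theta, X, h) - t) ** 2)

hidden_sizes = [8, 12, 14, 16, 19]
learning_rates = [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5,
                  0.6, 0.7, 0.8, 0.9, 1.0]
results = {(h, lr): lm_train(X, t, h, lr)
           for h, lr in product(hidden_sizes, learning_rates)}
best = min(results, key=results.get)
print("best (hidden, LR):", best, "MSE:", results[best])
```

In the study itself, each configuration would be repeated several times so that the resulting MSE samples can feed the ANOVA and correlation analysis described in the abstract.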

