Backpropagation Learning
Recently Published Documents


TOTAL DOCUMENTS

155
(FIVE YEARS 16)

H-INDEX

25
(FIVE YEARS 1)

2021 ◽  
Vol 14 (2) ◽  
pp. 118-124
Author(s):  
Dedi Rosadi ◽  
Deasy Arisanty ◽  
Dina Agustina

Forest fires are important catastrophic events with a great impact on the environment, infrastructure, and human life. In this study, we discuss a method for predicting the size of forest fires using a hybrid approach that combines Fuzzy C-Means clustering (FCM) with Neural Network (NN) classification, trained either with backpropagation learning or with the extreme learning machine approach. For comparison, we consider a similar hybrid approach, namely FCM combined with the classical Support Vector Machine (SVM) classifier. In the empirical study, we apply these methods to several meteorological and Forest Weather Index (FWI) variables. We found that hybrid FCM-SVM performs best on the training data, while hybrid FCM-NN with backpropagation performs best on the testing data.
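The clustering stage of such a hybrid pipeline can be illustrated with a minimal Fuzzy C-Means implementation. This is a generic sketch of the standard FCM update equations on synthetic two-dimensional data, not the authors' code or dataset; the resulting crisp labels would then be fed to a downstream classifier (NN or SVM).

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means clustering.

    Returns cluster centers and a membership matrix U, where U[k, i]
    is the degree to which sample k belongs to cluster i; each row
    of U sums to 1. m > 1 is the fuzziness exponent.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships, normalized per sample.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centers are membership-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # avoid division by zero
        # Standard FCM membership update.
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated synthetic blobs as stand-in data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X, n_clusters=2)
labels = U.argmax(axis=1)  # crisp labels fed to the classification stage
```

In the hybrid scheme, the membership (or hardened cluster) information augments or partitions the input before the NN or SVM classifier is trained.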


2021 ◽  
Author(s):  
Mabel Frias ◽  
Gonzalo Napoles ◽  
Koen Vanhoof ◽  
Yaima Filiberto ◽  
Rafael Bello

2021 ◽  
pp. 1-33
Author(s):  
Andreas Knoblauch

Abstract Supervised learning corresponds to minimizing a loss or cost function that expresses the differences between the model predictions yn and the target values tn given by the training data. In neural networks, this means backpropagating error signals through the transposed weight matrices from the output layer toward the input layer. For this, error signals in the output layer are typically initialized by the difference yn - tn, which is optimal for several commonly used loss functions such as cross-entropy or the sum of squared errors. Here I evaluate a more general error initialization method using power functions |yn - tn|q for q>0, corresponding to a new family of loss functions that generalizes cross-entropy. Surprisingly, experiments on various learning tasks reveal that a proper choice of q can significantly improve the speed and convergence of backpropagation learning, particularly in deep and recurrent neural networks. The results suggest two main reasons for the observed improvements. First, compared to cross-entropy, the new loss functions provide better fits to the distribution of error signals in the output layer and therefore maximize the model's likelihood more efficiently. Second, the new error initialization procedure may often provide a better gradient-to-loss ratio over a broad range of neural output activity, thereby avoiding flat loss landscapes with vanishing gradients.


Author(s):  
Andi Hamdianah

Rice is the staple food for most of the population of Indonesia and is processed from the rice plant. To meet food needs and ensure food security in Indonesia, predictions of the annual rice yield in a region are required. Because weather strongly affects production, this study uses weather parameters as input. These inputs are fed to a Recurrent Neural Network trained with backpropagation learning, and the results are compared with a Neural Network trained with backpropagation to determine the more effective method. In this study, the Recurrent Neural Network produced better prediction results than the Neural Network. Based on the computational experiments, the Recurrent Neural Network obtained a Mean Square Error of 0.000878 and a Mean Absolute Percentage Error of 10.8832%, while the Neural Network obtained a Mean Square Error of 0.00104 and a Mean Absolute Percentage Error of 10.3804%.
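The two error measures reported above have standard definitions, sketched below with illustrative arrays (not the study's yield data):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Square Error: average of the squared deviations."""
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent.

    Assumes all true values are nonzero.
    """
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative true and predicted values only.
y_true = np.array([1.0, 2.0, 4.0])
y_pred = np.array([1.1, 1.8, 4.2])
err_mse = mse(y_true, y_pred)
err_mape = mape(y_true, y_pred)
```

Note that the two metrics weight errors differently: MSE penalizes large absolute deviations, while MAPE is relative to the magnitude of the true value, which is why the two models above can rank differently under each metric.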

