Artificial Neural Network for Predicting Student Learning Scores Using the Backpropagation Method (Case study: SMP Negeri 1 Salapian)

2021 ◽  
Vol 1 (2) ◽  
pp. 54-58
Author(s):  
Ninta Liana Br Sitepu

Artificial neural networks are one of the artificial representations of the human brain that attempt to simulate its learning process. Backpropagation is a gradient descent method for minimizing the squared output error. Backpropagation works through an iterative process over a set of sample data (training data), comparing the network's predicted value with each sample. In each iteration, the connection weights in the network are modified to minimize the Mean Squared Error between the network's predicted value and the actual value. The purpose of this thesis is to help teachers at SMP Negeri 1 Salapian predict student learning scores. Using a maximum epoch of 10,000, a target error of 0.01, and a learning rate of 0.3, the calculation yields a ratio of 0.7517 for A, indicating a decrease in value, and 0.9202 for D, indicating an increase.
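As a rough illustration of the training procedure described above, the sketch below runs plain backpropagation (gradient descent on the mean squared error) with the quoted settings: learning rate 0.3, target error 0.01, and a maximum of 10,000 epochs. The single-hidden-layer architecture, the sigmoid activations, and the synthetic data are assumptions for illustration only, not details taken from the thesis.

```python
import numpy as np

# Minimal single-hidden-layer backpropagation sketch (architecture and data are
# illustrative assumptions, not taken from the thesis).
rng = np.random.default_rng(0)

X = rng.random((20, 4))          # 20 students, 4 input features (e.g., prior grades)
y = rng.random((20, 1))          # target learning scores, scaled to [0, 1]

n_hidden = 5
W1 = rng.standard_normal((4, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.1
b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.3              # settings quoted in the abstract
max_epoch = 10000
target_error = 0.01

for epoch in range(max_epoch):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Mean squared error between predicted and actual values
    err = y_hat - y
    mse = np.mean(err ** 2)
    if mse < target_error:
        break

    # Backward pass: gradient of the MSE (up to a constant factor) w.r.t. the weights
    d_out = err * y_hat * (1 - y_hat) / len(X)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    W2 -= learning_rate * h.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0)
    W1 -= learning_rate * X.T @ d_hid
    b1 -= learning_rate * d_hid.sum(axis=0)

print(f"stopped at epoch {epoch}, MSE = {mse:.4f}")
```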

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Jinhuan Duan ◽  
Xianxian Li ◽  
Shiqi Gao ◽  
Zili Zhong ◽  
Jinyan Wang

With the vigorous development of artificial intelligence technology, various engineering applications have been implemented one after another. The gradient descent method plays an important role in solving various optimization problems due to its simple structure, good stability, and easy implementation. However, in multinode machine learning systems, gradients usually need to be shared, which can cause privacy leakage, since attackers can infer training data from the gradient information. In this paper, to prevent gradient leakage while preserving model accuracy, we propose the super stochastic gradient descent approach, which updates parameters by concealing the modulus length of each gradient vector and converting it into a unit vector. Furthermore, we analyze the security of the super stochastic gradient descent approach and demonstrate that our algorithm can defend against attacks on the gradient. Experimental results show that our approach is clearly superior to prevalent gradient descent approaches in terms of accuracy, robustness, and adaptability to large-scale batches. Interestingly, our algorithm can also resist model poisoning attacks to a certain extent.
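The core idea described in the abstract is to share only the direction of each gradient by rescaling it to unit length before it leaves a node. The sketch below illustrates that idea in a toy multinode setting; the exact update rule, aggregation scheme, and security analysis of the proposed super stochastic gradient descent are in the paper, so this is a hedged approximation rather than the authors' implementation.

```python
import numpy as np

def unit_gradient(grad, eps=1e-12):
    """Conceal the modulus length of a gradient by rescaling it to a unit vector.

    Illustrative sketch only; the paper's actual update rule may differ.
    """
    norm = np.linalg.norm(grad)
    return grad / (norm + eps)

def aggregate_and_step(params, node_grads, lr=0.1):
    # Toy multinode setting: each node shares only its gradient direction.
    shared = [unit_gradient(g) for g in node_grads]
    avg_direction = np.mean(shared, axis=0)
    return params - lr * avg_direction

params = np.zeros(3)
node_grads = [np.array([4.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
params = aggregate_and_step(params, node_grads)
print(params)   # only directions were shared; the raw gradient magnitudes stay local
```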


2020 ◽  
Author(s):  
Japheth E. Gado ◽  
Gregg T. Beckham ◽  
Christina M. Payne

Accurate prediction of the optimal catalytic temperature (Topt) of enzymes is vital in biotechnology, as enzymes with high Topt values are desired for enhanced reaction rates. Recently, a machine-learning method (TOME) for predicting Topt was developed. TOME was trained on a normally distributed dataset with a median Topt of 37°C and less than five percent of Topt values above 85°C, limiting the method's predictive capabilities for thermostable enzymes. Due to the distribution of the training data, the mean squared error on Topt values greater than 85°C is nearly an order of magnitude higher than the error on values between 30 and 50°C. In this study, we apply ensemble learning and resampling strategies that tackle the data imbalance to significantly decrease the error on high Topt values (>85°C) by 60% and increase the overall R2 value from 0.527 to 0.632. The revised method, TOMER, and the resampling strategies applied in this work are freely available to other researchers as a Python package on GitHub.
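The general recipe described here, resampling the under-represented high-Topt region and averaging an ensemble of regressors, can be sketched as follows. The oversampling threshold, the random-forest base learners, and the synthetic data are illustrative assumptions; TOMER's actual pipeline and tuned hyperparameters are in the authors' Python package on GitHub.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch of resampling + ensembling for an imbalanced regression target.
rng = np.random.default_rng(0)

X = rng.random((500, 10))
y = 30 + 60 * rng.random(500) ** 3        # synthetic, skewed "Topt-like" targets

def oversample_high(X, y, threshold=85.0, factor=5):
    """Duplicate samples above the threshold to rebalance the target distribution."""
    rare = y > threshold
    X_res = np.vstack([X] + [X[rare]] * factor)
    y_res = np.concatenate([y] + [y[rare]] * factor)
    return X_res, y_res

X_res, y_res = oversample_high(X, y)

# Small ensemble of regressors trained on bootstrap resamples; predictions are averaged.
models = []
for seed in range(5):
    idx = rng.integers(0, len(X_res), len(X_res))
    model = RandomForestRegressor(n_estimators=50, random_state=seed)
    model.fit(X_res[idx], y_res[idx])
    models.append(model)

y_pred = np.mean([m.predict(X) for m in models], axis=0)
```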


2020 ◽  
Vol 44 (2) ◽  
pp. 282-289
Author(s):  
I.M. Kulikovskikh

Previous research in deep learning indicates that iterations of gradient descent over separable data converge toward the L2 maximum-margin solution. Even in the absence of explicit regularization, the decision boundary continues to change after the training classification error reaches zero. This feature of so-called "implicit regularization" allows gradient methods to use more aggressive learning rates, resulting in substantial computational savings. However, even though gradient descent generalizes well as it moves toward the optimal solution, the rate of convergence to that solution is much slower than the rate of convergence of the loss function itself with a fixed step size. The present study puts forward a generalized logistic loss function that involves the optimization of hyperparameters, which results in a faster convergence rate while keeping the same regret bound as the gradient descent method. The results of computational experiments on the MNIST and Fashion MNIST benchmark datasets for image classification demonstrated the viability of the proposed approach to reducing computational costs and outlined directions for future research.
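The baseline phenomenon the paper builds on can be reproduced with a few lines of plain gradient descent on the standard logistic loss over linearly separable data: the weight norm keeps growing while the weight direction drifts slowly toward the L2 maximum-margin separator. The sketch below shows only this baseline behaviour; the generalized logistic loss and the hyperparameter optimization proposed in the paper are not reproduced here.

```python
import numpy as np

# Gradient descent on the plain logistic loss over linearly separable 2-D data.
# The weight norm grows without bound while the direction w / ||w|| slowly
# approaches the L2 max-margin separator (the "implicit regularization" effect).
rng = np.random.default_rng(0)

X_pos = rng.normal(loc=[2, 2], scale=0.3, size=(20, 2))
X_neg = rng.normal(loc=[-2, -2], scale=0.3, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(20), -np.ones(20)])

w = np.zeros(2)
lr = 0.5                                  # separable data tolerates aggressive steps

for t in range(20000):
    margins = y * (X @ w)
    # Gradient of the mean logistic loss log(1 + exp(-margin));
    # the exponent is clipped to avoid overflow warnings for large margins.
    sig = 1.0 / (1.0 + np.exp(np.minimum(margins, 30.0)))
    grad = -(y[:, None] * X * sig[:, None]).mean(axis=0)
    w -= lr * grad

print("direction:", w / np.linalg.norm(w), "norm:", np.linalg.norm(w))
```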


2011 ◽  
Vol 60 (2) ◽  
pp. 248-255 ◽  
Author(s):  
Sangmun Shin ◽  
Funda Samanlioglu ◽  
Byung Rae Cho ◽  
Margaret M. Wiecek
