Layer-by-layer Quantization Method for Neural Network Parameters

Author(s): Jiali Ma, Zhiqiang Zhu, Leyu Dai, Songhui Guo
2018, Vol. 145, pp. 488-494
Author(s): Aleksandr Sboev, Alexey Serenko, Roman Rybka, Danila Vlasov, Andrey Filchenkov
2021, Vol. 2083 (3), pp. 032010
Author(s): Rong Ma

Abstract: The traditional BP neural network has difficulty reaching the target accuracy when forecasting waterway cargo turnover. To improve forecast accuracy, a waterway cargo turnover forecasting model was built in which a genetic algorithm optimizes the neural network parameters. The genetic algorithm avoids the trap that ordinary iterative methods easily fall into, namely the "endless loop" of getting stuck at a local minimum, while keeping computation time low and robustness high. The genetic-algorithm-optimized BP neural network was used to forecast waterway cargo turnover, and an empirical analysis of the forecast was carried out. The results show that the BP neural network optimized by the genetic algorithm predicts waterway cargo turnover more accurately than the traditional BP neural network, and that the optimized model captures the long-term characteristics of changes in waterway cargo turnover, so its prediction performance is far better than that of the traditional neural network.
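A minimal sketch of the general idea (not the paper's implementation): a real-coded genetic algorithm searches for good initial weights of a small one-hidden-layer network, and plain BP (gradient descent) then fine-tunes from that starting point. The toy data, network size, and all GA settings below are illustrative assumptions.

```python
# Sketch: GA searches over flat weight vectors, then BP refines the best one.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for a cargo-turnover series (illustrative only).
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X @ np.array([1.5, -0.7, 0.3]))[:, None]

N_IN, N_HID, N_OUT = 3, 8, 1
N_PARAMS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # weights + biases

def unpack(theta):
    """Split a flat parameter vector into weight matrices and biases."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = theta[i:]
    return W1, b1, W2, b2

def mse(theta):
    """Fitness = mean squared error of the network defined by theta."""
    W1, b1, W2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# --- Genetic algorithm over flat weight vectors ---
POP, GENS, MUT_STD = 40, 60, 0.1
pop = rng.normal(0, 0.5, size=(POP, N_PARAMS))
for gen in range(GENS):
    fitness = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:POP // 2]]        # truncation selection
    children = []
    while len(children) < POP - len(parents):
        pa, pb = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_PARAMS) < 0.5                 # uniform crossover
        child = np.where(mask, pa, pb)
        child = child + rng.normal(0, MUT_STD, N_PARAMS)  # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(ind) for ind in pop])]

# --- Plain BP (gradient descent) fine-tuning from the GA-found starting point ---
W1, b1, W2, b2 = (p.copy() for p in unpack(best))
lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y
    dW2 = h.T @ err / len(X);  db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X);   db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("MSE after GA init + BP fine-tuning:",
      float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```

The division of labour is the point of such hybrids: the GA performs a global search that is less prone to stalling near a poor local minimum, and BP then does the fast local refinement.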


2008, Vol. 20 (11), pp. 2757-2791
Author(s): Yoshifusa Ito

We have constructed one-hidden-layer neural networks capable of approximating polynomials and their derivatives simultaneously. Generally, neural network parameters trained at later steps of BP training are more difficult to optimize than those trained at the first step; taking this into account, we suppressed the number of parameters of the former type. We measure the degree of approximation both in the uniform norm on compact sets and in the Lp-norm on the whole space with respect to probability measures.
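As an illustrative sketch (not Ito's construction), one can fit a one-hidden-layer network with a smooth activation to a polynomial and then check how closely the network's exact derivative tracks the polynomial's derivative on a compact interval. The polynomial, network width, and training settings below are assumptions.

```python
# Sketch: fit f(x) with a one-hidden-layer tanh network, then compare the
# network's analytic derivative against f'(x) in the sup norm on [-1, 1].
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 400)[:, None]
f = x ** 3 - 0.5 * x           # target polynomial
df = 3 * x ** 2 - 0.5          # its derivative

H = 20                          # hidden units
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):           # gradient descent on squared error for f only
    h = np.tanh(x @ W1 + b1)
    err = h @ W2 + b2 - f
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

# Network derivative by the chain rule: sum_j W2[j] * (1 - tanh^2) * W1[j].
h = np.tanh(x @ W1 + b1)
net_deriv = ((1 - h ** 2) * W1) @ W2

print("sup|f - net|  :", float(np.max(np.abs(h @ W2 + b2 - f))))
print("sup|f' - net'|:", float(np.max(np.abs(net_deriv - df))))
```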


2022, pp. 202-226
Author(s): Leema N., Khanna H. Nehemiah, Elgin Christo V. R., Kannan A.

Artificial neural networks (ANN) are widely used for classification, and the training algorithm commonly used is the backpropagation (BP) algorithm. The major bottleneck in backpropagation neural network training is fixing appropriate values for the network parameters. These parameters are the initial weights, biases, activation function, number of hidden layers, number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term for the classification task. The objective of this work is to investigate the performance of 12 different BP algorithms and the impact of variations in network parameter values on neural network training. The algorithms were evaluated with different training and testing samples taken from three benchmark clinical datasets, namely the Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) datasets obtained from the University of California Irvine (UCI) machine learning repository.
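As a hedged illustration of the kind of parameter choices involved (not the chapter's actual experimental setup), the sketch below expresses the listed BP parameters as scikit-learn MLPClassifier arguments and evaluates them on the Wisconsin Breast Cancer data shipped with scikit-learn, which may differ from the exact UCI version used in the study; all concrete values are illustrative.

```python
# Sketch: one BP configuration on WBC-style data; each argument maps to a
# network parameter named in the abstract.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = make_pipeline(
    StandardScaler(),                # scale inputs before BP training
    MLPClassifier(
        hidden_layer_sizes=(10,),    # number of hidden layers / neurons per layer
        activation="logistic",       # activation function
        solver="sgd",                # gradient-descent BP variant
        learning_rate_init=0.01,     # learning rate
        momentum=0.9,                # momentum term
        max_iter=500,                # number of training epochs
        tol=1e-4,                    # minimum error (stopping tolerance)
        random_state=0,              # fixes the random initial weights and biases
    ),
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Comparing BP algorithms or parameter settings then amounts to repeating this fit over a grid of such configurations and training/testing splits and tabulating the resulting accuracies.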

