pruning method
Recently Published Documents


TOTAL DOCUMENTS
169 (FIVE YEARS: 57)

H-INDEX
13 (FIVE YEARS: 2)

Processes ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 44
Author(s):  
Yuan Liu ◽  
Takahiro Kawaguchi ◽  
Song Xu ◽  
Seiji Hashimoto

Recurrent Neural Networks (RNNs) have been widely applied in various fields. However, in real-world applications, most devices such as mobile phones have limited storage capacity for processing real-time information, so an over-parameterized model slows down the system and is unsuitable for deployment. In our proposed temperature control system, the RNN-based control model processes real-time temperature signals. When system resources are limited, the trained model must be compressed, with an acceptable loss of control performance, before it can be implemented on the actual controller. Inspired by layer-wise neuron pruning, in this paper we apply a nonlinear reconstruction error (NRE) guided layer-wise weight pruning method to the RNN-based temperature control system. The control system is built in MATLAB/Simulink. To compress the model and save memory on the temperature controller devices, we first validate the proposed reference-model (ref-model) guided RNN model for real-time online data processing on an actual temperature object; the corresponding experiments are implemented on a digital signal processor. We then verify the NRE-guided layer-wise weight pruning method on the well-trained temperature control model. Compared with the classical pruning method, the experimental results indicate that the control model pruned with NRE-guided layer-wise weight pruning effectively achieves high accuracy at the targeted network sparsity.
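The core idea of NRE-guided weight pruning can be illustrated with a minimal sketch: prune a single layer's smallest-magnitude weights to a target sparsity, then score the damage by the reconstruction error of the layer's output *after* the nonlinearity. This is an illustrative simplification, not the authors' implementation; `prune_layer_weights`, the `tanh` activation, and the toy shapes are all assumptions.

```python
import numpy as np

def prune_layer_weights(W, sparsity):
    """Zero out the smallest-magnitude weights of one layer to a target sparsity."""
    k = int(W.size * sparsity)                    # number of weights to remove
    if k == 0:
        return W.copy()
    thresh = np.sort(np.abs(W), axis=None)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def nonlinear_reconstruction_error(W, W_pruned, X, act=np.tanh):
    """NRE: mean squared gap between layer outputs measured after the nonlinearity."""
    return float(np.mean((act(X @ W) - act(X @ W_pruned)) ** 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))                      # one layer's weight matrix (toy)
X = rng.normal(size=(32, 16))                     # a batch of layer inputs
W_half = prune_layer_weights(W, sparsity=0.5)
err = nonlinear_reconstruction_error(W, W_half, X)
```

In a layer-wise scheme, such an error would be computed per layer to guide how aggressively each layer can be pruned before control performance degrades.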


2021 ◽  
pp. 1-15
Author(s):  
Mingyang Wang ◽  
Tianyi Sun ◽  
Kang Song ◽  
Shuang Li ◽  
Jing Jiang ◽  
...  

2021 ◽  
pp. 1-21
Author(s):  
Andrei C. Apostol ◽  
Maarten C. Stol ◽  
Patrick Forré

We propose a novel pruning method which uses the oscillations around 0, i.e., sign flips, that a weight has undergone during training to determine its saliency. Our method can perform pruning before the network has converged, requires little tuning effort thanks to good default values for its hyperparameters, and can directly target the level of sparsity desired by the user. Our experiments, performed on a variety of object classification architectures, show that it is competitive with existing methods and achieves state-of-the-art performance at sparsity levels of 99.6% and above for two of the three architectures tested. Moreover, we demonstrate that our method is compatible with quantization, another model compression technique. For reproducibility, we release our code at https://github.com/AndreiXYZ/flipout.
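The sign-flip saliency criterion can be sketched as follows: track each weight's sign across training steps, count the flips, and prune the weights that oscillate most. This is a hedged toy illustration of the idea, not the released flipout code; the function names and the tiny weight history are invented for the example.

```python
import numpy as np

def count_sign_flips(weight_history):
    """Count, per weight, how many sign changes occurred across training steps."""
    signs = np.sign(weight_history)                 # shape: (steps, n_weights)
    return np.sum(signs[1:] != signs[:-1], axis=0)

def flip_prune_mask(weight_history, sparsity):
    """Boolean keep-mask that prunes the weights with the most sign flips."""
    flips = count_sign_flips(weight_history)
    k = int(flips.size * sparsity)                  # number of weights to prune
    mask = np.ones(flips.size, dtype=bool)
    mask[np.argsort(flips)[::-1][:k]] = False       # drop the most-oscillating weights
    return mask

# Toy history: weight 0 oscillates around zero, weights 1 and 2 are stable.
history = np.array([[ 0.1, -0.4, 0.5],
                    [-0.2, -0.3, 0.6],
                    [ 0.1, -0.5, 0.7]])
mask = flip_prune_mask(history, sparsity=1 / 3)     # prunes the oscillating weight
```

Because flip counts accumulate from the start of training, the criterion is available well before convergence, which is what allows early pruning.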


2021 ◽  
Vol 464 ◽  
pp. 533-545
Author(s):  
Wenxiao Wang ◽  
Zhengxu Yu ◽  
Cong Fu ◽  
Deng Cai ◽  
Xiaofei He
Keyword(s):  

2021 ◽  
Author(s):  
Thiago Peixoto Leal ◽  
Vinicius C Furlan ◽  
Mateus Henrique Gouveia ◽  
Julia Maria Saraiva Duarte ◽  
Pablo AS Fonseca ◽  
...  

Genetic and omics analyses frequently require independent observations, which is not guaranteed in real datasets. When relatedness cannot be accounted for, the usual solution is to remove related individuals (or observations), reducing the available data. We developed NAToRA, a network-based relatedness-pruning method that minimizes dataset reduction while removing unwanted relationships. It uses the node-degree centrality metric to identify highly connected nodes (individuals) and implements heuristics that approximate the minimal reduction of a dataset, allowing its application to large datasets. NAToRA outperformed two popular methodologies (implemented in the software PLINK and KING), showing the best combination of effective relatedness pruning, removing all relatives while keeping the largest possible number of individuals in every dataset tested, with a similar or smaller reduction in genetic diversity. NAToRA is freely available, both as a standalone tool that can easily be incorporated into a pipeline and as a graphical web tool that visualizes the relatedness networks. NAToRA also accepts a variety of relationship metrics as input, which facilitates its use. We also present a genealogy simulator used for the different tests performed in the manuscript.
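The degree-centrality heuristic behind such relatedness pruning can be sketched in a few lines: build a graph whose edges are unwanted relationships, then greedily remove the highest-degree individual until no edges remain. This is a minimal sketch of the general idea, not NAToRA's actual heuristics; the function name and toy pedigree are assumptions.

```python
from collections import defaultdict

def prune_related(edges):
    """Greedily remove the highest-degree individual until no relationships remain.

    edges: list of (a, b) pairs of related individuals.
    Returns the set of removed individuals; everyone else is kept.
    """
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    removed = set()
    while True:
        node = max(adj, key=lambda n: len(adj[n]), default=None)
        if node is None or not adj[node]:           # no edges left
            break
        for neighbor in adj.pop(node):              # delete node and its edges
            adj[neighbor].discard(node)
        removed.add(node)
    return removed

# Toy dataset: A, B, C form a related trio; D and E are a related pair.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "E")]
removed = prune_related(edges)                      # e.g. two of the trio, one of the pair
```

Removing the most-connected node first tends to break many relationships at once, which is why degree centrality approximates the minimal number of removals.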


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6601
Author(s):  
Linsong Shao ◽  
Haorui Zuo ◽  
Jianlin Zhang ◽  
Zhiyong Xu ◽  
Jinzhen Yao ◽  
...  

Neural network pruning, an important method for reducing the computational complexity of deep models, is well suited to devices with limited resources. However, most current methods rely on information about the filter itself to prune the network and rarely explore the relationship between the feature maps and the filters. In this paper, two novel pruning methods are proposed. First, a new pruning method is proposed that reflects the importance of filters by exploring the information in their feature maps. Based on the premise that the more information a feature map contains, the more important it is, the information entropy of the feature maps is used to measure this information and thereby evaluate the importance of each filter in the current layer; normalization is then used to enable cross-layer comparison. As a result, the network structure is efficiently pruned while its performance is well preserved. Second, we propose a parallel pruning method that combines our pruning method above with the slimming pruning method, which achieves better results in terms of computational cost. Our methods perform better in terms of accuracy, parameters, and FLOPs than most advanced methods. On ImageNet, ResNet50 achieves 72.02% top-1 accuracy with merely 11.41 M parameters and 1.12 B FLOPs. On CIFAR10, DenseNet40 obtains 94.04% accuracy with only 0.38 M parameters and 110.72 M FLOPs, and our parallel pruning method reduces the parameters and FLOPs to just 0.37 M and 100.12 M, respectively, with little loss of accuracy.
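The entropy-based importance score described above can be sketched as: histogram each filter's feature-map activations, compute the Shannon entropy, and min-max normalize the per-filter scores so layers can be compared. This is an illustrative simplification, not the paper's implementation; the bin count, function names, and toy feature maps are assumptions.

```python
import numpy as np

def feature_map_entropy(fmap, bins=16):
    """Shannon entropy (bits) of a feature map's activation histogram."""
    hist, _ = np.histogram(fmap, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                   # drop empty bins before log
    return float(-np.sum(p * np.log2(p)))

def filter_scores(feature_maps):
    """Per-filter entropy, min-max normalized for cross-layer comparison."""
    scores = np.array([feature_map_entropy(fm) for fm in feature_maps])
    span = scores.max() - scores.min()
    return (scores - scores.min()) / span if span > 0 else np.zeros_like(scores)

rng = np.random.default_rng(0)
# Three varied feature maps plus one constant (uninformative) map.
maps = [rng.normal(size=(8, 8)) for _ in range(3)] + [np.zeros((8, 8))]
scores = filter_scores(maps)        # the constant map gets the lowest score
```

Filters whose feature maps carry little information (low entropy, like the constant map above) receive the lowest scores and become pruning candidates.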


Author(s):  
Achmad Naufal Wijaya Jofanda ◽  
Mohamad Yasin

Checkers is a board game played by two people in which the goal is to defeat the opponent by capturing all of the opponent's pieces or leaving the opponent unable to make a move. Modern technology allows checkers to be played on a computer or even a smartphone, and the application of artificial intelligence makes the game playable anywhere and anytime. Alpha-beta pruning is an optimization of the minimax algorithm that reduces the number of branches/nodes expanded, yielding better and faster move searches. In this study, an artificial-intelligence-based checkers game is developed using the alpha-beta pruning method. This research is expected to explain in detail how artificial intelligence works in a game. Alpha-beta pruning was chosen because it can find the best moves quickly and precisely. This study tested 10 respondents playing the game. The results show that the players' win rate was 60% at the easy level, 40% at the medium level, and 20% at the hard level. In addition, 80% of players found the game entertaining, while 20% felt indifferent.
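The alpha-beta optimization of minimax can be shown on an abstract game tree: the search keeps the best score each player can already guarantee (alpha for the maximizer, beta for the minimizer) and cuts off any branch that can no longer change the outcome. This is a generic textbook sketch over a toy tree, not the checkers engine from the study.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta cutoffs over an abstract game tree."""
    succ = children(state)
    if depth == 0 or not succ:                     # leaf or depth limit reached
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:                      # beta cutoff: min player avoids this branch
                break
        return value
    value = math.inf
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:                          # alpha cutoff: max player avoids this branch
            break
    return value

# Tiny two-ply tree: max picks a move, min replies; leaves carry heuristic scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, -math.inf, math.inf, True,
                 lambda s: tree.get(s, []), lambda s: leaves.get(s, 0))
```

Here the search never evaluates leaf `b2`: after branch `a` guarantees the maximizer a score of 3, the first reply in branch `b` (score 2) already makes that branch worthless, so it is cut off.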

