Network Pruning
Recently Published Documents

TOTAL DOCUMENTS: 166 (five years: 117)
H-INDEX: 11 (five years: 4)

2021 ◽  
Vol 144 ◽  
pp. 614-626
Author(s):  
Qingbei Guo ◽  
Xiao-Jun Wu ◽  
Josef Kittler ◽  
Zhiquan Feng

2021 ◽  
Author(s):  
Sho Enomoto ◽  
Yutaka Sasaki ◽  
Naoto Yorino ◽  
Yoshifumi Zoka ◽  
Mumbere Samuel Kihembo

Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2687
Author(s):  
Eun-Hun Lee ◽  
Hyeoncheol Kim

A significant advantage of deep neural networks is that, by stacking layers deeply, the upper layers can capture high-level features of the data from information acquired in the lower layers. Since it is challenging to interpret what knowledge a neural network has learned, various studies on explaining neural networks have emerged. However, these studies generate local explanations of single instances rather than a generalized global interpretation of the neural network model itself. To overcome these drawbacks of previous approaches, we propose a global interpretation method for deep neural networks based on features of the model. We first analyzed the relationship between the input and hidden layers to represent the high-level features of the model, then interpreted the decision-making process of the neural network through those high-level features. In addition, we applied network pruning techniques to produce concise explanations and analyzed the effect of layer complexity on interpretability. We present experiments on the proposed approach using three different datasets and show that it can generate global explanations of deep neural network models with high accuracy and fidelity.
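As a rough sketch of the pipeline this abstract describes (capturing high-level features from a hidden layer, then pruning to keep the explanation concise), the following PyTorch snippet is illustrative only; the toy model, the hook, and the 50% pruning ratio are assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical toy classifier standing in for the model to be interpreted.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),                          # activations here serve as "high-level features"
    nn.Linear(128, 10),
)

features = {}

def save_features(module, inputs, output):
    # Forward hook: record hidden-layer activations to analyze the
    # relationship between the input and hidden layers.
    features["hidden"] = output.detach()

model[1].register_forward_hook(save_features)

x = torch.randn(32, 784)                # stand-in input batch
model(x)
print(features["hidden"].shape)         # torch.Size([32, 128])

# Prune 50% of the smallest-magnitude input weights so each hidden unit
# keeps only its strongest connections, yielding a more concise explanation.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
kept = model[0].weight_mask.sum(dim=1)  # surviving input connections per hidden unit
print(kept[:5])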


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7074
Author(s):  
Chao-Ching Ho ◽  
Wei-Chi Chou ◽  
Eugene Su

This research aims to detect defects on fabric surfaces and to optimize the deep learning model used for detection. Since image processing alone cannot effectively detect defects on fabrics with complex backgrounds, this research uses deep learning to identify them. However, current network architectures are designed mainly for natural images rather than defect detection; as a result, the architecture used for defect detection contains redundant neurons, which slows inference. To solve these problems, we propose network pruning with a Bayesian optimization algorithm that automatically tunes the pruning parameters, after which the pruned network is retrained. During fabric defect detection, the pruned network predicts a defect feature map, and the image processing pipeline proposed in this research makes the final judgment. The proposed method is verified on two self-made datasets and two public datasets. After network optimization, the Intersection over Union (IoU) on the four datasets drops by 1.26%, 1.13%, 1.21%, and 2.15% compared with the original network model, but the inference time is reduced to 20.84%, 40.52%, 23.02%, and 23.33% of the original model's on a GeForce 2080 Ti, and to 17.56%, 37.03%, 19.67%, and 22.26% on the embedded AGX Xavier system. After the image processing stage, the accuracy on the four datasets reaches 92.75%, 94.87%, 95.6%, and 81.82%, respectively. YOLOv4 was also trained on the fabric defects, and the results show that this model is not well suited to detecting long, narrow defects.
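A minimal sketch of the tuning loop described above, assuming scikit-optimize's gp_minimize as the Bayesian optimizer; the synthetic iou_drop and latency functions and the cost weighting are stand-ins for the paper's actual prune-retrain-evaluate pipeline.

from skopt import gp_minimize
from skopt.space import Real

# Synthetic stand-ins: in the real pipeline these would prune the detection
# network at the given sparsity, retrain it, and measure IoU and latency.
def iou_drop(sparsity):
    return 0.05 * sparsity ** 2          # accuracy degrades as pruning grows

def latency(sparsity):
    return 1.0 - 0.8 * sparsity          # inference time shrinks as pruning grows

def objective(params):
    s = params[0]
    # Scalar cost trading accuracy loss against inference time;
    # the 0.1 weighting is an assumption for illustration.
    return iou_drop(s) + 0.1 * latency(s)

result = gp_minimize(
    objective,
    dimensions=[Real(0.1, 0.9, name="sparsity")],  # pruning-ratio search range
    n_calls=25,                                    # Bayesian-optimization evaluations
    random_state=0,
)
print(f"best sparsity: {result.x[0]:.3f}, cost: {result.fun:.4f}")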


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6601
Author(s):  
Linsong Shao ◽  
Haorui Zuo ◽  
Jianlin Zhang ◽  
Zhiyong Xu ◽  
Jinzhen Yao ◽  
...  

Neural network pruning, an important method for reducing the computational complexity of deep models, is well suited to devices with limited resources. However, most current methods prune the network using some kind of information about the filters themselves, rarely exploring the relationship between the feature maps and the filters. In this paper, two novel pruning methods are proposed. First, a new pruning method reflects the importance of filters by exploiting the information in the feature maps. Based on the premise that a feature map carrying more information is more important, the information entropy of each feature map is used to measure its information content and thus to evaluate the importance of the corresponding filter in the current layer; normalization then enables cross-layer comparison. With this method, the network structure is efficiently pruned while its performance is well preserved. Second, we propose a parallel pruning method that combines the method above with the slimming pruning method, which achieves better results in terms of computational cost. Our methods outperform most state-of-the-art methods in terms of accuracy, parameters, and FLOPs. On ImageNet, ResNet50 achieves 72.02% top-1 accuracy with merely 11.41M parameters and 1.12B FLOPs. On CIFAR10, DenseNet40 achieves 94.04% accuracy with only 0.38M parameters and 110.72M FLOPs, and our parallel pruning method reduces these to just 0.37M parameters and 100.12M FLOPs with little loss of accuracy.
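The entropy-scoring idea can be sketched as follows; the histogram-based entropy estimate, the min-max normalization, and the 30% pruning ratio are illustrative assumptions, not the authors' exact formulation.

import torch

def channel_entropy(fmap, bins=64):
    # Information entropy of each channel in an (N, C, H, W) feature map:
    # channels whose activations carry more information (higher entropy)
    # are treated as more important.
    n, c = fmap.shape[:2]
    scores = torch.zeros(c)
    for j in range(c):
        x = fmap[:, j].flatten()
        hist = torch.histc(x, bins=bins, min=float(x.min()), max=float(x.max()) + 1e-6)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before taking the log
        scores[j] = -(p * p.log()).sum()  # Shannon entropy of channel j
    return scores

def normalize(scores):
    # Min-max normalization so scores from different layers are comparable;
    # the exact normalization scheme is an assumption for illustration.
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

# Stand-in feature maps from two layers of different widths.
layer1 = torch.randn(8, 16, 32, 32)
layer2 = torch.randn(8, 32, 16, 16)
all_scores = torch.cat([normalize(channel_entropy(f)) for f in (layer1, layer2)])

# Filters with the lowest normalized entropy are the pruning candidates.
prune_idx = all_scores.argsort()[: int(0.3 * len(all_scores))]
print(prune_idx)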


2021 ◽  
Author(s):  
Ivan Lazarevich ◽  
Alexander Kozlov ◽  
Nikita Malinin
