Post-training deep neural network pruning via layer-wise calibration

Author(s): Ivan Lazarevich ◽ Alexander Kozlov ◽ Nikita Malinin

Electronics ◽ 2021 ◽ Vol. 10 (21) ◽ pp. 2687
Author(s): Eun-Hun Lee ◽ Hyeoncheol Kim

A key advantage of deep neural networks is that, by stacking layers deeply, the upper layers can capture high-level features of the data from the information acquired by the lower layers. Because it is difficult to interpret what knowledge a neural network has learned, various studies on explaining neural networks have emerged. However, these studies generate local explanations of single instances rather than a generalized global interpretation of the neural network model itself. To overcome these drawbacks, we propose a global interpretation method for deep neural networks based on features of the model. We first analyze the relationship between the input and hidden layers to represent the model's high-level features, and then interpret the decision-making process of the network through those features. In addition, we apply network pruning techniques to make the explanations concise, and we analyze the effect of layer complexity on interpretability. Experiments on three datasets show that our approach generates global explanations of deep neural network models with high accuracy and fidelity.
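The paper's own implementation is not reproduced in this listing; the following is a minimal sketch, assuming PyTorch, of the two ingredients the abstract describes: reading out hidden-layer activations as the model's high-level features (here via a forward hook and a simple input/hidden-layer correlation) and magnitude-based network pruning to obtain a more concise model to explain. The network architecture, the correlation readout, and the 50% pruning ratio are illustrative assumptions, not the authors' settings.

```python
# A minimal sketch (not the authors' released code) of the two steps the
# abstract names: (1) capturing hidden-layer activations as "high-level
# features" that link inputs to decisions, and (2) magnitude-based weight
# pruning to simplify the model before interpreting it.
# Layer sizes and the pruning ratio below are assumed, illustrative values.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small fully connected network standing in for the models in the paper.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 3),
)

# (1) Capture hidden activations with a forward hook; these serve as the
# high-level features whose relation to the inputs is then analyzed.
features = {}

def save_activation(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

model[2].register_forward_hook(save_activation("hidden"))

x = torch.randn(128, 20)        # a batch of inputs
logits = model(x)
hidden = features["hidden"]     # shape: (128, 32)

# A simple input/hidden-layer relationship: the correlation between each
# input dimension and each hidden unit across the batch.
x_c = (x - x.mean(0)) / (x.std(0) + 1e-8)
h_c = (hidden - hidden.mean(0)) / (hidden.std(0) + 1e-8)
corr = x_c.T @ h_c / len(x)     # shape: (20, 32)

# (2) Prune 50% of the smallest-magnitude weights in each linear layer to
# obtain a more concise model to explain (the ratio is an assumed value).
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

sparsity = (model[0].weight == 0).float().mean().item()
print(f"layer-0 weight sparsity after pruning: {sparsity:.0%}")
```

A batchwise correlation is only one simple way to quantify an input/hidden-layer relationship; a different attribution or surrogate method could be swapped into the same scaffold without changing the pruning step.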


Author(s): Li Li ◽ Zhu Li ◽ Yue Li ◽ Birendra Kathariya ◽ Shuvra Bhattacharyya

2021 ◽ pp. 107899
Author(s): Seul-Ki Yeom ◽ Philipp Seegerer ◽ Sebastian Lapuschkin ◽ Alexander Binder ◽ Simon Wiedemann ◽ ...
