Network Compression
Recently Published Documents


TOTAL DOCUMENTS: 193 (FIVE YEARS: 134)
H-INDEX: 12 (FIVE YEARS: 6)

2022 ◽ Vol 18 (2) ◽ pp. 1-23
Author(s): Suraj Mishra ◽ Danny Z. Chen ◽ X. Sharon Hu

Compression is a standard procedure for making convolutional neural networks (CNNs) adhere to specific computing-resource constraints. However, searching for a compressed architecture typically involves a series of time-consuming training/validation experiments to find a good compromise between network size and accuracy. To address this, we propose an image complexity-guided network compression technique for biomedical image segmentation. Given any resource constraint, our framework uses data complexity and network architecture to quickly estimate a compressed model, without requiring network training. Specifically, we map dataset complexity to the accuracy degradation that compression causes in the target network. This mapping lets us predict the final accuracy for different network sizes from the computed dataset complexity, so one can choose a solution that meets both the network size and segmentation accuracy requirements. Finally, the mapping determines the layer-wise multiplicative factor applied to each convolutional layer to generate the compressed network. We conduct experiments on 5 datasets, using 3 commonly used CNN architectures for biomedical image segmentation as representative networks. Our framework is shown to be effective for generating compressed segmentation networks, retaining up to ≈95% of the full-sized networks' segmentation accuracy while using, on average, ≈32× fewer trainable weights.
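
To make the selection step concrete, here is a minimal Python sketch of the idea, assuming a hypothetical mapping predict_accuracy from dataset complexity and a channel-width factor to predicted accuracy; the functional form and the coefficient k are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def predict_accuracy(full_accuracy, complexity, width_factor, k=0.15):
    """Predict accuracy after scaling every conv layer's channel count by
    width_factor (0 < width_factor <= 1). Assumed (illustrative) model:
    degradation grows with dataset complexity and with compression."""
    degradation = k * complexity * (1.0 - width_factor)
    return full_accuracy * (1.0 - degradation)

def choose_width_factor(full_accuracy, complexity, min_accuracy,
                        candidates=np.linspace(0.05, 1.0, 20)):
    """Return the smallest layer-wise multiplicative factor (i.e., the most
    compressed network) whose predicted accuracy meets the requirement."""
    for w in sorted(candidates):
        if predict_accuracy(full_accuracy, complexity, w) >= min_accuracy:
            return w
    return 1.0  # no compression level satisfies the constraint

if __name__ == "__main__":
    w = choose_width_factor(full_accuracy=0.90, complexity=0.6,
                            min_accuracy=0.85)
    print(f"apply channel multiplier {w:.2f} to each conv layer")
```

No training is involved: once the complexity-to-degradation mapping is known, picking a network size is a cheap lookup over candidate width factors.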


2021
Author(s): Chunyun Chen ◽ Zhe Wang ◽ Xiaowei Chen ◽ Jie Lin ◽ Mohamed M. Sabry Aly

2021
Author(s): Junjie He ◽ Yinzhang Ding ◽ Ming Zhang ◽ Dongxiao Li
Keyword(s):

Electronics ◽ 2021 ◽ Vol 10 (23) ◽ pp. 2958
Author(s): Timotej Knez ◽ Octavian Machidon ◽ Veljko Pejović

Edge intelligence currently faces several important challenges, the chief one being that resource-constrained edge computing devices struggle to meet the high resource demands of deep learning. The most recent adaptive neural network compression techniques have demonstrated, in theory, the potential to facilitate flexible deployment of deep learning models in real-world applications. However, their actual suitability and performance in ubiquitous or edge computing applications have not yet been evaluated. In this context, our work aims to bridge the gap between the theoretical resource savings promised by such approaches and the requirements of a real-world mobile application, by introducing algorithms that dynamically adjust the compression rate of a neural network according to the continuously changing context in which the mobile computation takes place. Through an in-depth trace-based investigation, we confirm that our adaptation algorithms offer a scalable trade-off between inference accuracy and resource usage. We then implement our approach on real-world edge devices and, through a human activity recognition application, confirm that it provides efficient neural network compression adaptation in highly dynamic environments. The results of our experiment with 21 participants show that, compared to static network compression, our approach uses 2.18× less energy with only a 1.5% drop in average classification accuracy.
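
As an illustration of such context-driven adaptation, the following Python sketch picks the least compressed network whose estimated energy and latency fit the current context. The compression levels, their estimated costs, and the battery-based energy budget are illustrative assumptions, not the paper's measured values or its actual adaptation algorithms.

```python
# (compression_rate, est_accuracy, est_energy_mJ, est_latency_ms) -- toy values
LEVELS = [
    (0.00, 0.93, 40.0, 90.0),  # full-sized network
    (0.50, 0.92, 22.0, 50.0),
    (0.75, 0.90, 12.0, 30.0),
    (0.90, 0.85,  6.0, 15.0),
]

def adapt(battery_frac, deadline_ms):
    """Return the compression rate for the current context: the least
    compressed level that meets the latency deadline and an energy budget
    that shrinks as the battery drains."""
    energy_budget = 40.0 * battery_frac  # tighter budget on low battery
    for rate, _acc, energy, latency in LEVELS:  # ordered by accuracy
        if energy <= energy_budget and latency <= deadline_ms:
            return rate
    return LEVELS[-1][0]  # fall back to the smallest network

if __name__ == "__main__":
    print(adapt(battery_frac=1.0, deadline_ms=100.0))  # -> 0.0 (full model)
    print(adapt(battery_frac=0.2, deadline_ms=40.0))   # -> 0.9
```

Re-evaluating this choice whenever the context changes (battery level, deadline, connectivity) yields the kind of scalable accuracy/resource trade-off described above.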


Informatics ◽ 2021 ◽ Vol 8 (4) ◽ pp. 77
Author(s): Ali Alqahtani ◽ Xianghua Xie ◽ Mark W. Jones

Deep networks often possess a vast number of parameters, and their significant redundancy in parameterization is a widely recognized property. This redundancy presents significant challenges and restricts many deep learning applications, shifting the focus toward reducing model complexity while maintaining strong performance. In this paper, we present an overview of popular methods and review recent work on compressing and accelerating deep neural networks. We consider not only pruning methods but also quantization and low-rank factorization methods. The review also clarifies these major concepts and highlights their characteristics, advantages, and shortcomings.
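
For intuition, the following Python sketch applies toy versions of the three surveyed families (magnitude pruning, uniform 8-bit quantization, and low-rank factorization via SVD) to a random weight matrix; real methods operate on trained networks and are usually followed by fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)

# 1) Magnitude pruning: zero out the 90% of weights with smallest |w|.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Uniform 8-bit quantization: map floats to int8 with a single scale.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale  # dequantized approximation

# 3) Low-rank factorization: keep only the top-k singular components of W.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
W_lowrank = (U[:, :k] * S[:k]) @ Vt[:k]  # rank-k approximation

print("pruned nonzeros:", np.count_nonzero(W_pruned), "/", W.size)
print("quantization MSE:", float(np.mean((W - W_deq) ** 2)))
print("rank-8 relative error:",
      float(np.linalg.norm(W - W_lowrank) / np.linalg.norm(W)))
```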


2021
Author(s): Yingling Li ◽ Zhipeng Li ◽ Tianxing Zhang ◽ Peng Zhou ◽ Siyin Feng ◽ ...

2021
Author(s): Jianming Ye ◽ Shiliang Zhang ◽ Jingdong Wang
