Learning MRI k-Space Subsampling Pattern Using Progressive Weight Pruning

Author(s):  
Kai Xuan ◽  
Shanhui Sun ◽  
Zhong Xue ◽  
Qian Wang ◽  
Shu Liao
2020 ◽  
Vol 34 (04) ◽  
pp. 6623-6630
Author(s):  
Li Yang ◽  
Zhezhi He ◽  
Deliang Fan

Deep convolutional neural networks (DNNs) have demonstrated phenomenal success and are widely used in many computer vision tasks. However, their enormous model size and high computational complexity prohibit wide deployment on resource-limited embedded systems such as FPGAs and mGPUs. As the two most widely adopted model compression techniques, weight pruning and quantization compress a DNN model by introducing weight sparsity (i.e., forcing a subset of the weights to zero) and by quantizing weights into limited bit-width values, respectively. Although there are works attempting to combine weight pruning and quantization, we still observe disharmony between the two, especially when more aggressive compression schemes (e.g., structured pruning and low bit-width quantization) are used. In this work, taking an FPGA as the test computing platform and processing elements (PEs) as the basic parallel computing units, we first propose a PE-wise structured pruning scheme, which introduces weight sparsity while accounting for the architecture of the PEs. In addition, we integrate it with an optimized weight ternarization approach that quantizes weights into ternary values ({-1, 0, +1}), thus converting the dominant convolution operations in the DNN from multiplication-and-accumulation (MAC) to addition only, and compressing the original model (from 32-bit floating point to 2-bit ternary representation) by at least 16 times. We then investigate and solve the coexistence issue between PE-wise structured pruning and ternarization by proposing a Weight Penalty Clipping (WPC) technique with a self-adapting threshold. Our experiments show that the fusion of the proposed techniques achieves a state-of-the-art ∼21× PE-wise structured compression rate with merely 1.74%/0.94% (top-1/top-5) accuracy degradation for ResNet-18 on the ImageNet dataset.
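The two compression steps described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's method: the PE band layout, the L1-norm group ranking, and the ternarization threshold heuristic (0.7 × mean |W|) are common conventions chosen here for clarity, and the function names are hypothetical.

```python
import numpy as np

def pe_wise_structured_prune(W, n_pe=4, sparsity=0.5):
    """Sketch of PE-wise structured pruning (hypothetical layout):
    each processing element (PE) owns a contiguous band of output
    channels, and within each band whole input-channel columns with
    the smallest L1 norm are zeroed, so each PE sees structured,
    hardware-friendly sparsity rather than scattered zeros."""
    out_ch, in_ch = W.shape
    assert out_ch % n_pe == 0
    Wp = W.copy()
    band = out_ch // n_pe  # rows assigned to one PE
    k = int(sparsity * in_ch)  # columns to drop per PE band
    for pe in range(n_pe):
        block = Wp[pe * band:(pe + 1) * band]  # view into Wp
        norms = np.abs(block).sum(axis=0)      # L1 norm per column
        drop = np.argsort(norms)[:k]           # weakest columns
        block[:, drop] = 0.0
    return Wp

def ternarize(W):
    """Quantize weights to {-alpha, 0, +alpha} with the common
    threshold heuristic delta = 0.7 * mean(|W|) (an assumption, not
    the paper's optimized ternarization). alpha is the mean magnitude
    of the surviving weights, so the ternary weights need only a sign
    and a shared scale: convolution becomes addition-only."""
    delta = 0.7 * np.abs(W).mean()
    mask = (np.abs(W) > delta).astype(W.dtype)
    alpha = (np.abs(W) * mask).sum() / max(mask.sum(), 1.0)
    return alpha * np.sign(W) * mask
```

In this sketch the two steps compose as `ternarize(pe_wise_structured_prune(W))`; the disharmony the abstract refers to arises because pruning already removed the small-magnitude weights the ternarization threshold relies on, which is what the WPC technique addresses.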


2021 ◽  
Author(s):  
Yael Ben-Guigui ◽  
Jacob Goldberger ◽  
Tammy Riklin-Raviv

2021 ◽  
Author(s):  
Geng Yuan ◽  
Payman Behnam ◽  
Yuxuan Cai ◽  
Ali Shafiee ◽  
Jingyan Fu ◽  
...  

2018 ◽  
Vol 5 (1) ◽  
pp. 136-146
Author(s):  
Supriadi Hartanto ◽  
Irsal ◽  
Asil Barus

This research was conducted to determine the growth of red sugarcane seedlings under pruning and different watering frequencies. The research was conducted in the research field of the Faculty of Agriculture, University of Sumatera Utara (± 25 m asl), from June to October 2017, using a factorial randomized block design with three replications. The first factor was pruning (control, 1, and 2 months per plant) and the second factor was watering frequency (every 1, 3, 5, and 7 days per plant). The variables observed were plant height, number of leaves, stem diameter, number of tillers, shoot wet weight, root wet weight, and shoot-root ratio. The results showed that the watering frequency treatment had a significant effect on the number of leaves (4, 6, 8, 10, and 12 weeks after planting), stem diameter (4, 6, 8, 10, and 12 weeks after planting), shoot wet weight, and root wet weight. The pruning treatment had no significant effect on any variable observed. The interaction of the two factors also had no significant effect on any variable observed.
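The experimental layout described above can be enumerated in a few lines. This is a sketch of the design only, under the assumptions stated in the abstract (3 pruning levels × 4 watering frequencies, replicated in 3 blocks); the field names are hypothetical labels, not the paper's notation.

```python
from itertools import product

# Factorial randomized block design: every pruning x watering
# combination appears once in each of the three blocks.
pruning = ["control", "1 month", "2 months"]
watering_every_n_days = [1, 3, 5, 7]
blocks = [1, 2, 3]

plots = [
    {"block": b, "pruning": p, "watering_every_n_days": w}
    for b, p, w in product(blocks, pruning, watering_every_n_days)
]
print(len(plots))  # 3 blocks x 3 pruning levels x 4 frequencies = 36 plots
```

Each of the 36 plots would then be measured for the listed response variables, and a two-way ANOVA (with block as a blocking factor) tests the main effects and the pruning × watering interaction.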


2021 ◽  
Author(s):  
Tianyun Zhang ◽  
Xiaolong Ma ◽  
Zheng Zhan ◽  
Shanglin Zhou ◽  
Caiwen Ding ◽  
...  
