pruning methods
Recently Published Documents


TOTAL DOCUMENTS: 108 (FIVE YEARS: 33)
H-INDEX: 10 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Kenneth J Pope ◽  
Trent W Lewis ◽  
Sean P Fitzgibbon ◽  
Azin S Janani ◽  
Tyler S Grummett ◽  
...  

Objective: In publications on the electroencephalographic (EEG) features of psychoses and other disorders, various methods are utilised to diminish electromyogram (EMG) contamination. The extent of EMG contamination remaining after these methods are applied has not been recognised. Here, we seek to emphasise the extent of residual EMG contamination of EEG. Methods: We compared scalp electrical recordings, after applying different EMG-pruning methods, with recordings of EMG-free data from 6 fully-paralysed healthy subjects. We calculated the ratio of the power of pruned, normal scalp electrical recordings in the 6 subjects to the power of unpruned recordings made in the same subjects while paralysed, and produced contamination graphs for the different pruning methods. Results: EMG contamination increasingly exceeds EEG signals as frequencies rise above 25 Hz and with distance from the vertex. In contrast, Laplacian signals are spared in central scalp areas, even up to 100 Hz. Conclusion: Given the probable EMG contamination of EEG in psychiatric and other studies, few findings on beta- or gamma-frequency power can be relied upon. Given the effectiveness of current methods of EEG de-contamination, investigators should be able to re-analyse recorded data, re-evaluate conclusions drawn from high-frequency EEG data, and be aware of the limitations of these methods.
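
As a rough illustration of the power-ratio comparison described above (a minimal sketch, not the study's actual pipeline; the sampling rate, window length, and synthetic signals are assumptions):

```python
# Hypothetical sketch: contamination ratio between a pruned scalp recording
# and an EMG-free (paralysed) reference, per frequency bin.
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (assumed)

def contamination_ratio(pruned, paralysed, fs=fs):
    """Ratio of pruned-recording power to EMG-free reference power.

    Values near 1 suggest the pruned signal is close to genuine EEG;
    values well above 1 suggest residual EMG contamination.
    """
    f, p_pruned = welch(pruned, fs=fs, nperseg=fs * 2)
    _, p_ref = welch(paralysed, fs=fs, nperseg=fs * 2)
    return f, p_pruned / p_ref

# Synthetic stand-ins for real recordings (60 s each).
rng = np.random.default_rng(0)
pruned = rng.normal(size=fs * 60)
paralysed = rng.normal(size=fs * 60)
f, ratio = contamination_ratio(pruned, paralysed)
print(ratio[f > 25].mean())  # average residual contamination above 25 Hz
```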


Informatics ◽  
2021 ◽  
Vol 8 (4) ◽  
pp. 77
Author(s):  
Ali Alqahtani ◽  
Xianghua Xie ◽  
Mark W. Jones

Deep networks often possess a vast number of parameters, and their significant redundancy in parameterization is now a widely recognized property. This redundancy presents significant challenges and restricts many deep learning applications, shifting the focus toward reducing model complexity while maintaining strong performance. In this paper, we present an overview of popular methods and review recent work on compressing and accelerating deep neural networks. We consider not only pruning methods but also quantization and low-rank factorization methods. This review also aims to clarify these major concepts and to highlight their characteristics, advantages, and shortcomings.
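
For readers new to these concepts, a toy sketch of the three compression families on a single weight matrix may help; the sparsity level, bit width, and rank below are arbitrary illustrative choices, not recommendations from the review:

```python
# Toy illustration of pruning, quantization, and low-rank factorization
# applied to one weight matrix with NumPy.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)

# Pruning: zero out the smallest-magnitude 90% of weights.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Quantization: round weights to 8-bit uniform levels.
scale = np.abs(W).max() / 127
W_quant = np.round(W / scale) * scale

# Low-rank factorization: keep the top-8 singular values of W.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :8] * s[:8]) @ Vt[:8]

# Relative reconstruction error of each approximation.
for name, approx in [("pruned", W_pruned), ("quantized", W_quant), ("rank-8", W_lowrank)]:
    print(name, np.linalg.norm(W - approx) / np.linalg.norm(W))
```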


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6601
Author(s):  
Linsong Shao ◽  
Haorui Zuo ◽  
Jianlin Zhang ◽  
Zhiyong Xu ◽  
Jinzhen Yao ◽  
...  

Neural network pruning, an important method for reducing the computational complexity of deep models, is well suited to devices with limited resources. However, most current methods rely on some property of the filters themselves to prune the network, rarely exploring the relationship between the feature maps and the filters. In this paper, two novel pruning methods are proposed. First, a new pruning method is proposed that scores the importance of filters by exploring the information in the feature maps. Based on the premise that the more information a feature map contains, the more important it is, the information entropy of the feature maps is used to measure this information and to evaluate the importance of each filter in the current layer. Normalization is then applied to enable cross-layer comparison. As a result, the network structure is efficiently pruned while its performance is well preserved. Second, we propose a parallel pruning method that combines our entropy-based method with the slimming pruning method, which gives better results in terms of computational cost. Our methods outperform most advanced methods in terms of accuracy, parameters, and FLOPs. On ImageNet, ResNet50 achieves 72.02% top-1 accuracy with merely 11.41M parameters and 1.12B FLOPs. On CIFAR10, DenseNet40 obtains 94.04% accuracy with only 0.38M parameters and 110.72M FLOPs, and our parallel pruning method reduces these further to 0.37M parameters and 100.12M FLOPs with little loss of accuracy.
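
A minimal sketch of the entropy-scoring idea, assuming PyTorch; the histogram binning, batch handling, and min-max normalization are my assumptions rather than the authors' exact procedure:

```python
# Score each filter by the information entropy of its feature map, then
# normalize per layer so scores are comparable across layers.
import torch

def filter_entropy(feature_maps: torch.Tensor, bins: int = 32) -> torch.Tensor:
    """feature_maps: (batch, channels, H, W) activations of one layer.
    Returns one entropy score per channel/filter."""
    b, c, h, w = feature_maps.shape
    flat = feature_maps.permute(1, 0, 2, 3).reshape(c, -1)
    scores = []
    for ch in flat:
        hist = torch.histc(ch, bins=bins)   # empirical activation histogram
        p = hist / hist.sum()
        p = p[p > 0]
        scores.append(-(p * p.log()).sum()) # Shannon entropy of the histogram
    return torch.stack(scores)

def normalized_importance(feature_maps):
    e = filter_entropy(feature_maps)
    # Min-max normalization within the layer enables cross-layer comparison.
    return (e - e.min()) / (e.max() - e.min() + 1e-12)

# Filters with the lowest normalized entropy are candidates for pruning.
```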


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
S. Lamptey ◽  
E. Koomson

Tomato is one of the most widely consumed and produced vegetables in Ghana. The low yield of tomatoes in Ghana has resulted in huge imports of the produce from neighboring countries. Good agronomic practices are among the key strategies for increasing the yield of horticultural produce. This study evaluates the effect of staking and pruning on tomato fruit yield, quality, and cost of production. To achieve this, a field experiment was conducted to investigate the effect of staking and pruning methods on fruit yield and profitability of tomato (Solanum lycopersicum L.) produced in the northern region of Ghana. Treatments were applied in a randomized complete block design with three replications. Treatments were no pruning + no staking (control), single pole staking (SPS), wire trellis (WT), one-stem pruning (1SP), one-stem pruning + single pole staking (1SP + SPS), one-stem pruning + wire trellis (1SP + WT), two-stem pruning (2SP), two-stem pruning + single pole staking (2SP + SPS), and two-stem pruning + wire trellis (2SP + WT). Results showed that 2SP + WT increased fruit diameter, fruit length, and marketable fruit weight by 52%, 32%, and 69%, respectively, compared to the control. The maximum number and weight of marketable fruits obtained from 2SP + WT increased total fruit yield by 76% compared to the control. Overall, the performance of the treatments in terms of yield followed the order 2SP + WT > 1SP + WT > SPS > WT > 2SP + SPS > 1SP > 2SP > control. Although 2SP + WT increased production cost by 42%, it also increased yield by 69%, resulting in an 83% higher net profit than the control. Thus, 2SP + WT could be tested on-farm for possible adoption to increase tomato yield, quality, and profit.


2021 ◽  
Author(s):  
Kuo-Liang Chung ◽  
Yu-Lun Chang

Setting a fixed pruning rate and/or a specified threshold for pruning filters in convolutional layers has been widely used to reduce the number of parameters in convolutional neural network (CNN) models. However, this fails to fully prune redundant filters, because the number of redundant filters varies from layer to layer. To overcome this disadvantage, we propose a new backward filter pruning algorithm using a sorted bipartite graph- and binary search-based (SBGBS-based) clustering and decreasing pruning rate (DPR) approach. We first represent each filter of the last layer by a bipartite graph K_{1,n}, with one root-mean vertex and one n-weight set, where n denotes the number of weights in the filter. Next, according to the accuracy loss tolerance, an SBGBS-based clustering method is used to partition all filters into clusters that are as large as possible. Then, for each cluster, we retain the filter whose bipartite graph has the median root mean among the root means in the cluster, and discard the other filters in the same cluster. Following the DPR approach, we repeat the above SBGBS-based filter pruning on each preceding layer until all layers are processed. Based on the CIFAR-10 and MNIST datasets, the proposed filter pruning algorithm has been deployed in VGG-16, AlexNet, LeNet, and ResNet. With similar accuracy, thorough experimental results demonstrate the substantial reductions in parameters and floating-point operations achieved by our filter pruning algorithm relative to existing filter pruning methods.
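
One possible reading of the clustering step, as a hedged sketch: filters are summarized by the root mean of their weights, sorted, and grouped by binary search so that each cluster stays within a tolerance; the tolerance value and the median-selection details below are assumptions, not the authors' exact algorithm:

```python
# Sketch of SBGBS-like clustering: sort filters by root mean, form clusters
# with binary search, keep the median filter of each cluster.
import numpy as np

def root_mean(filt: np.ndarray) -> float:
    return float(np.sqrt(np.mean(filt ** 2)))

def sbgbs_like_cluster(filters, tol):
    """filters: list of weight arrays for one layer. Returns indices to keep."""
    rms = np.array([root_mean(f) for f in filters])
    order = np.argsort(rms)
    sorted_rms = rms[order]
    keep, start = [], 0
    while start < len(order):
        # Binary search for the furthest filter within `tol` of the cluster head.
        end = np.searchsorted(sorted_rms, sorted_rms[start] + tol, side="right")
        cluster = order[start:end]
        keep.append(int(cluster[len(cluster) // 2]))  # retain the median root mean
        start = end
    return sorted(keep)

rng = np.random.default_rng(0)
layer_filters = [rng.normal(size=(3, 3, 16)) for _ in range(64)]
print(sbgbs_like_cluster(layer_filters, tol=0.01))
```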


Author(s):  
Edward Raff

K-Means++ and its distributed variant K-Means|| have become de facto tools for selecting the initial seeds of K-means. While alternatives have been developed, the effectiveness, ease of implementation, and theoretical grounding of the K-means++ and || methods have made them difficult to "best" from a holistic perspective. We focus on using triangle inequality based pruning methods to accelerate both of these algorithms, yielding comparable or better run-time without sacrificing any of the benefits of these approaches. For both algorithms we are able to reduce distance computations by over 500×. This results in up to a 17× speedup in run-time for K-means++ and a 551× speedup for K-means||. We achieve this with simple, but carefully chosen, modifications to known techniques, making it easy to integrate our approach into existing implementations of these algorithms.
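
A hedged sketch of the kind of triangle-inequality pruning involved, applied to K-means++ seeding (my reconstruction, not the authors' exact algorithm): each point caches the distance to its closest seed, and whenever d(closest seed, new seed) >= 2 × cached distance, the triangle inequality guarantees the new seed cannot be closer, so the point-to-seed distance need not be computed:

```python
# K-means++ seeding with an Elkan-style triangle-inequality bound:
# d(x, new) >= d(c, new) - d(x, c) >= 2*d(x, c) - d(x, c) = d(x, c),
# where c is the seed currently closest to x.
import numpy as np

def kmeanspp_pruned(X, k, rng=np.random.default_rng(0)):
    n = X.shape[0]
    seeds = [rng.integers(n)]
    closest = np.zeros(n, dtype=int)                  # index into `seeds`
    d_best = np.linalg.norm(X - X[seeds[0]], axis=1)  # cached closest distances
    skipped = 0
    for _ in range(1, k):
        # D^2 sampling, as in standard K-means++.
        new = rng.choice(n, p=d_best ** 2 / np.sum(d_best ** 2))
        seeds.append(new)
        d_seed = np.linalg.norm(X[seeds] - X[new], axis=1)  # seed-to-seed distances
        for i in range(n):
            if d_seed[closest[i]] >= 2 * d_best[i]:
                skipped += 1          # pruned: the new seed cannot be closer
                continue
            d = np.linalg.norm(X[i] - X[new])
            if d < d_best[i]:
                d_best[i], closest[i] = d, len(seeds) - 1
    return X[seeds], skipped

X = np.random.default_rng(1).normal(size=(2000, 8))
centers, skipped = kmeanspp_pruned(X, k=16)
print("distance computations skipped:", skipped)
```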


2021 ◽  
Vol 35 (3) ◽  
pp. 269-275
Author(s):  
Assinapol Ndereyimana ◽  
Bancy Waithila Waweru ◽  
Boniface Kagiraneza ◽  
Arstide Nshuti Niyokuri ◽  
Placide Rukundo ◽  
...  

This study was carried out to determine the effect of vine and fruit pruning on watermelon (Citrullus lanatus) yield. Five pruning methods (P1: no pruning; P2: pruning to four vines with two fruits per vine; P3: pruning to four vines with one fruit per vine; P4: pruning to three vines with two fruits per vine; P5: pruning to three vines with one fruit per vine) were evaluated on two watermelon cultivars, ‘Sugar baby’ and ‘Julie F1’, under a factorial randomized complete block design with three replications. Investigations were carried out in seasons 2017A (short rains) and 2017B (long rains) at the Karama and Rubona experimental sites of the Rwanda Agriculture and Animal Resources Development Board. The results indicated significant differences among the cultivars and pruning methods tested during both seasons and at both sites. Generally, all studied parameters recorded higher values during season 2017B than in season 2017A at the Rubona site. A similar trend was recorded at the Karama site, except that fruit yield per plant and per hectare for plants pruned to three vines with one fruit was lower in season 2017B than in season 2017A. The highest number of fruits per plant, fruit weight, and fruit yield per plant and per hectare were recorded in ‘Julie F1’ compared to ‘Sugar baby’ at both sites and in both seasons. Higher fruit weight was obtained when both cultivars were pruned to three or four vines with one fruit per vine. A higher number of fruits per plant and higher fruit yield per plant were observed under pruning to four vines with two fruits per vine at the Rubona site, while at the Karama site, higher fruit yield per plant was recorded under pruning to three vines with one or two fruits per vine and pruning to four vines with two fruits per vine. A similar trend was observed in fruit yield per hectare. Based on the results of the current study, cultivation of the hybrid ‘Julie F1’ with pruning to three vines and one fruit per vine is recommended for optimum watermelon yield with large fruits.


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253241
Author(s):  
Hyun Dong Lee ◽  
Seongmin Lee ◽  
U. Kang

How can we effectively regularize BERT? Although BERT proves its effectiveness in various NLP tasks, it often overfits when there are only a small number of training instances. A promising direction for regularizing BERT is pruning its attention heads using a proxy score for head importance. However, these methods are usually suboptimal, since they resort to arbitrarily determined numbers of attention heads to prune and do not directly aim at performance enhancement. To overcome this limitation, we propose AUBER, an automated BERT regularization method that leverages reinforcement learning to automatically prune the proper attention heads from BERT. We also minimize the model complexity and the action search space by proposing a low-dimensional state representation and a dually-greedy approach to training. Experimental results show that AUBER outperforms existing pruning methods, achieving up to 9.58% better performance. In addition, an ablation study demonstrates the effectiveness of the design choices in AUBER.
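
The mechanism AUBER builds on, removing attention heads from BERT, can be sketched with the Hugging Face transformers API; the heads pruned below are arbitrary placeholders, whereas AUBER learns which heads to prune with reinforcement learning:

```python
# Structured attention-head pruning on a BERT model.
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# transformers exposes head pruning as {layer_index: [head_indices]}.
model.prune_heads({0: [0, 1], 5: [3]})

layer0 = model.encoder.layer[0].attention.self
print(layer0.num_attention_heads)  # 12 -> 10 after pruning layer 0
```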


Symmetry ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1147
Author(s):  
Ernest Jeczmionek ◽  
Piotr A. Kowalski

The rapid growth of performance in the field of neural networks has also increased their size. Pruning methods are receiving more and more attention as a way to eliminate non-impactful parameters and the overgrowth of neurons. In this article, Global Sensitivity Analysis (GSA) methods are applied to quantify the impact of input variables on the model’s output variables. GSA makes it possible to single out the least meaningful inputs and to build reduction algorithms on them. Using several popular datasets, the study shows how different levels of pruning correlate with network accuracy, and up to what level of reduction accuracy is only negligibly affected. In doing so, the pre- and post-reduction sizes of the neural networks are compared. This paper shows how the Sobol and FAST methods, with common norms, can largely decrease the size of a network while keeping accuracy relatively high. On the basis of the obtained results, it is possible to formulate a thesis about the asymmetry between the elements removed from the network topology and the quality of the neural network.
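
A minimal sketch of ranking inputs by Sobol indices, assuming the SALib library; the stand-in model and the 0.01 total-order cut-off are illustrative assumptions, not the paper's settings:

```python
# Rank inputs by Sobol total-order sensitivity and flag pruning candidates.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["x0", "x1", "x2", "x3"],
    "bounds": [[-1.0, 1.0]] * 4,
}

def network(X):
    # Stand-in for a trained model: x3 barely influences the output.
    return 3 * X[:, 0] + 2 * X[:, 1] ** 2 + X[:, 2] + 0.01 * X[:, 3]

X = saltelli.sample(problem, 1024)
Si = sobol.analyze(problem, network(X))

# Inputs with a total-order index below the cut-off are pruning candidates.
prune = [n for n, st in zip(problem["names"], Si["ST"]) if st < 0.01]
print("prunable inputs:", prune)
```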

