AdaPrune: An Accelerator-aware Pruning Technique for Sustainable CNN Accelerators

Author(s):  
Jiajun Li,
Ahmed Louri
2007, Vol. 16 (06), pp. 1093–1113
Author(s):
N. S. Thomaidis,
V. S. Tzastoudis,
G. D. Dounias

This paper compares a number of neural network model selection approaches on the basis of pricing S&P 500 stock index options. For the choice of the optimal neural network architecture, we experiment with a "top-down" pruning technique as well as two "bottom-up" strategies that start with simple models and gradually complicate the architecture if the data so indicate. We adopt methods that base model selection on statistical hypothesis testing and information criteria, and we compare their performance to a simple heuristic pruning technique. In the first set of experiments, neural network models are employed to fit the entire options surface; in the second, they are used as parts of a hybrid intelligence scheme that combines a neural network model with theoretical option-pricing hints.
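A "bottom-up" strategy driven by information criteria, as described above, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the Gaussian-error likelihood are assumptions, and `fit_fn` stands in for whatever routine trains a network with a given number of hidden units and returns its training residuals and parameter count.

```python
import numpy as np

def information_criteria(residuals, n_params):
    """AIC and BIC for a fitted model, assuming Gaussian errors.

    `residuals` are (actual - predicted) values on the training set;
    the error variance is estimated by its maximum-likelihood value rss/n.
    """
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    log_like = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    aic = 2 * n_params - 2 * log_like
    bic = n_params * np.log(n) - 2 * log_like
    return aic, bic

def select_hidden_units(fit_fn, max_units):
    """Bottom-up selection: grow the hidden layer one unit at a time and
    stop as soon as BIC no longer improves (extra complexity is not
    justified by the improvement in fit)."""
    best, best_bic = None, np.inf
    for h in range(1, max_units + 1):
        residuals, n_params = fit_fn(h)  # train a network with h hidden units
        _, bic = information_criteria(residuals, n_params)
        if bic < best_bic:
            best, best_bic = h, bic
        else:
            break
    return best
```

The same `information_criteria` helper could equally rank the candidate architectures produced by the top-down pruning runs.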


Author(s):  
Hayder Naser Khraibet Al-Behadili,
Ku Ruhana Ku-Mahamud,
Rafid Sagban

2008, Vol. 18 (05), pp. 389–403
Author(s):
Thomas D. Jorgensen,
Barry P. Haynes,
Charlotte C. F. Norlund

This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, magnitude-based pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network from which they originate. This means that this pruning method can also help to unleash hidden potential in a network: the learning time decreases substantially for a pruned network, owing to its reduced dimensionality.
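The benchmark the paper compares against, magnitude-based pruning, removes the connections with the smallest absolute weights. A minimal sketch, assuming the weights are held in a NumPy array (the function name and mask-based interface are illustrative, not from the paper):

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Return a boolean mask keeping all but the `fraction` of connections
    with the smallest absolute weight (magnitude-based pruning).

    `weights` is an array of connection weights of any shape; applying the
    mask (weights * mask) zeroes the pruned connections.
    """
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)  # number of connections to prune
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    # k-th smallest absolute weight becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold
```

The complexity-based method proposed in the paper replaces this weight-magnitude ranking with an information-theoretic ranking of connections; the surrounding mask machinery would stay the same.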


2020, Vol. 19, pp. 100323
Author(s):
Nalakkhana Khitmoh,
Sucha Smanchat,
Sissades Tongsima
