Recurrent neural network pruning using dynamical systems and iterative fine-tuning

2021 ◽  
Author(s):  
Christos Chatzikonstantinou ◽  
Dimitrios Konstantinidis ◽  
Kosmas Dimitropoulos ◽  
Petros Daras

1998 ◽
Vol 9 (4) ◽  
pp. 531-547 ◽  
Author(s):  
Coryn A L Bailer-Jones ◽  
David J C MacKay ◽  
Philip J Withers

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5901
Author(s):  
Tao Wu ◽  
Jiao Shi ◽  
Deyun Zhou ◽  
Xiaolong Zheng ◽  
Na Li

Deep neural networks have achieved remarkable performance and found wide application, but their complex structure and high computation and storage requirements limit their deployment on mobile or embedded devices such as sensor platforms. Neural network pruning is an efficient way to derive a lightweight model from a well-trained complex deep neural network. In this paper, we propose an evolutionary multi-objective one-shot filter pruning method for designing a lightweight convolutional neural network. First, unlike well-known iterative pruning methods, our one-shot pruning framework performs filter pruning and model fine-tuning only once. Moreover, we formulate a constrained multi-objective filter pruning problem whose two objectives represent the filter pruning ratio and the accuracy of the pruned convolutional neural network, respectively. A non-dominated sorting-based evolutionary multi-objective algorithm is used to solve this problem, yielding a set of Pareto solutions consisting of pruned models with different trade-offs. Finally, models are uniformly selected from the Pareto set and fine-tuned as the output of our method. The effectiveness of our method is demonstrated in experimental studies on four designed models, LeNet, and AlexNet, where it prunes over 85%, 82%, 75%, 65%, 91%, and 68% of the filters, respectively, with little accuracy loss.
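The core selection step of the abstract above — keeping only the non-dominated trade-off models between pruning ratio and accuracy — can be sketched with a plain Pareto-front filter. The candidate scores below are illustrative made-up numbers, not the paper's data, and the function names are our own; a minimal sketch in Python:

```python
# Hedged sketch: non-dominated sorting over two objectives to be maximized,
# (filter pruning ratio, accuracy of the pruned model). Candidate values are
# hypothetical, not results from the paper.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of (pruning_ratio, accuracy) pairs."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

candidates = [
    (0.85, 0.90),  # high pruning, good accuracy
    (0.60, 0.95),  # lower pruning, best accuracy
    (0.85, 0.88),  # dominated by (0.85, 0.90)
    (0.90, 0.80),  # extreme pruning, lower accuracy
]
front = pareto_front(candidates)
# The dominated model (0.85, 0.88) is filtered out; the remaining three
# represent different pruning/accuracy trade-offs, from which models would
# be uniformly selected for fine-tuning.
```

In the paper's pipeline this filter would be embedded inside an evolutionary loop (e.g. NSGA-II-style selection); the sketch shows only the dominance test that defines the Pareto set.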


Robotics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 29
Author(s):  
Christian Dengler ◽  
Boris Lohmann

In this contribution, we develop a feedback controller in the form of a parametric function for a mobile inverted pendulum. The controller both stabilizes the system and drives it to target positions with target orientations. Designing the controller from a cost function alone is difficult for this task, so we instead train it using imitation learning on optimized trajectories. In contrast to popular approaches such as policy gradient methods, this approach allows us to shape the behavior of the system by including equality constraints. When transferring the parametric controller from simulation to the real mobile inverted pendulum, the control performance is degraded by the reality gap. A robust control design can reduce this degradation; however, for the framework of imitation learning on optimized trajectories, no methods that explicitly consider robustness yet exist to the authors' knowledge. We tackle this research gap by presenting a method to design a robust controller in the form of a recurrent neural network, improving the transferability of the trained controller to the real system. As a last step, we make the behavior of the parametric controller adjustable to allow for fine-tuning of the behavior of the real system. We design the controller for our system and show in the application that the recurrent neural network outperforms a static neural network trained without robustness considerations.
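The central training idea in the abstract above — fitting a recurrent controller by supervised regression so that it reproduces the control inputs of precomputed optimized trajectories — can be sketched in a few lines. The network size, the random stand-in "expert" trajectory, and the least-squares refit of the output layer are illustrative assumptions, not the authors' actual architecture or training procedure:

```python
import numpy as np

# Hedged sketch of imitation learning for an Elman-style recurrent controller:
# the network is regressed onto expert controls from an optimized trajectory.
# All dimensions and data here are hypothetical placeholders.

rng = np.random.default_rng(0)
state_dim, hidden_dim, control_dim, T = 4, 8, 1, 20

# One "optimized trajectory": observed states and the expert controls to imitate.
states = rng.normal(size=(T, state_dim))
expert_u = rng.normal(size=(T, control_dim))

# Recurrent controller parameters (small random initialization).
W_in = rng.normal(scale=0.1, size=(hidden_dim, state_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_out = rng.normal(scale=0.1, size=(control_dim, hidden_dim))

def rollout(W_out):
    """Run the recurrent controller along the trajectory; return the hidden
    states and the mean squared imitation loss against the expert controls."""
    h = np.zeros(hidden_dim)
    H, loss = [], 0.0
    for t in range(T):
        h = np.tanh(W_in @ states[t] + W_h @ h)  # recurrent state update
        H.append(h)
        loss += np.sum((W_out @ h - expert_u[t]) ** 2)
    return np.array(H), loss / T

H, loss_before = rollout(W_out)

# With the hidden states fixed (they do not depend on W_out), the output
# weights can be refit exactly by least squares -- one supervised
# imitation-learning step that cannot increase the loss.
W_out_new = np.linalg.lstsq(H, expert_u, rcond=None)[0].T
_, loss_after = rollout(W_out_new)
```

In practice the full network (including `W_in` and `W_h`) would be trained by backpropagation through time over many optimized trajectories, and robustness would be addressed by perturbing the simulated dynamics during training; the sketch isolates only the supervised-regression core of the approach.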


2019 ◽  
Vol 29 (12) ◽  
pp. 123115 ◽  
Author(s):  
Aleksei Seleznev ◽  
Dmitry Mukhin ◽  
Andrey Gavrilov ◽  
Evgeny Loskutov ◽  
Alexander Feigin
