DeepQGHO: Quantized Greedy Hyperparameter Optimization in Deep Neural Networks for on-the-fly Learning

IEEE Access, 2022, pp. 1-1
Author(s):  
Anjir Ahmed Chowdhury ◽  
Md Abir Hossen ◽  
Md Ali Azam ◽  
Md Hafizur Rahman

Abstract Hyperparameter optimization, or tuning, plays a significant role in the performance and reliability of deep learning (DL). Many hyperparameter optimization algorithms have been developed to obtain better validation accuracy in DL training. Most state-of-the-art hyperparameter optimization algorithms are computationally expensive because they focus on validation accuracy, which makes them unsuitable for online or on-the-fly training applications that require computational efficiency. In this paper, we develop a novel greedy-approach-based hyperparameter optimization (GHO) algorithm for faster training applications, e.g., on-the-fly training. We perform an empirical study that measures the computation time and energy consumption of GHO and compares it with two state-of-the-art hyperparameter optimization algorithms. We also deploy the GHO algorithm on an edge device to validate its performance, and we apply post-training quantization to the GHO-trained model to reduce inference time and latency.
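The abstract does not include pseudocode for GHO; the sketch below shows one plausible reading of a greedy, coordinate-wise hyperparameter search, in which each hyperparameter is fixed at its best value before the next one is tuned. The `train_and_validate` function and the search space are illustrative stand-ins, not taken from the paper.

```python
import random

def train_and_validate(config: dict) -> float:
    """Hypothetical stand-in for a short training run that returns
    validation accuracy; replace with real training code."""
    return random.random()  # placeholder score, not from the paper

# Illustrative search space; the paper's actual hyperparameters may differ.
search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64],
    "dropout": [0.0, 0.25, 0.5],
}

def greedy_hpo(search_space: dict) -> dict:
    # Start from the first candidate value of every hyperparameter.
    config = {name: values[0] for name, values in search_space.items()}
    for name, values in search_space.items():
        best_value, best_score = config[name], float("-inf")
        for value in values:
            score = train_and_validate({**config, name: value})
            if score > best_score:
                best_value, best_score = value, score
        # Greedily commit to the best value before tuning the next one.
        config[name] = best_value
    return config

best_config = greedy_hpo(search_space)
print(best_config)
```

A greedy search of this shape needs only the sum of the candidate counts (9 training runs here) rather than their product (27 for a full grid search), which is where the computational savings for on-the-fly training would come from.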
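The abstract also mentions post-training quantization for edge deployment but does not name a toolchain here. As one common option for edge devices, TensorFlow Lite's post-training quantization could look like the following; the small Keras model is a stand-in for the GHO-tuned network.

```python
import tensorflow as tf

# Small stand-in model; in practice this would be the GHO-tuned,
# already-trained network (assumption, not from the paper).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Post-training quantization with TensorFlow Lite (one common toolchain;
# the abstract does not specify which one the authors used).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

# Save the quantized model for deployment on an edge device.
with open("gho_model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```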


