Hyper-Parameter Optimization for Deep Learning by Surrogate-based Model with Weighted Distance Exploration

Author(s):  
Zhenhua Li ◽  
Christine A. Shoemaker

A framework to perform video analytics is proposed utilizing a dynamically tuned convolutional network. Videos are fetched from cloud storage, preprocessed, and a model for supporting classification is trained on these video streams using cloud-based infrastructure. A key focus in this paper is on tuning the hyper-parameters associated with the deep learning algorithm used to construct the model. We further propose an automatic video object classification pipeline to validate the framework. The mathematical model used to support hyper-parameter tuning improves the performance of the proposed pipeline, and the effects of different parameters on the system's performance are analyzed. Consequently, the parameters that contribute toward the best performance are selected for the video object classification pipeline. Our experiment-based validation reveals an accuracy and precision of 97% and 96%, respectively. The framework proved to be scalable, robust, and customizable for a wide range of applications.
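The abstract describes tuning hyper-parameters of the pipeline and then selecting the combination that performs best. As a minimal sketch of that selection step, the following Python random search evaluates candidate combinations and keeps the best one; the search space and the train_and_score stub are illustrative assumptions, not the authors' actual mathematical model.

```python
# Minimal sketch of a hyper-parameter selection loop for the video
# classification pipeline. The search space and the scoring stub are
# assumptions for illustration only.
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64],
    "num_filters": [32, 64, 128],
    "dropout": [0.2, 0.4, 0.5],
}

def train_and_score(params):
    """Placeholder: train the video-classification CNN with `params`
    and return validation accuracy. Stubbed here with a random score."""
    return random.random()

def random_search(n_trials=20):
    """Evaluate n_trials random hyper-parameter combinations, keep the best."""
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

if __name__ == "__main__":
    params, score = random_search()
    print(f"best hyper-parameters: {params} (val accuracy {score:.3f})")
```

In practice the placeholder objective would be replaced by a full training-and-validation run of the CNN on the preprocessed video streams.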


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 350
Author(s):  
Jaewon Son ◽  
Yonghyuk Yoo ◽  
Khu-rai Kim ◽  
Youngjae Kim ◽  
Kwonyong Lee ◽  
...  

This paper proposes Hermes, a container-based preemptive GPU scheduling framework for accelerating hyper-parameter optimization in deep learning (DL) clusters. Hermes accelerates hyper-parameter optimization by time-sharing between DL jobs and prioritizing jobs with more promising hyper-parameter combinations. Hermes's scheduling policy is grounded in the observation that good hyper-parameter combinations converge quickly in the early phases of training. By giving higher priority to fast-converging containers, Hermes's GPU preemption mechanism can accelerate training. This enables users to find optimal hyper-parameters faster without losing the progress of a container. We have implemented Hermes over Kubernetes and compared its performance against existing scheduling frameworks. Experiments show that Hermes reduces the time for hyper-parameter optimization by up to 4.04 times compared with previously proposed scheduling policies such as FIFO, round-robin (RR), and SLAQ, with minimal time-sharing overhead.
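A minimal sketch of the scheduling idea described above: at each time slice, the GPU goes to the job whose training loss has been dropping fastest, so promising hyper-parameter combinations finish sooner. The DLJob class, the convergence metric, and pick_next are assumptions for illustration; the paper's actual container-level preemption over Kubernetes is more involved.

```python
# Illustrative sketch of Hermes-style prioritization: time-share the GPU and
# favor jobs whose training loss is improving fastest. Field names and the
# priority rule are assumptions, not the paper's implementation.
from dataclasses import dataclass, field

@dataclass
class DLJob:
    name: str
    loss_history: list = field(default_factory=list)

    def convergence_rate(self, window=3):
        """Average per-step loss drop over the last `window` steps;
        larger means the hyper-parameter combination looks more promising."""
        h = self.loss_history[-(window + 1):]
        if len(h) < 2:
            return float("inf")  # new jobs get a chance to run first
        return (h[0] - h[-1]) / (len(h) - 1)

def pick_next(jobs):
    """Preemptive policy: at each time slice, run the fastest-converging job."""
    return max(jobs, key=lambda j: j.convergence_rate())

# Usage: job `a` converges faster, so it wins the next GPU slice.
a = DLJob("a", [2.0, 1.2, 0.7, 0.5])
b = DLJob("b", [2.0, 1.9, 1.85, 1.83])
print(pick_next([a, b]).name)  # -> "a"
```

Ranking by recent loss improvement rather than absolute loss is what lets early-phase convergence behavior drive preemption without discarding a paused container's progress.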


2019 ◽  
Vol 49 (1) ◽  
pp. 253-264 ◽  
Author(s):  
Muhammad Usman Yaseen ◽  
Ashiq Anjum ◽  
Omer Rana ◽  
Nikolaos Antonopoulos

2021 ◽  
pp. 57-67
Author(s):  
Pranjal Sahu ◽  
Hailiang Huang ◽  
Wei Zhao ◽  
Hong Qin
