Parallel Jobs
Recently Published Documents


TOTAL DOCUMENTS: 161 (last five years: 29)

H-INDEX: 16 (last five years: 3)

2021 · Vol 11 (1) · pp. 2-26
Author(s): Anne Benoit, Valentin Le Fèvre, Padma Raghavan, Yves Robert, Hongyang Sun

2021 · pp. 1-1
Author(s): Anne Benoit, Valentin Le Fèvre, Lucas Perotin, Padma Raghavan, Yves Robert, ...

Author(s): Zhiyao Hu, Dongsheng Li, Dongxiang Zhang, Yiming Zhang, Baoyun Peng

Author(s): Benjamin Berg, Mor Harchol-Balter

Large data centers composed of many servers provide the opportunity to improve performance by parallelizing jobs. However, effectively exploiting parallelism is non-trivial: for each arriving job, one must decide the number of servers on which to run it. The goal is to determine the allocation of servers to jobs that minimizes the mean response time across jobs, that is, the average time from when a job arrives until it completes. Parallelizing a job across multiple servers reduces that job's response time, but jobs receive diminishing returns from additional servers, so allocating too many servers to a single job leads to low system efficiency. The authors consider the case where the remaining sizes of jobs are unknown to the system at every moment in time. They prove that if all jobs follow the same speedup function, the optimal policy is EQUI, which divides the servers equally among the jobs in the system. When jobs follow different speedup functions, EQUI is no longer optimal, and the authors provide an alternative policy, GREEDY*, which performs within 1% of optimal in simulation.
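
To make the EQUI policy concrete, below is a minimal event-driven simulation sketch in Python. The power-law speedup function s(k) = k**p with p < 1, the parameter names, and the example workload are illustrative assumptions of ours; the paper treats general speedup functions.

def speedup(k, p=0.5):
    # Assumed power-law speedup: a job on k servers runs k**p times
    # faster than on one server; p < 1 gives the diminishing returns
    # described in the abstract. The paper allows general speedup functions.
    return k ** p

def equi_mean_response_time(arrivals, sizes, n_servers, p=0.5):
    # Simulate EQUI: at every instant the n_servers are divided equally
    # among the jobs currently in the system, so each of the n active
    # jobs is served at rate speedup(n_servers / n).
    jobs = sorted(zip(arrivals, sizes))   # (arrival time, total work)
    active = {}                           # job id -> remaining work
    finished = {}                         # job id -> completion time
    t, i = 0.0, 0                         # current time, next arrival index
    while i < len(jobs) or active:
        if not active:                    # system idle: jump to next arrival
            t = jobs[i][0]
            active[i] = jobs[i][1]
            i += 1
            continue
        rate = speedup(n_servers / len(active), p)
        soonest = min(active, key=active.get)          # least work left
        dt_finish = active[soonest] / rate
        dt_arrive = jobs[i][0] - t if i < len(jobs) else float("inf")
        dt = min(dt_finish, dt_arrive)
        for j in active:                  # all active jobs progress equally
            active[j] -= rate * dt
        t += dt
        if dt_arrive <= dt_finish and i < len(jobs):
            active[i] = jobs[i][1]        # an arrival changes the split
            i += 1
        else:
            del active[soonest]           # a job completes
            finished[soonest] = t
    # Mean response time: average of (completion time - arrival time).
    return sum(finished[j] - jobs[j][0] for j in finished) / len(finished)

# Illustrative workload: three jobs sharing 100 servers.
print(equi_mean_response_time(arrivals=[0.0, 1.0, 2.0],
                              sizes=[50.0, 30.0, 20.0],
                              n_servers=100))

A sketch of GREEDY* is omitted here, since it would require the per-job speedup functions, which the abstract does not specify.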

