Workload management: more than just job scheduling

Author(s):  
J.P. Jones ◽  
B. Nitzberg ◽  
B. Henderson

Author(s):  
Priyanka P ◽  
Deivanai K

Mainframe system processing includes a “Batch Cycle” that spans roughly the same interval each day. The core part of the cycle completes in the middle of that interval, and key client deliverables tied to the end times of certain jobs are tracked by service delivery. There are single- and multi-client batch streams, a QA stream that includes all clients, and a huge number of batch jobs executing per day. Despite sophisticated job-scheduling software and automated system workload management, operator intervention is still required. The goal of our proposed work is to execute the highest-priority jobs first: our method re-prioritizes the schedule so that high-priority jobs receive the available system resources. Furthermore, we characterize, analyze, and visualize the reasons for manual changes to the schedule. This work requires extensive data preprocessing and building machine learning models of the causal relationship between various system variables and the times of manual changes. 
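The re-prioritization step described above amounts to draining a priority queue of pending jobs. A minimal sketch in Python, where the job names and priority values are hypothetical and stand in for a real batch stream:

```python
import heapq

def run_by_priority(jobs):
    """Return jobs in execution order, highest priority first.

    jobs: list of (priority, name) pairs; a lower number means
    higher priority, matching heapq's min-heap ordering.
    """
    heap = list(jobs)
    heapq.heapify(heap)          # O(n) build of a min-heap keyed on priority
    order = []
    while heap:
        _, name = heapq.heappop(heap)  # next highest-priority job
        order.append(name)
    return order

# Hypothetical batch stream: a client deliverable outranks QA work.
pending = [(3, "qa-all-clients"), (1, "client-deliverable"), (2, "multi-client")]
print(run_by_priority(pending))
# → ['client-deliverable', 'multi-client', 'qa-all-clients']
```

In practice the priorities would come from the learned model rather than being fixed constants, but the queue discipline is the same.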


2016 ◽  
Vol 3 (1) ◽  
pp. 30 ◽  
Author(s):  
NASEERA SHAIK ◽  
JYOTHEESWAI P

2010 ◽  
Vol 30 (8) ◽  
pp. 2066-2069
Author(s):  
Yong-cai TAO ◽  
Lei SHI

1990 ◽  
Author(s):  
DEPARTMENT OF THE ARMY WASHINGTON DC

2020 ◽  
Vol 15 ◽  
Author(s):  
Weiwen Zhang ◽  
Long Wang ◽  
Theint Theint Aye ◽  
Juniarto Samsudin ◽  
Yongqing Zhu

Background: Genotype imputation as a service is developed to enable researchers to estimate genotypes on haplotyped data without performing whole-genome sequencing. However, genotype imputation is computation-intensive, and it remains a challenge to satisfy the high-performance requirements of genome-wide association studies (GWAS). Objective: In this paper, we propose a high-performance computing solution for genotype imputation on supercomputers to enhance its execution performance. Method: We design and implement a multi-level parallelization that includes job-level, process-level, and thread-level parallelization, enabled by job-scheduling management, the message passing interface (MPI), and OpenMP, respectively. It involves job distribution, chunk partition and execution, parallelized iteration for imputation, and data concatenation. Through this multi-level parallelization, we exploit the multi-machine/multi-core architecture to improve the performance of genotype imputation. Results: Experimental results show that our proposed method outperforms the Hadoop-based implementation of genotype imputation. Moreover, we conduct experiments on supercomputers to evaluate the performance of the proposed method. The evaluation shows that it significantly shortens execution time, thus improving the performance of genotype imputation. Conclusion: The proposed multi-level parallelization, when deployed as an imputation-as-a-service, will facilitate bioinformatics researchers in Singapore in conducting genotype imputation and enhancing association studies.
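The partition/parallel-iterate/concatenate pattern the abstract describes can be sketched at a single level. The following is a minimal illustration using Python threads in place of OpenMP; `impute_chunk` is a hypothetical stand-in for the real imputation kernel (here it just fills missing genotypes with 0):

```python
from concurrent.futures import ThreadPoolExecutor

def impute_chunk(chunk):
    # Stand-in kernel: the real service would run the imputation
    # algorithm here; we simply replace missing values (None) with 0.
    return [0 if g is None else g for g in chunk]

def partition(data, n_chunks):
    """Contiguous chunk partition with ceiling-sized chunks."""
    size = -(-len(data) // n_chunks)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def parallel_impute(genotypes, n_threads=4):
    chunks = partition(genotypes, n_threads)            # chunk partition
    with ThreadPoolExecutor(n_threads) as pool:
        results = list(pool.map(impute_chunk, chunks))  # parallelized iteration
    return [g for c in results for g in c]              # data concatenation

print(parallel_impute([1, None, 0, 2, None, 1], n_threads=2))
# → [1, 0, 0, 2, 0, 1]
```

In the paper's design the same pattern repeats at the job level (scheduler) and process level (MPI ranks), with threads handling the innermost loop.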


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4089
Author(s):  
Kaiqiang Zhang ◽  
Dongyang Ou ◽  
Congfeng Jiang ◽  
Yeliang Qiu ◽  
Longchuan Yan

In terms of power and energy consumption, DRAM plays a key role in a modern server system, alongside the processors. Although power-aware scheduling is based on the proportion of energy consumed by DRAM relative to other components, when running memory-intensive applications the energy consumption of the whole server system is significantly affected by the non-energy-proportionality of DRAM. Furthermore, modern servers usually adopt a NUMA architecture instead of the original SMP architecture to increase memory bandwidth, so it is of great significance to study the energy efficiency of these two memory architectures. Therefore, to explore the power-consumption characteristics of servers under memory-intensive workloads, this paper evaluates the power consumption and performance of memory-intensive applications on different generations of real rack servers. Through analysis, we find that: (1) workload intensity and the number of concurrently executing threads affect server power consumption, but a fully utilized memory system does not necessarily yield good energy-efficiency indicators; (2) even if the memory system is not fully utilized, the memory capacity per processor core has a significant impact on application performance and server power consumption; (3) when running memory-intensive applications, memory utilization is not always a good indicator of server power consumption; (4) reasonable use of the NUMA architecture improves memory energy efficiency significantly. The experimental results show that reasonable use of the NUMA architecture can improve memory energy efficiency by 16% compared with the SMP architecture, while unreasonable use reduces it by 13%. The findings we present in this paper provide useful insights and guidance for system designers and data-center operators in energy-efficiency-aware job scheduling and energy conservation.
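Memory energy efficiency here can be read as useful work (e.g., bandwidth) per watt. A small sketch of the metric and the percent-change comparison; the bandwidth and power numbers below are hypothetical, chosen only so the resulting changes match the 16% and −13% figures reported above:

```python
def memory_efficiency(bandwidth_gbps, power_watts):
    """Memory energy efficiency as delivered bandwidth per watt."""
    return bandwidth_gbps / power_watts

def percent_change(new, base):
    """Signed percent change of `new` relative to `base`."""
    return 100.0 * (new - base) / base

# Hypothetical measurements at equal DRAM power draw:
smp       = memory_efficiency(50.0, 10.0)  # SMP baseline: 5.0 GB/s per watt
numa_good = memory_efficiency(58.0, 10.0)  # NUMA, mostly local accesses
numa_bad  = memory_efficiency(43.5, 10.0)  # NUMA, mostly remote accesses

print(round(percent_change(numa_good, smp)))  # → 16
print(round(percent_change(numa_bad, smp)))   # → -13
```

The point of the comparison is that the same hardware can land on either side of the SMP baseline depending on how memory accesses are placed across NUMA nodes.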

