KubeHICE: Performance-aware Container Orchestration on Heterogeneous-ISA Architectures in Cloud-Edge Platforms

Author(s): Saqing Yang, Yi Ren, Jianfeng Zhang, Jianbo Guan, Bao Li
Author(s): Prashanth Thinakaran, Jashwant Raj, Bikash Sharma, Mahmut T. Kandemir, Chita R. Das

2019, Vol 214, pp. 07019
Author(s): Mayank Sharma, Maarten Litmaath, Eraldo Silva Junior, Renato Santana

This article describes a new framework, called SIMPLE, for setting up and maintaining classic WLCG sites with minimal operational effort and minimal insight needed into the WLCG middleware. The framework provides a single common interface to install and configure any of its supported grid services, such as Compute Elements, Batch Systems, Worker Nodes, and miscellaneous middleware packages. It leverages modern container orchestration tools such as Kubernetes and Docker Swarm, and configuration management tools such as Puppet and Ansible, to automate deployment of the WLCG services on behalf of a site admin. The framework is modular and extensible by design, so it is easy to add support for more grid services as well as infrastructure automation tools to accommodate diverse scenarios at different sites. We provide insight into the design of the framework and our efforts towards the development, release, and deployment of its first implementation, featuring a CREAM CE, a TORQUE Batch System, and TORQUE-based Worker Nodes.
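As a rough illustration of the single-interface automation the abstract describes, the sketch below is purely hypothetical: the service names, playbook paths, and the deploy_service helper are assumptions for illustration and do not reflect SIMPLE's actual configuration schema or entry points. It only shows how a common interface could dispatch per-service deployment to a configuration management tool such as Ansible.

# Hypothetical sketch only: the real SIMPLE framework's configuration schema and
# entry points are not given in the abstract, so the mapping and helper below
# are illustrative assumptions, not SIMPLE's API.
import subprocess

# Illustrative mapping from a grid service to a configuration-management action;
# SIMPLE itself delegates to tools such as Puppet or Ansible.
SERVICE_PLAYBOOKS = {
    "compute_element": "playbooks/cream_ce.yml",
    "batch_system": "playbooks/torque_server.yml",
    "worker_node": "playbooks/torque_worker.yml",
}

def deploy_service(service: str, inventory: str = "hosts.ini") -> None:
    """Deploy one grid service by running Ansible on the matching playbook."""
    playbook = SERVICE_PLAYBOOKS[service]
    subprocess.run(["ansible-playbook", "-i", inventory, playbook], check=True)

if __name__ == "__main__":
    # A site admin declares the desired services once; the framework drives
    # the underlying automation tools on their behalf.
    for svc in ("compute_element", "batch_system", "worker_node"):
        deploy_service(svc)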


Sensors, 2020, Vol 20 (16), pp. 4621
Author(s): Thanh-Tung Nguyen, Yu-Jin Yeom, Taehong Kim, Dae-Heon Park, Sehan Kim

Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as the Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler, and Cluster Autoscaler. Among them, HPA helps provide seamless service by dynamically scaling the number of resource units, called pods, up and down without restarting the whole system. Kubernetes monitors default Resource Metrics, including the CPU and memory usage of host machines and their pods. Custom Metrics, on the other hand, are provided by external software such as Prometheus and can be configured to monitor a wide range of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge about its operational behavior. We also discuss the essential differences between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA’s performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes.
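To make the Resource Metrics (KRM) path concrete, the following is a minimal sketch using the official Kubernetes Python client; the Deployment name "web", the HPA name "web-hpa", the namespace, and the 50% CPU target are illustrative assumptions, not values from the paper. Scaling on Prometheus Custom Metrics (PCM) would instead use the autoscaling/v2 API with pods or external metric sources.

# Minimal sketch, assuming a Deployment named "web" already exists in the
# "default" namespace and the official "kubernetes" Python package is installed.
# It creates an autoscaling/v1 HPA driven by average CPU utilization, i.e. the
# Kubernetes Resource Metrics path discussed above.
from kubernetes import client, config

def create_cpu_hpa(namespace: str = "default") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    hpa = client.V1HorizontalPodAutoscaler(
        api_version="autoscaling/v1",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"
            ),
            min_replicas=2,
            max_replicas=10,
            # HPA adds or removes pods to keep average CPU near 50% of requests.
            target_cpu_utilization_percentage=50,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa
    )

if __name__ == "__main__":
    create_cpu_hpa()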

