Performance analysis of container-based networking solutions for high-performance computing cloud

Author(s):  
Sang Boem Lim ◽  
Joon Woo ◽  
Guohua Li

Recently, cloud service providers have been gradually shifting from virtual machine-based cloud infrastructures to container-based, cloud-native infrastructures that address performance and workload-management issues. Several data-network performance issues have arisen for virtual instances, and various networking solutions have been newly developed or adopted. In this paper, we propose a solution suitable for a high-performance computing (HPC) cloud through a comparative performance analysis of container-based networking solutions. We constructed a supercomputer-based test-bed cluster and evaluated serviceability by executing HPC jobs.
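
The abstract does not reproduce the benchmark code, but as a hedged sketch of the kind of measurement such a comparison involves, the snippet below times TCP round trips between two containers attached to a given networking solution (for example, host networking versus an overlay). The port, message size, and addresses are placeholders, not values from the paper.

```python
# Minimal TCP round-trip latency probe: a sketch of the kind of measurement one
# could run between two containers attached to different networking solutions.
# Start "server" inside one container and "client <server-address>" in the other.
import socket
import sys
import time

PORT = 5201          # placeholder port
MSG = b"x" * 64      # small message to emphasise latency over bandwidth
ROUNDS = 1000

def server() -> None:
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for _ in range(ROUNDS):
                data = conn.recv(len(MSG))
                if not data:
                    break
                conn.sendall(data)   # echo the message back

def client(host: str) -> None:
    with socket.create_connection((host, PORT)) as sock:
        start = time.perf_counter()
        for _ in range(ROUNDS):
            sock.sendall(MSG)
            sock.recv(len(MSG))
        elapsed = time.perf_counter() - start
        print(f"avg round-trip: {elapsed / ROUNDS * 1e6:.1f} us")

if __name__ == "__main__":
    if sys.argv[1:2] == ["server"]:
        server()
    else:
        client(sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1")
```

Running the same probe under each networking solution, and alongside an HPC workload as the paper does, gives a simple per-solution latency figure to compare.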

2019 ◽  
Vol 8 (2S8) ◽  
pp. 1532-1535

The quantity of cloud management software related to private infrastructure-as-a-service clouds is increasing day by day. The features of this cloud management software vary considerably, which makes it difficult for cloud customers to choose software that matches their business requirements. With the growing number of cloud service providers and the migration of grids to the cloud paradigm, the ability to use these new resources is essential. In addition, a large class of High Performance Computing (HPC) applications can run on these resources without (or with minor) modifications. In this work we present the architecture of an HPC middleware that can use resources originating from an environment composed of multiple Clouds as well as classical HPC resources. Using the DIET middleware, we can deploy a large-scale, distributed HPC platform that spans a large pool of resources gathered from different providers. Finally, we validate the architecture concept through the cosmological simulation code RAMSES.


Author(s):  
Adrian Jackson ◽  
Michèle Weiland

This chapter describes experiences of using Cloud infrastructures for scientific computing, both serial and parallel. Amazon's High Performance Computing (HPC) Cloud resources were compared to traditional HPC resources to quantify performance and to assess the complexity and cost of using the Cloud. Furthermore, a shared Cloud infrastructure is compared to standard desktop resources for scientific simulations. Whilst this is only a small-scale evaluation of these Cloud offerings, it does allow some conclusions to be drawn: in particular, the Cloud cannot currently match the parallel performance of dedicated HPC machines for large-scale parallel programs, but it can match the serial performance of standard computing resources for serial and small-scale parallel programs. The shared Cloud infrastructure also cannot match dedicated computing resources on low-level benchmarks, although for an actual scientific code performance is comparable.
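
The chapter's low-level benchmarks are not reproduced here; as a hedged illustration of what such a comparison might look like, the following mpi4py ping-pong measures point-to-point message latency and could be run unchanged on both a Cloud cluster and a dedicated HPC machine. mpi4py and NumPy are assumed to be available; they are not part of the original text.

```python
# Point-to-point MPI ping-pong latency sketch (not the chapter's benchmark suite).
# Rank 0 sends a buffer to rank 1, which echoes it back; the average one-way
# time per message is reported for a few message sizes.
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def pingpong(nbytes: int, iterations: int = 100) -> float:
    """Return the average one-way time in seconds for messages of nbytes."""
    buf = np.zeros(nbytes, dtype=np.uint8)
    comm.Barrier()
    start = time.perf_counter()
    for _ in range(iterations):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = time.perf_counter() - start
    return elapsed / (2 * iterations)

if __name__ == "__main__":
    if comm.Get_size() < 2:
        raise SystemExit("run with at least 2 MPI ranks, e.g. mpirun -n 2 ...")
    for size in (8, 1024, 1024 * 1024):
        t = pingpong(size)
        if rank == 0:
            print(f"{size:>8} bytes: {t * 1e6:.2f} us one-way")
```

Launched with, for example, `mpirun -n 2 python pingpong.py` on each platform, the per-message timings give a direct interconnect comparison of the kind the chapter reports.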


Green computing is a contemporary research topic that addresses climate and energy challenges. In this chapter, the authors envision the duality of green computing with technological trends in other fields of computing, such as High Performance Computing (HPC) and cloud computing, on the one hand, and with the economy and business on the other. For instance, providing electricity for large-scale cloud infrastructures and reaching exascale computing requires huge amounts of energy; green computing is therefore a challenge for the future of cloud computing and HPC. Conversely, clouds and HPC provide solutions for green computing and climate change. The authors discuss this proposition by looking at the technology in detail.


2019 ◽  
Vol 214 ◽  
pp. 03024
Author(s):  
Vladimir Brik ◽  
David Schultz ◽  
Gonzalo Merino

Here we report IceCube’s first experiences of running GPU simulations on the Titan supercomputer. This undertaking was non-trivial because Titan is designed for High Performance Computing (HPC) workloads, whereas IceCube’s workloads fall under the High Throughput Computing (HTC) category. In particular: (i) Titan’s design, policies, and tools are geared heavily toward large MPI applications, while IceCube’s workloads consist of large numbers of relatively small independent jobs, (ii) Titan compute nodes run Cray Linux, which is not directly compatible with IceCube software, and (iii) Titan compute nodes cannot access outside networks, making it impossible to access IceCube’s CVMFS repositories and workload management systems. This report examines our experience of packaging our application in Singularity containers and using HTCondor as the second-level scheduler on the Titan supercomputer.
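
The report does not include its actual submit files; the sketch below is a hedged illustration of the second-level scheduling pattern it describes, using HTCondor's Python bindings (version 9 or later is assumed) to queue many small, independent GPU jobs whose executable is a wrapper script around the Singularity image. All file names are placeholders, not artifacts from the IceCube setup.

```python
# Hedged sketch of the HTC pattern described above: many small, independent GPU
# jobs queued through HTCondor, each running inside a Singularity container via
# a wrapper script. "run_in_container.sh" and "icecube.sif" are placeholders;
# on a real site the wrapper might do something like:
#   singularity exec --nv icecube.sif python simulate.py --seed "$1"
import htcondor  # HTCondor Python bindings (assumed >= 9.x for Schedd.submit)

submit = htcondor.Submit({
    "universe": "vanilla",
    "executable": "run_in_container.sh",   # placeholder wrapper around singularity exec
    "arguments": "$(Process)",             # each job gets a distinct task id / seed
    "request_gpus": "1",
    "request_cpus": "1",
    "output": "logs/job.$(Process).out",
    "error": "logs/job.$(Process).err",
    "log": "logs/cluster.log",
})

schedd = htcondor.Schedd()                 # local scheduler daemon
result = schedd.submit(submit, count=1000) # queue 1000 small independent jobs
print("submitted cluster", result.cluster())
```

In a setup like Titan's, where compute nodes cannot reach outside networks, the container image and input data would have to be staged to the shared filesystem in advance rather than pulled from CVMFS at run time.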

