Applications of on-demand virtual clusters to high performance computing

2015
Vol 7 (3)
pp. 511-516
Author(s):
I. G. Gankevich
S. G. Balyan
S. A. Abrahamyan
V. V. Korkhov

Author(s):
Mohammad Samadi Gharajeh

Grid systems and cloud servers are two kinds of distributed networks that deliver computing resources (e.g., file storage) to users' services via a large and often global network of computers. Virtualization technology can enhance the efficiency of these networks by dedicating the available resources to multiple execution environments. This chapter describes applications of virtualization technology in grid systems and cloud servers, and presents different aspects of virtualized networks from systematic and pedagogical perspectives. Virtual machine abstraction virtualizes high-performance computing environments to increase service quality. In addition, the grid virtualization engine and virtual clusters are used in grid systems to carry out users' services efficiently in virtualized environments. The chapter also explains various virtualization technologies in cloud servers. The evaluation results analyze the performance of high-performance computing and virtualized grid systems in terms of bandwidth, latency, number of nodes, and throughput.
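As an illustration of the idea of dedicating a host's resources to multiple execution environments, the following is a minimal toy sketch (not taken from the chapter; host sizes, VM names, and the allocation policy are assumptions for illustration only):

```python
# Toy model (not from the chapter): carving a physical host's resources into
# several virtual execution environments, as a virtualization layer would.
from dataclasses import dataclass, field

@dataclass
class Host:
    cores: int
    memory_gb: int
    vms: list = field(default_factory=list)

    def allocate_vm(self, name: str, cores: int, memory_gb: int) -> bool:
        """Dedicate a slice of this host to a new execution environment."""
        used_cores = sum(vm["cores"] for vm in self.vms)
        used_mem = sum(vm["memory_gb"] for vm in self.vms)
        if used_cores + cores > self.cores or used_mem + memory_gb > self.memory_gb:
            return False  # not enough free capacity left on this host
        self.vms.append({"name": name, "cores": cores, "memory_gb": memory_gb})
        return True

# Example: split one 16-core node into two HPC execution environments.
node = Host(cores=16, memory_gb=64)
print(node.allocate_vm("compute-vm-1", cores=8, memory_gb=32))   # True
print(node.allocate_vm("compute-vm-2", cores=8, memory_gb=32))   # True
print(node.allocate_vm("compute-vm-3", cores=4, memory_gb=16))   # False, host is full
```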


2020
Vol 14

Constant changes in computer and communications technology have led to the need for on-demand network access to shared computing resources in order to reduce cost and time. This is known as cloud computing, which delivers computing services to users in a pay-as-you-go manner by combining several distributed and high-performance computing concepts. The cloud makes it possible to reach any information or resource from anywhere, eliminating setup and installation steps, so that the user and the hardware may reside in different places. This is beneficial for users or small companies that cannot afford to pay for hardware, storage, or resources the way large companies can. Many studies on cloud computing have been dedicated to the performance efficiency of task scheduling. Scheduling is a broad concept and one of the most important issues in this area; it generally concerns mapping tasks to appropriate resources efficiently and effectively using one or more strategies. This paper reviews and classifies the most recent scheduling algorithms in cloud computing and gives examples of each.
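As an example of the kind of strategy such surveys classify, below is a minimal sketch (not from the paper) of a greedy earliest-finish-time heuristic that maps tasks to the virtual machine that can complete them soonest; the task lengths, VM names, and speed ratings are made-up illustrative values:

```python
# Illustrative sketch (not from the paper): a greedy "earliest finish time"
# heuristic that maps cloud tasks to virtual machines. All numbers are invented.

def schedule(tasks, vm_speeds):
    """tasks: {task_id: length in million instructions}
    vm_speeds: {vm_id: MIPS rating}
    Returns a task->VM mapping and the predicted makespan in seconds."""
    ready_time = {vm: 0.0 for vm in vm_speeds}   # when each VM becomes free
    assignment = {}
    # Place shorter tasks first, each on the VM that would finish it earliest.
    for task, length in sorted(tasks.items(), key=lambda kv: kv[1]):
        best_vm = min(vm_speeds, key=lambda vm: ready_time[vm] + length / vm_speeds[vm])
        ready_time[best_vm] += length / vm_speeds[best_vm]
        assignment[task] = best_vm
    return assignment, max(ready_time.values())

tasks = {"t1": 4000, "t2": 1000, "t3": 2500, "t4": 6000}   # million instructions
vm_speeds = {"vm-small": 500, "vm-large": 2000}            # MIPS
mapping, makespan = schedule(tasks, vm_speeds)
print(mapping, f"makespan ~ {makespan:.1f} s")
```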


2020
Vol 245
pp. 09011
Author(s):
Michael Hildreth
Kenyi Paolo Hurtado Anampa
Cody Kankel
Scott Hampton
Paul Brenner
...  

The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a cloud-based data analysis platform deployed on top of Kubernetes clusters that was originally designed to enable analysis reusability and reproducibility. REANA is capable of orchestrating extremely complicated multi-step workflows, and it uses Kubernetes clusters both for scheduling and distributing container-based workloads across a cluster of available machines and for instantiating and monitoring the concrete workloads themselves. This work describes the challenges and development efforts involved in extending REANA, and the components that were developed to enable large-scale deployment on High Performance Computing (HPC) resources. Using the Virtual Clusters for Community Computation (VC3) infrastructure as a starting point, we extended REANA to work with a number of different workload managers, spanning both high-performance and high-throughput systems, while simultaneously removing REANA's dependence on Kubernetes support at the worker level.
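To illustrate the general idea of targeting several workload managers behind a single interface, here is a hypothetical sketch; it is not REANA's actual code or API, and the backend names, commands, and image/script names are assumptions for illustration:

```python
# Hypothetical illustration of submitting one containerized workflow step to
# different workload managers behind a common interface. NOT REANA's actual
# code or API; backend names and commands are assumptions.
from abc import ABC, abstractmethod

class WorkloadBackend(ABC):
    @abstractmethod
    def submit(self, image: str, command: str) -> list:
        """Build the submission command for one containerized workflow step."""

class KubernetesBackend(WorkloadBackend):
    def submit(self, image, command):
        # Launch the step as a one-off pod via kubectl (assumes a configured cluster).
        return ["kubectl", "run", "workflow-step", "--image", image,
                "--restart=Never", "--", "sh", "-c", command]

class SlurmBackend(WorkloadBackend):
    def submit(self, image, command):
        # Submit the step to Slurm, wrapping a Singularity/Apptainer container call.
        return ["sbatch", "--wrap",
                f"singularity exec docker://{image} sh -c '{command}'"]

def show_submission(backend: WorkloadBackend, image: str, command: str):
    # Print the command instead of executing it, so the sketch runs anywhere.
    print(" ".join(backend.submit(image, command)))

show_submission(KubernetesBackend(), "python:3.9", "python analysis.py")
show_submission(SlurmBackend(), "python:3.9", "python analysis.py")
```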

