Challenges and Opportunities in High Performance Cloud Computing

Author(s):  
Manoj Himmatrao Devare

Scientists, engineers, and researchers rely heavily on high-performance computing (HPC) services for executing energy, engineering, environmental science, weather, and life science simulations. A virtual machine (VM) or Docker-enabled HPC Cloud service provides the advantages of consolidation and support for multiple users in a public cloud environment. Adding a hypervisor on top of the bare-metal hardware, however, introduces challenges such as computational overhead due to virtualization, which is especially costly in an HPC environment. This chapter discusses the challenges, solutions, and opportunities related to input-output, VMM overheads, interconnect overheads, VM migration problems, and scalability problems in the HPC Cloud. It portrays the HPC Cloud as a highly complex distributed environment comprising heterogeneous processor architectures and interconnect techniques, along with shared-memory, distributed-memory, and hybrid architectures, and the classical problems of distributed computing such as resilience, scalability, check-pointing, and fault tolerance.
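To make the virtualization-overhead concern concrete, the following minimal sketch (assuming NumPy is available; the kernel and problem sizes are illustrative, not taken from the chapter) times a dense compute kernel. Running it once on bare metal and once inside a VM or container and comparing the wall times gives a first-order estimate of the overhead.

```python
# Minimal sketch: estimating virtualization overhead for a compute kernel.
# Run the same script on bare metal and inside a VM/container, then compare
# the reported wall times. Kernel and sizes are illustrative placeholders.
import time

import numpy as np


def compute_kernel(n: int = 2048, reps: int = 10) -> float:
    """Dense matrix multiply, a common stand-in for an HPC workload."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    for _ in range(reps):
        a @ b
    return time.perf_counter() - start


if __name__ == "__main__":
    elapsed = compute_kernel()
    print(f"wall time: {elapsed:.3f} s")
    # overhead (%) = (t_virtualized - t_bare_metal) / t_bare_metal * 100,
    # computed after collecting both runs.
```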


2021 ◽  
pp. 53-66
Author(s):  
O.V. Abramov ◽  
D.A. Nazarov

The article deals with the application of cloud computing to the problems of constructing and analyzing engineering-system acceptability regions (AR). It is argued that a cloud platform is the most appropriate architecture for consolidating high-performance computing resources, modeling software, and data warehouses in multi-user mode. The main components of a cloud environment for AR construction and analysis within the framework of the SaaS and DaaS cloud service models are discussed.
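As a rough illustration of AR construction (not the authors' implementation; the two-parameter RC model and the spec limits are hypothetical), the sketch below marks the acceptable subset of a sampled parameter grid. Each grid point is evaluated independently, which is what makes the task a natural fit for cloud-scale parallelism.

```python
# Minimal sketch of acceptability region (AR) construction by grid sampling:
# a parameter combination is "acceptable" if the system's performance
# function stays within specification. Model and limits are hypothetical.
import numpy as np


def build_acceptability_region(r_range, c_range, steps=200):
    rs = np.linspace(*r_range, steps)
    cs = np.linspace(*c_range, steps)
    R, C = np.meshgrid(rs, cs, indexing="ij")
    tau = R * C                                # performance function: RC time constant
    region = (tau >= 1e-4) & (tau <= 5e-4)     # hypothetical spec limits
    return rs, cs, region


rs, cs, region = build_acceptability_region((1e2, 1e4), (1e-8, 1e-6))
print(f"acceptable fraction of the grid: {region.mean():.2%}")
```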


Author(s):  
Herbert Cornelius

For decades, HPC has established itself as an essential tool for discoveries, innovations, and new insights in science, research and development, engineering, and business across a wide range of application areas in academia and industry. Today, High-Performance Computing is also well recognized to be of strategic and economic value: HPC matters and is transforming industries. This article discusses new emerging technologies being developed for all areas of HPC: compute/processing, memory and storage, interconnect fabric, I/O, and software, to address the ongoing challenges in HPC such as balanced architecture, energy-efficient high performance, density, reliability, sustainability, and, last but not least, ease of use. Of specific interest are the challenges and opportunities for the next frontier in HPC, envisioned around the 2020 timeframe: ExaFlops computing. We also outline the new and emerging area of High-Performance Data Analytics (Big Data analytics using HPC) and discuss the emerging delivery mechanism for HPC: HPC in the Cloud.


2019 ◽  
Vol 214 ◽  
pp. 07012 ◽  
Author(s):  
Nikita Balashov ◽  
Maxim Bashashin ◽  
Pavel Goncharov ◽  
Ruslan Kuchumov ◽  
Nikolay Kutovskiy ◽  
...  

Cloud computing has become a routine tool for scientists in many fields. The JINR cloud infrastructure provides JINR users with computational resources for various scientific calculations. To speed up the achievement of scientific results, the JINR cloud service for parallel applications has been developed. It consists of several components and implements a flexible, modular architecture that allows both additional applications and various types of resources to be utilized as computational backends. An example of using the Cloud&HybriLIT resources in scientific computing is the study of superconducting processes in stacked long Josephson junctions (LJJ). LJJ systems have undergone intensive research because of the prospect of practical applications in nano-electronics and quantum computing. In this contribution we generalize the experience of applying the Cloud&HybriLIT resources to high-performance computing of physical characteristics of the LJJ system.
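The physics behind the LJJ study can be illustrated with a toy integrator. The sketch below is a minimal single-junction example, not the JINR code: the actual study solves a system of coupled equations for the stacked junctions, and all parameter values here are illustrative. It integrates a perturbed sine-Gordon equation of the type that models a long Josephson junction.

```python
# Minimal sketch: explicit finite-difference integration of a single
# perturbed sine-Gordon equation,
#     phi_tt = phi_xx - sin(phi) - alpha*phi_t + gamma,
# a standard model for a long Josephson junction (alpha: damping,
# gamma: bias current). Parameters and initial state are illustrative.
import numpy as np

L, N, T = 40.0, 400, 2000          # domain length, grid points, time steps
dx = L / N
dt = 0.5 * dx                      # keeps the explicit scheme stable
alpha, gamma = 0.1, 0.3

x = np.linspace(0, L, N)
phi_prev = 4 * np.arctan(np.exp(x - L / 2))   # a fluxon as initial state
phi = phi_prev.copy()                         # zero initial velocity

for _ in range(T):
    lap = np.empty_like(phi)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]          # crude open (Neumann) boundaries
    vel = (phi - phi_prev) / dt                # backward-difference velocity
    phi_next = 2 * phi - phi_prev + dt**2 * (
        lap - np.sin(phi) - alpha * vel + gamma
    )
    phi_prev, phi = phi, phi_next

print(f"mean phase after {T} steps: {phi.mean():.3f}")
```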


2019 ◽  
Vol 8 (2S8) ◽  
pp. 1532-1535

The quantity of cloud management software related to private infrastructure-as-a-service clouds is increasing day by day. The features of cloud management software vary significantly, and this makes it difficult for cloud customers to choose software based on their business requirements. With the growing number of cloud service providers and the migration of grids to the cloud paradigm, the ability to use these new resources is essential. Moreover, a large class of High Performance Computing (HPC) applications can run on these resources without (or with minor) modifications. In this work we present the architecture of an HPC middleware that can utilize resources originating from an environment composed of multiple clouds as well as classical HPC resources. Using the DIET middleware, we can deploy a large-scale, distributed HPC platform that spans a large pool of resources aggregated from different providers. Finally, we validate the architecture concept through the cosmological simulation code RAMSES.
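The dispatch idea can be sketched schematically. This is not the DIET API; all class and backend names below are hypothetical. A middleware layer aggregates several clouds and a classical HPC cluster into one resource pool and places each task on whichever backend has free capacity.

```python
# Schematic sketch (not the DIET API) of the idea in the abstract: aggregate
# heterogeneous backends -- several clouds plus a classical HPC cluster --
# and dispatch tasks to the backend with the most free capacity.
# All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Backend:
    name: str
    total_slots: int
    used_slots: int = 0

    def free(self) -> int:
        return self.total_slots - self.used_slots


@dataclass
class MultiCloudScheduler:
    backends: list[Backend] = field(default_factory=list)

    def submit(self, task: str, cores: int) -> str:
        # Greedy placement: pick the backend with the most free slots.
        best = max(self.backends, key=Backend.free)
        if best.free() < cores:
            raise RuntimeError(f"no backend can host {task} ({cores} cores)")
        best.used_slots += cores
        return best.name


sched = MultiCloudScheduler([
    Backend("cloud-A", 64), Backend("cloud-B", 128), Backend("hpc-cluster", 512),
])
print(sched.submit("ramses-run-001", cores=256))   # -> hpc-cluster
```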


The use of cloud computing data centers is growing rapidly to meet the large increase in demand for networking, High-Performance Computing (HPC), and storage resources for running business and scientific applications. Virtual Machine (VM) consolidation involves migrating VMs so that fewer physical servers are used; as a result, more servers can be shut down or put into low-power mode, which improves energy efficiency, operating cost, and CO2 emissions. A crucial step in VM consolidation is host overload detection, which attempts to predict whether a physical server is about to be oversubscribed with VMs. In contrast to earlier studies that used CPU utilization as the sole indicator of host overload, a recent study proposed a multiple-correlation host overload detection algorithm that takes several factors into account. This text also introduces a load balancing model for the public cloud based on the concept of cloud partitioning, together with a switch mechanism for choosing different strategies under different scenarios. A single IP address is typically shared by the real servers and the load balancer: the load balancer exposes an interface on that IP address, accepts request packets, and directs them to the selected servers. To improve efficiency in the public cloud environment, the algorithm applies game theory to the load balancing strategy.
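The multiple-correlation idea can be sketched as follows (a minimal illustration, not the paper's algorithm verbatim; the threshold and the synthetic data are invented): regress CPU utilization on the other resource metrics, use the multiple correlation coefficient to gauge how informative they are, and flag the host when the predicted utilization crosses a threshold.

```python
# Minimal sketch of multiple-correlation host overload detection: instead of
# thresholding CPU utilization alone, regress it on other resource metrics
# and combine the multiple correlation coefficient R with the predicted
# utilization to decide whether the host is about to be overloaded.
# Threshold and synthetic data are illustrative, not from the paper.
import numpy as np


def overload_risk(cpu, mem, net, threshold=0.8):
    """Return (R, predicted_cpu, overloaded?) from utilization history."""
    X = np.column_stack([np.ones_like(mem), mem, net])  # design matrix
    beta, *_ = np.linalg.lstsq(X, cpu, rcond=None)      # least-squares fit
    fitted = X @ beta
    # Multiple correlation coefficient: correlation of fitted vs. actual.
    r = np.corrcoef(fitted, cpu)[0, 1]
    predicted = float(fitted[-1])           # naive one-step-ahead estimate
    return r, predicted, predicted > threshold


rng = np.random.default_rng(0)
mem = rng.uniform(0.3, 0.9, 50)
net = rng.uniform(0.1, 0.7, 50)
cpu = 0.5 * mem + 0.4 * net + rng.normal(0, 0.05, 50)
print(overload_risk(cpu, mem, net))
```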

