A Manifestation of Cloud Computing Environments for Application Clusters, High-Performance Clusters and Allocation Strategies for Virtual Machines

Author(s):  
Kranthi Kumar. K ◽  
R. Rindha Reddy ◽  
Kurumaddali Sushmitha

Cloud Computing (CC) is the evolution of the Grid Computing (GC) paradigm toward service-oriented architectures. The terminology associated with this kind of processing, when describing shared resources, refers to the notion of "X as a Service". Such resources are available on demand and at a significantly lower cost compared with self-provisioning the individual components. CC is found everywhere today: from large-scale organizations to small businesses, everyone is equipping themselves with the cloud because of its simplicity, its support for monitoring and maintenance over remote connections, and its wide-area coverage. A cloud offering can be Software as a Service, Platform as a Service, or Infrastructure as a Service, depending on its use. High Performance Computing (HIPECO) refers to the aggregation of computational capacity in order to increase the ability to handle large problems in science, engineering, and business. HIPECO on the cloud allows on-demand HIPECO tasks to be performed by high-performance clusters in a cloud environment. Currently, CC offerings (e.g., Microsoft Azure, Amazon EC2) enable users to make use of only the fundamental storage and computational utilities; they do not allow custom adjustment of the topology designs or parameters of the system. The interconnection structures of the nodes in HIPECO clusters should provide fast inter-node communication, and it is vital that scalability is preserved as well. In Infrastructure as a Service, virtualization effectively maps virtual machines onto physical machines. Although it is a difficult undertaking, it is essential that the hypervisor selects an appropriate host to serve each incoming virtual machine. In this paper, our main aim is to examine different techniques and types of cluster topology mapping and their requirements in various cloud scenarios, in order to achieve higher reliability and scalability of applications executed within Cloud resources (CR), HIPECO resource allocation (RA) on cloud clusters, and cluster-based allocation procedures.
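The final point above, choosing an appropriate physical host for each incoming virtual machine, is essentially a bin-packing problem. As a purely illustrative sketch (not a method from the paper), a first-fit decreasing heuristic over assumed `Host` and `VM` records shows the flavour of such a cluster-based allocation procedure.

```python
# Illustrative VM-to-host allocation sketch: first-fit decreasing over CPU and
# memory capacity. The Host/VM structures and all figures are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class VM:
    name: str
    cpus: int       # requested virtual CPUs
    mem_gb: float   # requested memory in GiB

@dataclass
class Host:
    name: str
    cpus: int
    mem_gb: float
    placed: List[VM] = field(default_factory=list)

    def free_cpus(self) -> int:
        return self.cpus - sum(v.cpus for v in self.placed)

    def free_mem(self) -> float:
        return self.mem_gb - sum(v.mem_gb for v in self.placed)

def first_fit_decreasing(vms: List[VM], hosts: List[Host]) -> List[Tuple[VM, Optional[Host]]]:
    """Place the largest VMs first; each goes to the first host that still fits it."""
    placements = []
    for vm in sorted(vms, key=lambda v: (v.cpus, v.mem_gb), reverse=True):
        target = next((h for h in hosts
                       if h.free_cpus() >= vm.cpus and h.free_mem() >= vm.mem_gb), None)
        if target is not None:
            target.placed.append(vm)
        placements.append((vm, target))
    return placements

if __name__ == "__main__":
    hosts = [Host("host-a", cpus=16, mem_gb=64), Host("host-b", cpus=16, mem_gb=64)]
    vms = [VM("mpi-0", 8, 32.0), VM("mpi-1", 8, 32.0), VM("web", 2, 4.0)]
    for vm, host in first_fit_decreasing(vms, hosts):
        print(f"{vm.name} -> {host.name if host else 'unplaced'}")
```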

2016 ◽  
Vol 31 (6) ◽  
pp. 1985-1996 ◽  
Author(s):  
David Siuta ◽  
Gregory West ◽  
Henryk Modzelewski ◽  
Roland Schigas ◽  
Roland Stull

Abstract As cloud-service providers like Google, Amazon, and Microsoft decrease costs and increase performance, numerical weather prediction (NWP) in the cloud will become a reality not only for research use but for real-time use as well. The performance of the Weather Research and Forecasting (WRF) Model on the Google Cloud Platform is tested and configurations and optimizations of virtual machines that meet two main requirements of real-time NWP are found: 1) fast forecast completion (timeliness) and 2) economic cost effectiveness when compared with traditional on-premise high-performance computing hardware. Optimum performance was found by using the Intel compiler collection with no more than eight virtual CPUs per virtual machine. Using these configurations, real-time NWP on the Google Cloud Platform is found to be economically competitive when compared with the purchase of local high-performance computing hardware for NWP needs. Cloud-computing services are becoming viable alternatives to on-premise compute clusters for some applications.
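The economic comparison described above boils down to simple arithmetic once per-hour cloud pricing, forecast runtimes, and amortized hardware costs are known. The sketch below is illustrative only; every price, runtime, and lifetime figure is a placeholder rather than a number from the study.

```python
# Illustrative cost comparison for real-time NWP: cloud rental vs. on-premise
# hardware. All prices, runtimes, and lifetimes are hypothetical placeholders.

def cloud_cost_per_year(vms: int, price_per_vm_hour: float,
                        hours_per_forecast: float, forecasts_per_day: int) -> float:
    """Pay only while the forecast cluster is running."""
    daily = vms * price_per_vm_hour * hours_per_forecast * forecasts_per_day
    return daily * 365

def onprem_cost_per_year(hardware_cost: float, lifetime_years: float,
                         power_and_admin_per_year: float) -> float:
    """Amortize the purchase over its lifetime and add running costs."""
    return hardware_cost / lifetime_years + power_and_admin_per_year

if __name__ == "__main__":
    cloud = cloud_cost_per_year(vms=8, price_per_vm_hour=0.40,
                                hours_per_forecast=1.5, forecasts_per_day=4)
    onprem = onprem_cost_per_year(hardware_cost=60_000, lifetime_years=4,
                                  power_and_admin_per_year=8_000)
    print(f"cloud: ${cloud:,.0f}/yr  on-prem: ${onprem:,.0f}/yr")
```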


Author(s):  
Naweiluo Zhou ◽  
Yiannis Georgiou ◽  
Marcin Pospieszny ◽  
Li Zhong ◽  
Huan Zhou ◽  
...  

Abstract Containerisation demonstrates its efficiency in application deployment in Cloud Computing. Containers can encapsulate complex programs with their dependencies in isolated environments, making applications more portable; hence they are being adopted in High Performance Computing (HPC) clusters. Singularity, initially designed for HPC systems, has become their de facto standard container runtime. Nevertheless, conventional HPC workload managers lack micro-service support and deeply integrated container management, as opposed to container orchestrators. We introduce a Torque-Operator which serves as a bridge between an HPC workload manager (TORQUE) and a container orchestrator (Kubernetes). We propose a hybrid architecture that integrates HPC and Cloud clusters seamlessly, with little interference to HPC systems, in which container orchestration is performed on two levels.
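As a concrete illustration of the two-level idea (not the Torque-Operator's actual interface), the sketch below builds a standard Kubernetes Job manifest whose container simply invokes `singularity exec` on an HPC image; a bridging component could translate such an object into a TORQUE batch job. The image names and resource figures are assumptions.

```python
# Illustrative only: a Kubernetes Job manifest whose payload is a Singularity
# container run, expressed as a plain Python dict. A bridging component such
# as a Torque-Operator could translate an object like this into a TORQUE job.
import json
from typing import List

def containerised_hpc_job(name: str, image_path: str, command: List[str],
                          cpus: int, mem_gb: int) -> dict:
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": name,
                        # Hypothetical helper image that ships the Singularity runtime.
                        "image": "singularity-runner:latest",
                        "command": ["singularity", "exec", image_path, *command],
                        "resources": {
                            "requests": {"cpu": str(cpus), "memory": f"{mem_gb}Gi"},
                        },
                    }],
                }
            }
        },
    }

if __name__ == "__main__":
    job = containerised_hpc_job("hpc-solver-run", "/images/hpc_app.sif",
                                ["mpirun", "-np", "4", "./solver"], cpus=4, mem_gb=16)
    print(json.dumps(job, indent=2))
```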


2012 ◽  
Vol 10 (H16) ◽  
pp. 679-680
Author(s):  
Christopher J. Fluke

Abstract As we move ever closer to the Square Kilometre Array era, support for real-time, interactive visualisation and analysis of tera-scale (and beyond) data cubes will be crucial for on-going knowledge discovery. However, the data-on-the-desktop approach to analysis and visualisation that most astronomers are comfortable with will no longer be feasible: tera-scale data volumes exceed the memory and processing capabilities of standard desktop computing environments. Instead, there will be an increasing need for astronomers to utilise remote high performance computing (HPC) resources. In recent years, the graphics processing unit (GPU) has emerged as a credible, low cost option for HPC. A growing number of supercomputing centres are now investing heavily in GPU technologies to provide O(100) Teraflop/s processing. I describe how a GPU-powered computing cluster allows us to overcome the analysis and visualisation challenges of tera-scale data. With a GPU-based architecture, we have moved the bottleneck from processing-limited to bandwidth-limited, achieving exceptional real-time performance for common visualisation and data analysis tasks.
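The processing-limited versus bandwidth-limited distinction can be reasoned about with a roofline-style estimate: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. The sketch below uses assumed, illustrative hardware figures and a placeholder kernel intensity, not measurements from the cluster described here.

```python
# Roofline-style back-of-envelope check: is a kernel bound by compute or by
# memory bandwidth? All hardware numbers are assumed, illustrative values.

def attainable_gflops(peak_gflops: float, bandwidth_gb_s: float,
                      arithmetic_intensity: float) -> float:
    """Arithmetic intensity = floating-point operations per byte moved."""
    return min(peak_gflops, bandwidth_gb_s * arithmetic_intensity)

if __name__ == "__main__":
    peak = 4_000.0        # assumed GPU peak, GFLOP/s
    bandwidth = 300.0     # assumed device memory bandwidth, GB/s
    # A simple per-voxel operation on a data cube might do ~1 flop per 4-byte value.
    intensity = 0.25      # flops per byte (placeholder)
    perf = attainable_gflops(peak, bandwidth, intensity)
    bound = "bandwidth" if perf < peak else "compute"
    print(f"attainable ~{perf:.0f} GFLOP/s -> {bound}-limited")
```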


Author(s):  
Umar Ibrahim Minhas ◽  
Roger Woods ◽  
Georgios Karakonstantis

Abstract Whilst FPGAs have been used in cloud ecosystems, it is still extremely challenging to achieve high compute density when mapping heterogeneous multi-tasks on shared resources at runtime. This work addresses this by treating the FPGA resource as a service and employing multi-task processing at the high level, design space exploration and static off-line partitioning in order to allow more efficient mapping of heterogeneous tasks onto the FPGA. In addition, a new, comprehensive runtime functional simulator is used to evaluate the effect of various spatial and temporal constraints on both the existing and new approaches when varying system design parameters. A comprehensive suite of real high performance computing tasks was implemented on a Nallatech 385 FPGA card; the results show that our approach can provide on average 2.9× and 2.3× higher system throughput for compute and mixed intensity tasks, respectively, while being 0.2× lower for memory-intensive tasks due to external memory access latency and bandwidth limitations. The work has been extended by introducing a novel scheduling scheme to enhance temporal utilization of resources when using the proposed approach. Additional results for large queues of mixed intensity tasks (compute and memory) show that the proposed partitioning and scheduling approach can provide higher than 3× system speedup over previous schemes.
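As a rough illustration (not the paper's partitioning or scheduling algorithm), the sketch below drains a queue of tasks onto a statically partitioned accelerator with a fixed number of slots, holding back memory-intensive tasks whenever a bandwidth cap would be exceeded; all task attributes and slot counts are assumptions.

```python
# Illustrative greedy scheduler for a statically partitioned accelerator: at
# most `slots` tasks run concurrently, of which at most `mem_slots` may be
# memory-intensive, to avoid oversubscribing external memory bandwidth.
from collections import deque
from typing import Deque, List, Tuple

Task = Tuple[str, str, int]   # (name, kind, duration); kind is "compute" or "memory"

def schedule(tasks: List[Task], slots: int = 4, mem_slots: int = 1) -> List[Tuple[int, str]]:
    """Return (start_time, task_name) pairs for a simple discrete-time schedule."""
    queue: Deque[Task] = deque(tasks)
    running: List[Tuple[str, str, int]] = []   # (name, kind, finish_time)
    order: List[Tuple[int, str]] = []
    time = 0
    while queue or running:
        deferred: Deque[Task] = deque()
        # Launch queued tasks while free slots remain and the memory cap allows.
        while queue and len(running) < slots:
            name, kind, dur = queue.popleft()
            mem_running = sum(1 for _, k, _ in running if k == "memory")
            if kind == "memory" and mem_running >= mem_slots:
                deferred.append((name, kind, dur))   # hold back to protect bandwidth
                continue
            running.append((name, kind, time + dur))
            order.append((time, name))
        queue.extendleft(reversed(deferred))         # keep original FIFO order
        # Advance to the next completion and retire finished tasks.
        time = min(end for _, _, end in running)
        running = [t for t in running if t[2] > time]
    return order

if __name__ == "__main__":
    work = [("fft-0", "compute", 5), ("stream-0", "memory", 8),
            ("fft-1", "compute", 5), ("stream-1", "memory", 8),
            ("mixed-0", "compute", 3)]
    for start, name in schedule(work):
        print(f"t={start:>2}  start {name}")
```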

