Hierarchical Load Balancing Model by Optimal Resource Utilization

2019, Vol 6 (3), pp. 29-42

Author(s):  
Jagdish Chandra Patni

Powerful computational capability and resource availability at low cost are the foremost demands of high performance computing. Computing resources can be viewed as the edges of an interconnected grid, and the capabilities of grid computing can be attained by balancing the load at various levels. Because the resources are heterogeneous and geographically distributed, the grid computing paradigm in its original form cannot meet these requirements, so it must draw on the capabilities of the cloud and other technologies to achieve the goal. Resource heterogeneity makes grid computing more dynamic and challenging. Therefore, this article discusses the problems of scalability, heterogeneity and adaptability in grid computing from the perspective of providing high computing power, load balancing and resource availability.

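As an illustration of balancing load at more than one level, the following is a minimal two-level sketch in Python; the cluster/node hierarchy, the least-loaded selection rule, and all values are illustrative assumptions, not the model proposed in the article.

```python
# Minimal two-level load-balancing sketch (illustrative only, not the
# author's exact model): a task goes to the least-loaded cluster, and
# within that cluster to the least-loaded node.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: float = 0.0          # current utilization (arbitrary units)

@dataclass
class Cluster:
    name: str
    nodes: list = field(default_factory=list)

    def load(self) -> float:
        # Cluster-level load is the mean of its node loads.
        return sum(n.load for n in self.nodes) / len(self.nodes)

def assign(task_cost: float, clusters: list) -> Node:
    # Level 1: pick the least-loaded cluster; level 2: its least-loaded node.
    cluster = min(clusters, key=lambda c: c.load())
    node = min(cluster.nodes, key=lambda n: n.load)
    node.load += task_cost
    return node

if __name__ == "__main__":
    grid = [
        Cluster("site-A", [Node("a1", 0.4), Node("a2", 0.1)]),
        Cluster("site-B", [Node("b1", 0.7), Node("b2", 0.6)]),
    ]
    for cost in (0.2, 0.3, 0.1):
        chosen = assign(cost, grid)
        print(f"task({cost}) -> {chosen.name}, new load {chosen.load:.2f}")
```

In a real hierarchical scheme the cluster-level decision would typically rely on aggregated, periodically refreshed load information rather than exact per-node values.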


2013, Vol 9 (3), pp. 1091-1098
Author(s):  
Sukalyan Goswami  
Ajanta De Sarkar

Grid computing, or the computational grid, has become a vast research field in academia. It is a promising platform that provides resource sharing through multi-institutional virtual organizations for dynamic problem solving. Such platforms are much more cost-effective than traditional high performance computing systems, and because resources can be scaled on demand, grid computing has also become popular in industry. However, a computational grid has constraints and requirements different from those of traditional high performance computing systems. To fully exploit such grid systems, resource management and scheduling are key challenges, and task allocation and load balancing are problems common to most grids because the load on individual grid resources changes dynamically. The objective of this paper is to review existing load balancing algorithms and techniques applicable to grid computing and to propose a layered, service-oriented framework for the computational grid that addresses the prevailing problem of dynamic load balancing.

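To make the dynamic load-balancing problem surveyed here concrete, below is a small sketch of a threshold-based rebalancing step of the kind many reviewed algorithms use; the thresholds and resource names are assumptions for illustration, not the proposed service-oriented framework.

```python
# Greedy, threshold-based rebalancing sketch (illustrative assumptions,
# not the paper's framework): shift load from resources above HIGH to
# resources below LOW.

HIGH, LOW = 0.8, 0.3   # utilization thresholds (assumed values)

def rebalance(loads: dict) -> list:
    """Return a list of (amount, src, dst) transfers, mutating `loads`."""
    transfers = []
    donors = sorted((r for r in loads if loads[r] > HIGH),
                    key=loads.get, reverse=True)
    receivers = sorted((r for r in loads if loads[r] < LOW), key=loads.get)
    for src in donors:
        for dst in receivers:
            if loads[src] <= HIGH:
                break
            # Move just enough load to bring src down to HIGH without
            # pushing dst above LOW.
            amount = min(loads[src] - HIGH, LOW - loads[dst])
            if amount < 1e-9:
                continue
            loads[src] -= amount
            loads[dst] += amount
            transfers.append((round(amount, 2), src, dst))
    return transfers

if __name__ == "__main__":
    grid_loads = {"res1": 0.95, "res2": 0.20, "res3": 0.10, "res4": 0.85}
    for amount, src, dst in rebalance(grid_loads):
        print(f"move {amount} from {src} to {dst}")
    print("loads after:", grid_loads)
```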

Author(s):  
Chun-Yuan Lin  
Jin Ye  
Che-Lun Hung  
Chung-Hung Wang  
Min Su  
...  

Current high-end graphics processing units (GPUs), such as the NVIDIA Tesla, Fermi and Kepler series cards, contain up to thousands of cores per chip and are widely used in high performance computing. These cards (desktop GPUs) must be installed in personal computers or servers alongside desktop CPUs, and the cost and power consumption of building a high performance computing platform from such desktop CPUs and GPUs are high. NVIDIA's Tegra K1 board, the Jetson TK1, contains four ARM Cortex-A15 CPU cores and 192 CUDA cores (a Kepler GPU); its low cost, low power consumption and high applicability for embedded applications make it a new research direction. Hence, in this paper a bioinformatics platform was constructed on the NVIDIA Jetson TK1. The ClustalWtk and MCCtk tools, for sequence alignment and compound comparison respectively, were designed for this platform, and web and mobile services with user-friendly interfaces were provided for both tools. The experimental results show that the cost-performance ratio of the NVIDIA Jetson TK1 is higher than that of an Intel Xeon E5-2650 CPU and an NVIDIA Tesla K20m GPU card.

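The comparison above is expressed as a cost-performance ratio; the sketch below shows one simple way such a ratio can be computed, counting purchase price and energy cost. All prices, power figures and throughput numbers are placeholders, not the paper's measurements.

```python
# Back-of-the-envelope cost-performance comparison; the hardware prices,
# power figures and throughputs below are placeholders, NOT the paper's
# measured values.

def cost_performance(throughput, hw_cost, power_w, hours, usd_per_kwh=0.15):
    """Throughput per dollar, counting purchase price plus energy cost."""
    energy_cost = power_w / 1000.0 * hours * usd_per_kwh
    return throughput / (hw_cost + energy_cost)

platforms = {
    # name: (relative throughput, hardware cost in USD, power draw in W)
    "Jetson TK1":          (1.0,   190.0,  10.0),
    "Xeon E5-2650 + K20m": (6.0,  4500.0, 320.0),
}

for name, (perf, cost, watts) in platforms.items():
    ratio = cost_performance(perf, cost, watts, hours=24 * 365)
    print(f"{name:>20}: {ratio:.5f} units of throughput per dollar")
```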

2010, Vol 1 (1), pp. 40-54
Author(s):  
Carmelo Marcello Iacono-Manno  
Marco Fargetta  
Roberto Barbera  
Alberto Falzone  
Giuseppe Andronico  
...  

Combining High Performance Computing (HPC) and the Grid paradigm with applications based on commercial software is one of the major challenges of today's e-Infrastructures. Several research communities, in both industry and academia, need to run highly parallel applications based on licensed software over hundreds of CPU cores; satisfying such requests is one of the keys to the penetration of this computing paradigm into industry and to the sustainability of Grid infrastructures. This problem was tackled in the context of the PI2S2 project, which created a regional e-Infrastructure in Sicily, the first in Italy to cover a regional area. This article describes the features added to integrate an HPC facility into the PI2S2 Grid infrastructure: the adoption of the low-latency InfiniBand interconnect, the gLite middleware extended to support MPI/MPI2 jobs, the newly developed license server, and the specific scheduling policy adopted. It also presents the results of some relevant use cases from computational fluid dynamics (Fluent, OpenFOAM), chemistry (GAMESS), astrophysics (Flash) and bioinformatics (ClustalW).

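One ingredient mentioned above is a license server coupled to the scheduling policy; the sketch below illustrates the kind of license-aware gating a scheduler can apply before dispatching a job that needs commercial-software tokens. The LicenseServer class and its interface are hypothetical stand-ins, not the PI2S2 implementation.

```python
# Hypothetical license-aware dispatch gate (illustration only; not the
# PI2S2 license server or its scheduling policy).

import time

class LicenseServer:
    """In-memory stand-in for a token pool exposed by a license manager."""
    def __init__(self, total_tokens: int):
        self.total = total_tokens
        self.in_use = 0

    def available(self) -> int:
        return self.total - self.in_use

    def acquire(self, n: int) -> bool:
        if self.available() >= n:
            self.in_use += n
            return True
        return False

    def release(self, n: int) -> None:
        self.in_use = max(0, self.in_use - n)

def submit_when_licensed(server, cores, run_job, poll_s=30, max_wait_s=3600):
    """Hold the job until enough tokens exist (here, one token per core)."""
    waited = 0
    while not server.acquire(cores):
        if waited >= max_wait_s:
            raise TimeoutError("no licenses available within the wait window")
        time.sleep(poll_s)
        waited += poll_s
    try:
        return run_job(cores)
    finally:
        server.release(cores)

if __name__ == "__main__":
    pool = LicenseServer(total_tokens=64)
    result = submit_when_licensed(pool, cores=16,
                                  run_job=lambda c: f"ran MPI job on {c} cores")
    print(result)
```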

Author(s):  
Dazhong Wu ◽  
Xi Liu ◽  
Steve Hebert ◽  
Wolfgang Gentzsch ◽  
Janis Terpenny

Cloud computing is an innovative computing paradigm that can potentially bridge the gap between the increasing computing demands of computer aided engineering (CAE) applications and the limited scalability, flexibility, and agility of traditional computing paradigms. In light of these benefits, high performance computing (HPC) in the cloud can enable users not only to accelerate computationally expensive CAE simulations (e.g., finite element analysis), but also to reduce costs by using on-demand, scalable cloud computing resources. The objective of this research is to evaluate the performance of running a large finite element simulation in a public cloud. Specifically, an experiment is performed to identify the individual and interactive effects of several factors (e.g., CPU core count, memory size, solver computational rate, and input/output rate) on run time using statistical methods. The experimental results show that the performance of HPC in the cloud is sufficient for a large finite element analysis, and that run time can be optimized by properly selecting a configuration of CPU, memory, and interconnect.

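The study described above estimates individual and interactive effects on run time; a minimal sketch of how such effects can be estimated from a small factorial experiment is shown below. The coded design and the run-time values are made up for illustration, not the paper's data.

```python
# Minimal sketch of estimating main and interaction effects on run time
# from a 2x2 factorial experiment; the run times are made-up numbers,
# not the paper's measurements.

import numpy as np

# Coded factor levels: -1 = low, +1 = high; columns: cores, memory
design = np.array([
    [-1, -1],
    [+1, -1],
    [-1, +1],
    [+1, +1],
], dtype=float)

run_time = np.array([820.0, 430.0, 760.0, 350.0])   # seconds (illustrative)

# Model: time = b0 + b1*cores + b2*memory + b12*cores*memory
X = np.column_stack([
    np.ones(len(design)),        # intercept
    design[:, 0],                # cores main effect
    design[:, 1],                # memory main effect
    design[:, 0] * design[:, 1]  # cores x memory interaction
])

coef, *_ = np.linalg.lstsq(X, run_time, rcond=None)
for name, b in zip(["intercept", "cores", "memory", "cores:memory"], coef):
    print(f"{name:>13}: {b:8.1f} s")
```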
