The parallelization of the Keller box method on heterogeneous cluster of workstations

Author(s):  
Norhafiza Hamzah
Norma Alias
Norsarahaida S. Amin

High-performance computing is the branch of parallel computing that deals with very large problems and with large parallel computers capable of solving those problems in a reasonable amount of time. This paper describes the parallelization of the Keller-box method on a heterogeneous cluster of workstations. The problem considered is the boundary-layer flow due to a moving flat plate. The objective is to develop a parallel Keller-box algorithm for solving large matrix systems. The parallelization is based on domain decomposition, in which the lower and upper matrices are split into a number of blocks that are then computed concurrently on the parallel machines. The experiments were run with matrices of size 200, 2000, and 20000 using 10 processors. The results for the various matrix sizes were compared using performance measures of execution time, speedup, and effectiveness.
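The abstract does not spell out the decomposition in detail, so the following is only a minimal sketch, assuming the simplest case in which the row blocks of the lower/upper factors can be processed independently (the actual Keller-box sweeps couple neighbouring blocks and would need boundary exchanges between processors). All function names and sizes here are illustrative; speedup is taken as S_p = T_1/T_p, and one common definition of effectiveness, S_p/(p*T_p), is shown in a comment since the abstract does not state the definition used.

    # Minimal sketch only: row blocks of the lower factor are assumed independent
    # (the real Keller-box sweeps couple neighbouring blocks).  Names and sizes
    # are illustrative, not taken from the paper.
    import time
    import numpy as np
    from multiprocessing import Pool

    def solve_block(args):
        """Solve one independent lower-triangular block L_k x_k = b_k."""
        L_block, b_block = args
        return np.linalg.solve(L_block, b_block)

    def parallel_block_solve(L, b, n_procs=10):
        """Split L (assumed block-diagonal here) and b into n_procs row blocks."""
        n = L.shape[0]
        bounds = np.linspace(0, n, n_procs + 1, dtype=int)
        tasks = [(L[s:e, s:e], b[s:e]) for s, e in zip(bounds[:-1], bounds[1:])]
        with Pool(n_procs) as pool:
            parts = pool.map(solve_block, tasks)
        return np.concatenate(parts)

    if __name__ == "__main__":
        n, p = 2000, 10                        # one of the matrix sizes quoted above
        L = np.tril(np.random.rand(n, n)) + n * np.eye(n)
        b = np.random.rand(n)
        t0 = time.perf_counter(); np.linalg.solve(L, b); t_serial = time.perf_counter() - t0
        t0 = time.perf_counter(); parallel_block_solve(L, b, p); t_par = time.perf_counter() - t0
        speedup = t_serial / t_par             # S_p = T_1 / T_p
        effectiveness = speedup / (p * t_par)  # one common definition; the paper may use another
        print(f"T1={t_serial:.4f}s  Tp={t_par:.4f}s  S_p={speedup:.2f}  eff={effectiveness:.2f}")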

Author(s):  
S.Y. Lapshina

This article investigates the optimum number of processor cores for running the Parallel Cluster Multiple Labeling Technique on the modern supercomputer systems installed at the JSCC RAS. Because its input format is independent of the application, the technique can be used in any field as a tool for identifying large lattice clusters. At the JSCC RAS, the tool was applied to the study of epidemic spread, for which an appropriate multi-agent model was developed. In the course of the simulation experiments, a variant of the Parallel Cluster Multiple Labeling Technique for Hoshen-Kopelman percolation clusters, based on a label-linking mechanism (and likewise applicable in any area as a tool for distinguishing large lattice clusters), was improved for use on a multiprocessor system.
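For reference, the serial Hoshen-Kopelman labeling that the parallel multiple-labeling technique builds on can be sketched as follows. This is a minimal illustration (site percolation on a 2D grid, 4-connectivity, union-find over provisional labels); the cross-subdomain label relinking that the parallel variant adds is deliberately omitted.

    # Minimal serial Hoshen-Kopelman labeling of a 2D occupancy grid.
    # The parallel multiple-labeling variant discussed above additionally
    # relinks labels across per-processor subdomain boundaries (omitted here).
    import numpy as np

    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    def hoshen_kopelman(grid):
        """Label occupied sites (grid == 1) with 4-connectivity; returns label array."""
        labels = np.zeros_like(grid, dtype=int)
        parent = [0]                           # parent[0] unused
        next_label = 1
        rows, cols = grid.shape
        for i in range(rows):
            for j in range(cols):
                if not grid[i, j]:
                    continue
                up = labels[i - 1, j] if i > 0 else 0
                left = labels[i, j - 1] if j > 0 else 0
                if up == 0 and left == 0:      # new cluster
                    parent.append(next_label)
                    labels[i, j] = next_label
                    next_label += 1
                elif up and left:              # merge the two clusters
                    ru, rl = find(parent, up), find(parent, left)
                    parent[max(ru, rl)] = min(ru, rl)
                    labels[i, j] = min(ru, rl)
                else:                          # extend the single neighbouring cluster
                    labels[i, j] = find(parent, up or left)
        # second pass: flatten every label to its root
        for i in range(rows):
            for j in range(cols):
                if labels[i, j]:
                    labels[i, j] = find(parent, labels[i, j])
        return labels

    if __name__ == "__main__":
        lattice = (np.random.rand(8, 8) < 0.55).astype(int)   # site occupancy p = 0.55
        print(hoshen_kopelman(lattice))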


2016
Vol 25 (3)
pp. 276-286
Author(s):  
Nirmal Kaur
Savina Bansal
Rakesh Kumar Bansal

Efficient scheduling of concurrent tasks is one of the primary requirements for high-performance computing platforms. Recent advances in high-performance computing have brought widespread performance improvements, though at the cost of increased energy consumption and demand on other system resources. In this article, an energy-conscious scheduling algorithm with controlled threshold has been developed for precedence-constrained tasks on heterogeneous clusters, which aims at a lower makespan along with reduced energy consumption. The algorithm combines the benefits of dynamic voltage scaling with a controlled, threshold-based duplication strategy to achieve these objectives. Its effectiveness is analyzed in comparison with existing duplication- and non-duplication-based scheduling algorithms (with and without dynamic voltage scaling) to ascertain its performance and energy consumption. Exhaustive simulation results on random and real-world task graphs demonstrate that the proposed algorithm has the potential to reduce both energy consumption and makespan.
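The abstract describes the scheme only at a high level, so the sketch below illustrates just the underlying dynamic-voltage-scaling idea under a common cubic power model (P proportional to f^3, hence energy proportional to f^2 times work): a task with schedule slack is run at a lower frequency as long as its stretched run time, plus a threshold margin, still fits. The frequency levels, power model, and threshold rule are assumptions made for illustration, not the paper's energy-conscious scheduling with controlled threshold algorithm.

    # Illustrative DVS slack reclamation under a cubic dynamic-power model
    # (P ~ f**3, so energy ~ f**2 * work, normalised to f_max = 1.0).
    # Frequency levels and the threshold rule are assumptions for this sketch.

    F_LEVELS = [1.0, 0.8, 0.6, 0.4]            # hypothetical normalised frequency levels

    def scale_task(work, slack, threshold=0.1):
        """Pick the lowest frequency whose stretched run time still fits the slack.

        work   : execution time of the task at full frequency (f = 1.0)
        slack  : extra time available before the task's successors need its result
        returns: (chosen frequency, run time, energy relative to full speed)
        """
        best_f = 1.0
        for f in sorted(F_LEVELS):             # try the slowest (most saving) level first
            if work / f <= work + slack - threshold:
                best_f = f
                break
        runtime = work / best_f
        energy = best_f ** 2 * work            # E = P*t ~ f**3 * (work / f)
        return best_f, runtime, energy

    if __name__ == "__main__":
        # A task of 10 time units with 8 units of schedule slack:
        f, t, e = scale_task(work=10.0, slack=8.0)
        print(f"f={f:.1f}  runtime={t:.1f}  energy={e:.1f} (vs 10.0 at full speed)")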


MRS Bulletin
1997
Vol 22 (10)
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

