2018, Vol. 23 (21), pp. 11035-11054
Author(s): Zhao Tong, Hongjian Chen, Xiaomei Deng, Kenli Li, Keqin Li

2012, Vol. 4 (4), pp. 68-88
Author(s): Chao-Tung Yang, Wen-Feng Hsieh

This paper's objective is to implement and evaluate a high-performance computing environment by clustering idle PCs (personal computers) with diskless slave nodes on campus, making the most of otherwise unused computing capacity. Two cluster platforms, BCCD and DRBL, are compared, and in this experiment DRBL delivers better performance than BCCD. DRBL was originally created as a free-software teaching platform. To achieve the stated objective, DRBL is deployed in a computer classroom of 32 PCs so that the machines can be switched manually or automatically between different operating systems. The bioinformatics program mpiBLAST also runs smoothly on this cluster architecture. For management, the state of each computation node in the cluster is monitored with Ganglia, an existing open-source tool, and the authors collect CPU, memory, and network load data for each computation node in every network segment. By comparing aspects of performance, including swap behavior and different network environments, they attempt to identify the best cluster configuration for a school computer classroom. Finally, HPL from the HPCC benchmark suite is used to demonstrate cluster performance.
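
As a rough illustration of how such a job might be launched on the cluster described above, the following Python sketch drives mpiBLAST through mpirun and times the run. The machinefile name, process count, database, and query file are illustrative assumptions, not details taken from the paper.

```python
#!/usr/bin/env python3
"""Minimal sketch: launch an mpiBLAST run across diskless cluster nodes and time it.

Assumptions (not from the paper): a machinefile 'nodes.txt' listing the classroom
PCs, an mpiBLAST-formatted database 'nt', and a query file 'query.fasta' on the
shared filesystem.
"""
import subprocess
import time

NUM_PROCS = 32             # one MPI rank per classroom PC (assumption)
MACHINEFILE = "nodes.txt"  # hypothetical list of computation nodes

cmd = [
    "mpirun", "-np", str(NUM_PROCS), "-machinefile", MACHINEFILE,
    "mpiblast",
    "-p", "blastn",         # nucleotide-nucleotide search
    "-d", "nt",             # database assumed pre-formatted for mpiBLAST
    "-i", "query.fasta",    # input query sequences
    "-o", "results.txt",    # output report
]

start = time.time()
subprocess.run(cmd, check=True)  # raises CalledProcessError if the job fails
print(f"mpiBLAST wall-clock time: {time.time() - start:.1f} s")
```

Wall-clock timings gathered this way, alongside the node-level CPU, memory, and network figures from Ganglia, are the kind of data the comparison between platforms rests on.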


2014, Vol. 631-632, pp. 1053-1056
Author(s): Hui Xia

The paper addresses the problem of limited resources for optimizing the efficiency, reliability, scalability, and security of data in distributed cluster systems holding huge datasets. The experimental results show that the MapReduce tool developed improves data optimization. The system shows poor speedup on smaller datasets, but reasonable speedup is achieved once the dataset is large enough to match the number of computing nodes, reducing execution time by 30% compared with conventional data mining and processing. The MapReduce tool handles data growth well, especially with a larger number of computing nodes, and scaleup remains graceful as the data volume and the number of computing nodes increase. Data is replicated across several nodes of the cluster, which safeguards it at every computing node and makes the system reliable. Our implementation of MapReduce runs on the distributed cluster computing environment of a national education web portal and is highly scalable.
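
To make the speedup behaviour described above concrete, here is a minimal, self-contained Python sketch of the map/shuffle/reduce pattern. The paper's tool runs on a distributed cluster; in this sketch local worker processes stand in for computing nodes, and all function names, chunk sizes, and sample data are illustrative assumptions.

```python
"""Local sketch of the map/reduce pattern: count words across chunks of records.
Worker processes stand in for cluster nodes so the speedup behaviour
(poor on small inputs, near-linear on large ones) can be observed on one machine."""
from collections import Counter
from multiprocessing import Pool

def map_phase(chunk):
    """Map: emit partial word counts for one chunk of records."""
    return Counter(word for record in chunk for word in record.split())

def reduce_phase(partials):
    """Reduce: merge the partial counts produced by all mappers."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

def mapreduce(records, num_workers=4, chunk_size=10_000):
    """Split records into chunks, map them in parallel, then reduce."""
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    with Pool(num_workers) as pool:  # workers stand in for computing nodes
        partials = pool.map(map_phase, chunks)
    return reduce_phase(partials)

if __name__ == "__main__":
    data = ["education web portal access log entry"] * 200_000  # synthetic input
    counts = mapreduce(data, num_workers=8)
    print(counts.most_common(3))
```

On small inputs the startup and coordination cost of the workers dominates, which is why speedup is poor there; once each worker has enough data to keep busy, the parallel map phase pays off, mirroring the dataset-size effect reported in the abstract.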

