cluster computer
Recently Published Documents


TOTAL DOCUMENTS

39
(FIVE YEARS 2)

H-INDEX

4
(FIVE YEARS 0)

2021 ◽  
Vol 1 (1) ◽  
pp. 46-56
Author(s):  
S. M. Babchuk ◽  
Т. V. Humeniuk ◽  
I. T. Romaniv

Context. High-performance computing systems are needed to solve many scientific problems and to work on complex applied problems. Previously, true parallel data processing was supported only by supercomputers, which are scarce and difficult to access. One current way to address this problem is to build small, inexpensive clusters from Raspberry Pi single-board computers. Objective. The goal of this work is to create a complex criterion for the efficiency of a cluster system that properly characterizes the operation of such a system, and to find how the performance of a cluster based on the Raspberry Pi 3B+ depends on the number of boards in it under different cooling systems. Method. We propose to analyze small cluster computer systems using a complex efficiency criterion that takes into account the overall performance of the cluster computer system, the performance of a single computing element in it, the electricity consumption of the cluster system, the electricity consumption per computing element, the cost of computing 1 Gflops, and the total cost of the cluster computer system. Results. The developed complex efficiency criterion was used to create an experimental cluster system based on Raspberry Pi 3B+ single-board computers. Mathematical models of the dependence of the performance of such a small cluster system on the number of boards under different cooling systems were also developed. Conclusions. The experiments confirmed the expediency of the developed complex efficiency criterion for cluster systems and allow us to recommend it for practical use when creating small cluster systems. Prospects for further research include determining the weights of the constituent elements of the complex efficiency criterion, as well as studying the proposed weights experimentally.
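The abstract names six factors that enter the complex criterion. A minimal sketch of how such a composite score could be assembled is below; the equal default weights, the reciprocal treatment of consumption/cost terms, and all numeric values are illustrative assumptions, not the authors' published formula.

```python
# Sketch of a composite efficiency criterion for a small cluster,
# combining the six factors named in the abstract. Weights and the
# normalisation scheme are assumptions for illustration only.

def cluster_efficiency(gflops_total, gflops_per_node, watts_total,
                       watts_per_node, cost_per_gflops, cost_total,
                       weights=(1, 1, 1, 1, 1, 1)):
    """Higher is better: performance terms enter directly, while
    consumption and cost terms enter as reciprocals."""
    factors = (gflops_total, gflops_per_node,
               1.0 / watts_total, 1.0 / watts_per_node,
               1.0 / cost_per_gflops, 1.0 / cost_total)
    return sum(w * f for w, f in zip(weights, factors))

# Two hypothetical Raspberry Pi 3B+ configurations (made-up numbers):
small = cluster_efficiency(5.2, 1.3, 22.0, 5.5, 40.0, 210.0)   # 4 boards
large = cluster_efficiency(9.8, 1.2, 45.0, 5.6, 42.0, 420.0)   # 8 boards
```

Determining realistic weights for the six terms is exactly the open question the authors flag for further research.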


2020 ◽  
Vol 13 (2) ◽  
pp. 1-9
Author(s):  
Farid Jatri Abiyyu ◽  
Ibnu Ziad ◽  
Ade Silvia Handayani

A diskless server is a cluster computer network that uses the SSH (Secure Shell) protocol to grant a client access to the host's directory and modify its contents, so the client does not need a hard disk (thin client). One way to design a diskless server is to use the Linux Terminal Server Project (LTSP), an open-source script collection for Linux. However, using Linux has its own drawback: it cannot natively run the Windows-based applications that are commonly used. This drawback can be overcome with a compatibility layer that allows Windows-based applications to run on Linux. The data monitored are the result of the compatibility-layer implementation together with throughput, packet loss, delay, and jitter. Measurement of these four parameters yields "Excellent" for throughput, "Perfect" for packet loss and delay, and "Good" for jitter.
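The category labels quoted in the abstract ("Perfect", "Good", and so on) suggest a banded QoS classification of the measured parameters. A rough sketch of such a classifier is below; the thresholds follow the commonly cited TIPHON bands and are assumptions here, since the paper's exact thresholds are not given in this abstract.

```python
# Hypothetical banded classification of measured QoS parameters,
# using commonly cited TIPHON-style thresholds (assumed values).

def classify_packet_loss(pct):
    if pct < 3:   return "Perfect"
    if pct < 15:  return "Good"
    if pct < 25:  return "Medium"
    return "Poor"

def classify_delay(ms):
    if ms < 150:  return "Perfect"
    if ms < 300:  return "Good"
    if ms < 450:  return "Medium"
    return "Poor"

def classify_jitter(ms):
    if ms < 1:    return "Perfect"
    if ms < 75:   return "Good"
    if ms < 125:  return "Medium"
    return "Poor"
```

With bands like these, a measured delay of 100 ms would classify as "Perfect" and a jitter of 30 ms as "Good", matching the kind of verdicts reported in the abstract.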


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Bing He ◽  
Long Tang ◽  
Jiang Xie ◽  
XiaoWei Wang ◽  
AnPing Song

Parallel computation can enhance the performance of numerical simulation of electromagnetic radiation and greatly reduce runtime. We simulate the electromagnetic radiation calculation on clusters with a multicore-CPU and GPU parallel architecture, using MPI-OpenMP and MPI-CUDA hybrid parallel algorithms. This is an effective alternative to the traditional finite-difference time-domain method, which falls short when the electromagnetic radiation calculation demands large amounts of data space and time. Moreover, we use regional segmentation, subregional data communication, consolidation, and other methods to improve the nested parallelism of the procedures, and finally verify the correctness of the calculated results. Running these two hybrid parallel models on a high-performance cluster computer, we conclude that both models are suitable for large-scale numerical calculation and that the MPI-CUDA hybrid model achieves the higher speedup.
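The "regional segmentation" and "subregional data communication" steps amount to domain decomposition with ghost-cell exchange at subregion boundaries. The sketch below (not the authors' code) illustrates the idea on a 1-D field with a 3-point stencil, checking that the stitched-together subregion results match the serial computation; in the real MPI setting each subregion would live on a different rank.

```python
# Illustrative 1-D domain decomposition with ghost-cell exchange.
# Each subregion receives one ghost value from each neighbour (the
# "subregional data communication"), applies the stencil locally,
# and the concatenated result matches the serial computation.

def stencil(u):
    # simple 3-point average over the interior points
    return [(u[i - 1] + u[i] + u[i + 1]) / 3.0 for i in range(1, len(u) - 1)]

def decomposed_step(u, parts):
    n = len(u)
    size = n // parts
    out = []
    for p in range(parts):
        lo = p * size
        hi = (p + 1) * size if p < parts - 1 else n
        # ghost cells mimic the boundary exchange between subregions
        left = u[lo - 1] if lo > 0 else u[lo]
        right = u[hi] if hi < n else u[hi - 1]
        local = [left] + u[lo:hi] + [right]
        out.extend(stencil(local))
    return out

field = [float(i % 7) for i in range(32)]
serial = stencil([field[0]] + field + [field[-1]])
parallel = decomposed_step(field, 4)
```

Because each subregion sees exactly the neighbour values the serial stencil would use, `parallel` and `serial` agree elementwise, which is the correctness check the abstract describes.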


Author(s):  
J. Tindle ◽  
M. Gray ◽  
R.L. Warrender ◽  
K. Ginty ◽  
P.K.D. Dawson

This chapter describes the performance of a compute cluster applied to solving three-dimensional (3D) molecular modelling problems. The primary goal of this work is to identify new potential drugs. The chapter focuses upon the following issues: computational chemistry, computational efficiency, task scheduling, and the analysis of system performance. The design philosophy of an Application Framework for Computational Chemistry (AFCC) is described. Eighteen months after the release of the original chapter, the authors examine a series of adopted changes that have led to improved system performance. Various experiments carried out to optimise the performance of the cluster computer are described; the results are analysed and the resulting statistics discussed in the chapter.
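Task scheduling of many independent modelling jobs across cluster nodes is one of the chapter's named concerns. A minimal sketch of one standard approach, greedy longest-processing-time (LPT) assignment to the least-loaded node, is below; the job durations and node count are made up, and the chapter's actual scheduler may differ.

```python
# Greedy LPT scheduling sketch: sort jobs by estimated duration
# (longest first) and always assign the next job to the node with
# the smallest accumulated load, kept in a min-heap.

import heapq

def schedule(durations, n_nodes):
    """Return the sorted per-node loads after greedy LPT assignment."""
    loads = [(0.0, node) for node in range(n_nodes)]
    heapq.heapify(loads)
    for d in sorted(durations, reverse=True):
        load, node = heapq.heappop(loads)   # least-loaded node
        heapq.heappush(loads, (load + d, node))
    return sorted(load for load, _ in loads)

jobs = [90, 60, 60, 45, 30, 30, 15]   # hypothetical job minutes
loads = schedule(jobs, 3)
```

LPT is a simple heuristic with a well-known worst-case bound (at most 4/3 of the optimal makespan), which makes it a reasonable baseline when per-job runtimes can be estimated in advance.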


2013 ◽  
Vol 53 (A) ◽  
pp. 829-831
Author(s):  
Memmo Federici ◽  
Bruno L. Martino

Simulation of the interactions between particles and matter in studies for developing X-ray detectors generally requires very long calculation times (up to several days or weeks). These times are often a serious limitation for the success of the simulations and for the accuracy of the simulated models. One of the tools used by the scientific community to perform these simulations is Geant4 (Geometry And Tracking) [2, 3]. Building on experience from the design of the AVES cluster computing system, Federici et al. [1], the IAPS (Istituto di Astrofisica e Planetologia Spaziali, INAF) laboratories were able to develop a cluster computer system dedicated to Geant4. The cluster is easy to use and easily expandable, and thanks to the design criteria adopted it achieves an excellent compromise between performance and cost. The management software developed for the cluster splits a single simulation instance across the available cores, allowing software written for serial computation to reach a computing speed similar to that obtainable from natively parallel software. The simulations carried out on the cluster showed a reduction in execution time by a factor of 20 to 60 compared to the times obtained with a single PC of medium quality.
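Splitting one serial simulation across cores typically means partitioning the event count into independent contiguous chunks, running each chunk as a separate serial process, and merging the tallies afterwards. The sketch below shows only the partitioning step; it is an assumed illustration of the idea, not the IAPS management software.

```python
# Partition n_events into one contiguous (start, count) chunk per
# core; any remainder is spread over the first cores. Each chunk can
# then be dispatched as an independent serial simulation process.

def split_events(n_events, n_cores):
    size, rem = divmod(n_events, n_cores)
    chunks, start = [], 0
    for core in range(n_cores):
        count = size + (1 if core < rem else 0)  # spread the remainder
        chunks.append((start, count))
        start += count
    return chunks

# e.g. one million events across 16 cores of a hypothetical node
chunks = split_events(1_000_000, 16)
```

Because the chunks are disjoint and cover every event exactly once, merging the per-chunk results reproduces the serial answer while the wall-clock time drops roughly with the core count, which is the effect the abstract reports.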


2013 ◽  
Vol 141 (9) ◽  
pp. 3052-3061 ◽  
Author(s):  
Hoon Park ◽  
Song-You Hong ◽  
Hyeong-Bin Cheong ◽  
Myung-Seo Koo

Abstract This study describes an application of the double Fourier series (DFS) spectral method developed by Cheong as an alternative dynamical option in the Global/Regional Integrated Model System (GRIMs). A message passing interface (MPI) strategy for a massively parallel cluster computer, devised for the DFS dynamical core, is also presented. The new dynamical core with full physics was evaluated against a conventional spherical harmonics (SPH) dynamical core in terms of short-range forecast capability for a heavy rainfall event and in a seasonal simulation framework. Comparison of the two dynamical cores demonstrates that the new DFS dynamical core exhibits performance comparable to the SPH core in terms of simulated climatology accuracy and the forecast of a heavy rainfall event. Most importantly, the DFS algorithm guarantees improved computational efficiency on the cluster computer as the model resolution increases, consistent with theoretical values computed from a dry primitive-equation model framework. The current study shows that, at higher resolutions, the DFS approach can be a competitive dynamical core because it combines the high numerical accuracy of the spectral method with the computational efficiency of the gridpoint method.
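The efficiency claim rests on the asymptotic cost of the transforms: the SPH Legendre transform scales roughly as O(N^3) with resolution, while the FFT-based DFS transform scales as O(N^2 log N), so the advantage grows with resolution. A back-of-envelope sketch of that scaling is below; the constants are arbitrary and only the exponents are taken from the standard analysis.

```python
# Compare the asymptotic transform costs of the SPH and DFS cores.
# Only the scaling exponents are meaningful; prefactors are dropped.

import math

def sph_cost(n):
    return n ** 3                  # Legendre transform, O(N^3)

def dfs_cost(n):
    return n ** 2 * math.log2(n)   # FFT-based transform, O(N^2 log N)

ratio_low = sph_cost(64) / dfs_cost(64)       # coarse resolution
ratio_high = sph_cost(1024) / dfs_cost(1024)  # fine resolution
```

The cost ratio grows from roughly 10x at N = 64 to roughly 100x at N = 1024 in this toy comparison, mirroring the abstract's point that DFS becomes more competitive as resolution increases.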


2013 ◽  
Vol 22 (02) ◽  
pp. 1250090
Author(s):  
TIBOR SKALA ◽  
MIRSAD TODOROVAC ◽  
KAROLJ SKALA

This paper presents a parallelized, distributed, reliable rendering method, demonstrated on a 3D electric motor model and designed to run on a cluster computer system. We describe a proof-of-concept rendering model based on a parametric POV-Ray scene and a client-server on-demand rendering architecture, with an extensible keep-alive protocol in the style of RFC 822, SMTP, and HTTP. The system uses open-source components and a configurable rendering architecture, available at no cost beyond that of the hardware, with new solutions for simple configuration-file mechanisms and resilient software behaviour based on a best-effort strategy of work distribution. The paper presents an innovative way of creating a distributed, reliable rendering method that uses a large range of computers for parametric modelling. The main aim of this paper is to present a solution that uses existing resources to make the modelling method more efficient and to run complex computing services/jobs under a distributed architecture. The implementation also includes logistics and support for automatic computer clustering and for executing service/job programs. The primary goal is to use existing resources for useful applications in 3D parametric modelling, image programming, and simulation rendering.
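The "best effort" work-distribution strategy behind the resilient behaviour can be sketched as a shared frame queue: render clients pull frames, and a frame whose client fails is simply re-queued so the job still completes on unreliable nodes. The sketch below is a hypothetical illustration of that idea; the client names and failure model are made up, not taken from the paper.

```python
# Best-effort frame distribution sketch: frames are handed out from
# a queue in round-robin order, and any frame whose client "crashes"
# (simulated by the `fails` set) is appended back for a retry.

from collections import deque

def distribute(frames, clients, fails):
    """fails: set of (frame, client) pairs that simulate a crash."""
    queue = deque(frames)
    done, attempts = [], 0
    while queue:
        frame = queue.popleft()
        client = clients[attempts % len(clients)]
        attempts += 1
        if (frame, client) in fails:
            queue.append(frame)      # best effort: try again later
        else:
            done.append(frame)
    return sorted(done)

# 8 frames over two hypothetical nodes, one simulated failure:
finished = distribute(range(8), ["nodeA", "nodeB"], {(3, "nodeB")})
```

Even with the simulated failure of frame 3 on one node, every frame eventually renders, which is the essential property of a best-effort distribution scheme.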

