computational cluster
Recently Published Documents


TOTAL DOCUMENTS

33
(FIVE YEARS 4)

H-INDEX

5
(FIVE YEARS 0)

2021 ◽  
Vol 16 (92) ◽  
pp. 60-71
Author(s):  
Alexander S. Fedulov ◽  
Yaroslav A. Fedulov ◽  
Anastasiya S. Fedulova ◽  
...  

This work is devoted to the problem of implementing an efficient parallel program that solves the assigned task using the maximum available amount of computing cluster resources, in order to obtain the corresponding gain in performance with respect to the sequential version of the algorithm. The main objective of the work was to study the possibilities of joint use of the parallelization technologies OpenMP and MPI, considering the characteristics and features of the problems being solved, to increase the performance of parallel algorithms and programs executed on a computing cluster. The article provides a brief overview of approaches to calculating the complexity functions of sequential programs. To determine the complexity of parallel programs, an approach based on operational analysis was used. The features of the OpenMP and MPI parallelization technologies are described, and the main software and hardware factors affecting the execution speed of parallel programs on the nodes of a computing cluster are presented. The main focus of the paper is the impact on performance of the ratio between computational and exchange (communication) operations in a program. To carry out the research, parallel OpenMP and MPI test programs were developed in which the total number of operations and the ratio between computational and exchange operations can be set. A computing cluster consisting of several nodes was used as the hardware and software platform. The experimental studies confirm the effectiveness of the hybrid model of a parallel program in multi-node systems with heterogeneous memory: OpenMP within shared-memory subsystems and MPI across distributed-memory subsystems.
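
To illustrate the hybrid model the authors evaluate, the following is a minimal MPI+OpenMP sketch in C. The paper's test programs are not reproduced in the abstract, so the workload, names, and structure below are assumptions for illustration only; varying the loop length against the number of messages would mimic the computational-to-exchange ratio the paper studies.

```c
/* Minimal hybrid MPI + OpenMP sketch: each MPI rank (one per cluster node)
 * sums its slice of a global range with an OpenMP parallel loop (shared
 * memory inside the node), then the partial sums are combined with
 * MPI_Reduce (message passing between nodes).
 * Build (typical):  mpicc -fopenmp hybrid.c -o hybrid
 * Run   (typical):  mpirun -np 4 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long long n = 100000000LL;            /* total "computational" work */
    long long chunk = n / size;
    long long begin = rank * chunk;
    long long end   = (rank == size - 1) ? n : begin + chunk;

    double local = 0.0;
    /* OpenMP handles the shared-memory part inside one node */
    #pragma omp parallel for reduction(+:local)
    for (long long i = begin; i < end; ++i)
        local += 1.0 / (double)(i + 1);

    double global = 0.0;
    /* MPI handles the exchange (communication) part between nodes */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.12f (ranks = %d, threads per rank = %d)\n",
               global, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```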


2021 ◽  
Author(s):  
Roberta B. Nowak ◽  
Haleh Alimohamadi ◽  
Kersi Pestonjamasp ◽  
Padmini Rangamani ◽  
Velia M. Fowler

Red blood cell (RBC) shape and deformability are supported by a planar network of short actin filament (F-actin) nodes interconnected by long spectrin molecules at the inner surface of the plasma membrane. Spectrin-F-actin network structure underlies quantitative modelling of forces controlling RBC shape, membrane curvature and deformation, yet the nanoscale organization of F-actin nodes in the network in situ is not understood. Here, we examined F-actin distribution in RBCs using fluorescent-phalloidin labeling of F-actin imaged by multiple microscopy modalities. Total internal reflection fluorescence (TIRF) and Zeiss Airyscan confocal microscopy demonstrate that F-actin is concentrated in multiple brightly stained F-actin foci ∼200-300 nm apart interspersed with dimmer F-actin staining regions. Live cell imaging reveals dynamic lateral movements, appearance and disappearance of F-actin foci. Single molecule STORM imaging and computational cluster analysis of experimental and synthetic data sets indicate that individual filaments are non-randomly distributed, with the majority as multiple filaments, and the remainder sparsely distributed as single filaments. These data indicate that F-actin nodes are non-uniformly distributed in the spectrin-F-actin network and necessitate reconsideration of current models of forces accounting for RBC shape and membrane deformability, predicated upon uniform distribution of F-actin nodes and associated proteins across the micron-scale RBC membrane.


2019 ◽  
Vol 2 (2) ◽  
pp. 42
Author(s):  
Paulo Andre Lima De Castro ◽  
Anderson R.B. Teodoro

Financial operations involve a significant amount of resources and can directly or indirectly affect the lives of virtually all people. To ensure efficiency and transparency in this context, it is essential to identify financial crimes and to punish those responsible. However, the large number of operations makes analysis performed exclusively by humans unfeasible, so the application of automated data analysis techniques is essential. Within this scenario, this work presents a method that identifies anomalies that may be associated with stock exchange operations prohibited by law. Specifically, we seek to find patterns related to insider trading, a type of operation that can generate large losses for investors. Publicly available information from the SEC and CVM, based on real cases on the BOVESPA, NYSE and NASDAQ stock exchanges, is used as the training base. The method includes the creation of several candidate variables and the identification of the relevant ones. With this definition, classifiers based on decision trees and Bayesian networks are constructed and then evaluated and selected. The computational cost of performing such tasks can be quite significant and grows quickly with the amount of analyzed data; for this reason, the method relies on machine learning algorithms distributed over a computational cluster. To perform these tasks, we use the Weka framework with modules that allow the processing load to be distributed across a Hadoop cluster. The use of a computational cluster to execute learning algorithms on large amounts of data has been an active area of research, and this work contributes to data analysis in the specific context of financial operations. The obtained results show the feasibility of the approach, although the quality of the results is limited by the exclusive use of publicly available data.
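
The paper trains its classifiers with Weka's distributed modules on Hadoop; as a hedged, language-agnostic illustration of the classification step only, the toy decision rules below hard-code a small tree over hypothetical insider-trading indicators. The feature names and thresholds are invented for illustration and are not taken from the paper.

```c
/* Toy hand-written decision tree over hypothetical indicator features for a
 * single trade window. It only illustrates the *kind* of rule a learned
 * decision-tree classifier encodes; the actual trees in the paper are learned
 * by Weka from SEC/CVM-derived variables and are not reproduced here. */
#include <stdio.h>

typedef struct {
    double volume_ratio;      /* traded volume vs. trailing average (hypothetical) */
    double pre_event_return;  /* price run-up before an announcement (hypothetical) */
    double days_to_event;     /* days between trade and announcement (hypothetical) */
} TradeWindow;

/* Returns 1 when the window looks anomalous (possible insider trading), 0 otherwise. */
int classify(const TradeWindow *w)
{
    if (w->days_to_event <= 5.0) {                /* trade close to the announcement */
        if (w->volume_ratio > 3.0) return 1;      /* unusually high volume */
        if (w->pre_event_return > 0.08) return 1; /* unusually large run-up */
    }
    return 0;
}

int main(void)
{
    TradeWindow samples[] = {
        {4.2, 0.12, 2.0},   /* high volume and run-up right before an event */
        {1.1, 0.01, 40.0},  /* ordinary activity far from any event */
    };
    for (int i = 0; i < 2; ++i)
        printf("sample %d -> %s\n", i, classify(&samples[i]) ? "anomalous" : "normal");
    return 0;
}
```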


2018 ◽  
Author(s):  
Marissa Buzzanca ◽  
Brandon Brummeyer ◽  
Jonathan Gutow

Vertical ionization potentials (IPs) computed using the IP-EOMCCSD method are reported for 53 medium-sized molecules (6–32 atoms) and compared with average experimental vertical IPs. The calculations are practical on a modest computational cluster and yield good agreement with experimental values using the aug-cc-pVDZ basis set, with an average deviation from the experimental IP of −0.04 eV. The accuracy of IP computations appears to be approaching the point where possible systematic experimental errors can be identified. Although good extrapolations to the complete basis set limit for the IP are achievable using just the aug-cc-pVDZ and aug-cc-pVTZ basis sets, deviations of the extrapolation from experimental values suggest that inclusion of higher-order "triples" may make the computational method more broadly applicable. Examination of experimental spectra for ethylene, E-2-butene, 2,5-dihydrofuran and pyrrole reinforces the observations of Davidson and Jarzęcki [1] that experimental vertical IPs are usually extracted from experimental data in a manner that does not account for band asymmetries, making direct comparison to computations difficult. Despite the good agreement with experiment when using the aug-cc-pVDZ basis set, for the molecules investigated most of these reported experimental IPs are below the actual value, likely by no more than 0.4 eV. This set of 53 molecules is recommended as a benchmark comparison set for computational and experimental IP results.
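
For reference, a commonly used two-point inverse-cubic extrapolation to the complete basis set (CBS) limit from aug-cc-pVDZ (cardinal number X = 2) and aug-cc-pVTZ (Y = 3) results takes the form below. The abstract does not specify which extrapolation scheme was used, so treating the computed IPs with this standard formula is an assumption, not a statement of the paper's method.

```latex
% Two-point inverse-cubic CBS extrapolation (assumed scheme):
% each basis-set value is modeled as E_X = E_{CBS} + A X^{-3},
% so two cardinal numbers X < Y determine E_{CBS}.
\[
  E_{\mathrm{CBS}} \approx \frac{Y^{3} E_{Y} - X^{3} E_{X}}{Y^{3} - X^{3}},
  \qquad X = 2\ (\text{aug-cc-pVDZ}), \quad Y = 3\ (\text{aug-cc-pVTZ}).
\]
```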


2018 ◽  
Vol 210 ◽  
pp. 04029
Author(s):  
Vivian Orejuela ◽  
Álvaro Sánchez Ramirez ◽  
Andrés Felipe Toro ◽  
Andrés Felipe Gonzalez ◽  
Diego Briñez

Increasing processing power at low cost and obtaining results in less time is important for universities, which is why computational clusters have taken on paramount importance. This paper presents a study of the execution time of a cluster physically configured with one master node and three worker nodes equipped with Core 2 Duo/Quad processors, running the Scientific Linux operating system and the HTCondor cluster management software. A program written in the C language finds the prime numbers in the range from 1 to 15 million; the range is divided into groups of 5 million so that the work can be launched from the submit node to the worker nodes, and the execution time is measured. The result is the difference between the runtime on a single PC and on the high-performance cluster.
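
A minimal sketch of the per-slice computation follows, assuming simple trial-division counting of primes; the benchmark code used by the authors is not given in the abstract, so the structure below (command-line range arguments, one process per 5-million-number slice dispatched as a separate HTCondor job) is an assumption.

```c
/* Counts primes in [lo, hi]; one instance of this program would be submitted
 * as an HTCondor job per 5-million-number slice (e.g. 1..5e6, 5e6+1..10e6,
 * 10e6+1..15e6), and the wall-clock times compared against a single-PC run. */
#include <stdio.h>
#include <stdlib.h>

static int is_prime(long n)
{
    if (n < 2) return 0;
    if (n % 2 == 0) return n == 2;
    for (long d = 3; d * d <= n; d += 2)
        if (n % d == 0) return 0;
    return 1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <lo> <hi>\n", argv[0]);
        return 1;
    }
    long lo = atol(argv[1]), hi = atol(argv[2]), count = 0;
    for (long n = lo; n <= hi; ++n)
        count += is_prime(n);
    printf("primes in [%ld, %ld]: %ld\n", lo, hi, count);
    return 0;
}
```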


Author(s):  
Mikhail E. Kryuchkov ◽  
Aleksei V. Orlov ◽  
Grigoriy A. Mazurenko ◽  
Aleksandr B. Vavrenyuk ◽  
Yuriy V. Timofeev

Author(s):  
Vladislav Vladislavovich Lukashenko ◽  
Vitaly Aleksandrovich Romanchuk

Given the problem of insufficient computational resources for a number of tasks, the implementation of a computational cluster of neurocomputers is considered as one option. To implement the basic principle of distributed computing, an algorithm is presented for splitting the tasks entering the computational cluster of neurocomputers into sub-tasks. For this purpose, it is proposed to represent the program submitted to the cluster in a modified postfix Polish notation and to store it in a program command stack. The modification extends the Polish notation with various non-arithmetic operators and constructs. The next step is to obtain the program's abstract syntax tree by following the rules for translating the modified postfix Polish notation from the command stack into an abstract syntax tree. Then, the data are passed to the program's abstract syntax tree taking their bit depth into account, and the adjacency matrix of the program's control flow graph is obtained, which represents the set of all possible execution paths. The authors conclude that all operations recorded in the modified reverse Polish notation and represented in the abstract syntax tree, when data of a given bit depth are supplied to them, become indivisible single-cycle operations at the moment of transition to the control flow graph and can be represented as subprograms of the source program submitted for processing to the computational cluster of neurocomputers.
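
As a minimal illustration of the first translation step only (the paper's modified notation with non-arithmetic operators and its control flow graph construction are not reproduced), the sketch below builds an abstract syntax tree from plain postfix tokens using a node stack; all names and the token set are illustrative.

```c
/* Builds an AST from plain postfix (reverse Polish) tokens with a node stack:
 * operands are pushed as leaves; a binary operator pops two subtrees and
 * pushes a new internal node. Example: "3 4 + 5 *" -> (* (+ 3 4) 5). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Node {
    char token[16];
    struct Node *left, *right;
} Node;

static Node *leaf(const char *tok)
{
    Node *n = calloc(1, sizeof(Node));   /* left/right start as NULL */
    strncpy(n->token, tok, sizeof(n->token) - 1);
    return n;
}

static int is_operator(const char *tok)
{
    return strlen(tok) == 1 && strchr("+-*/", tok[0]) != NULL;
}

static void print_tree(const Node *n)    /* prefix (Lisp-like) form */
{
    if (!n) return;
    if (n->left) printf("(");
    printf("%s", n->token);
    if (n->left) {
        printf(" "); print_tree(n->left);
        printf(" "); print_tree(n->right);
        printf(")");
    }
}

int main(void)
{
    const char *postfix[] = { "3", "4", "+", "5", "*" };
    Node *stack[64];
    int top = 0;

    for (size_t i = 0; i < sizeof(postfix) / sizeof(postfix[0]); ++i) {
        Node *n = leaf(postfix[i]);
        if (is_operator(postfix[i])) {   /* operator: pop its two operands */
            n->right = stack[--top];
            n->left  = stack[--top];
        }
        stack[top++] = n;                /* push operand leaf or subtree */
    }
    print_tree(stack[0]);                /* prints (* (+ 3 4) 5) */
    printf("\n");
    return 0;
}
```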

