Performance Evaluation of Computation Intensive Tasks in Grid

Author(s):  
P. Raghu ◽  
K. Sriram

Grid computing is a special type of parallel computing that allows us to unite pools of servers, storage systems, and networks into a single large virtual supercomputer. Grid computing has the advantage of solving complex problems in a shorter time while making better use of existing hardware. It can exploit underutilized resources to meet business requirements while minimizing additional costs. Many grid setup tools are available. In this paper, the Globus Toolkit, an open-source tool for grid-enabled applications, is considered. Initially, a grid is established between two systems running Linux, using the Globus Toolkit. A simple matrix multiplication program, capable of running both on the grid and on stand-alone systems, is developed. The application is first executed on a single system while varying the order of the matrices. The same application is then split into two sub-jobs and run on two grid machines with different matrix orders. Finally, the results of the executions are compared and presented in graphs. The work can be extended to determine the type of parallelization suitable for the developed application. Similarly, the FP-tree algorithm is taken, and its data sets are fed into different grid machines and into a stand-alone system. A suitable load balancing mechanism for grid applications is discussed. The sections of the paper are arranged as follows: introduction to grid computing, grid setup using the Globus Toolkit, splitting of the matrix application, the FP-tree algorithm, performance results, future work, conclusion, and references.
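The row-wise splitting of the matrix multiplication into sub-jobs can be sketched in Python (a minimal illustration of the idea; the function name and two-worker setup are assumptions, and the paper's actual Globus job submission scripts are not reproduced here):

```python
import numpy as np

def split_matmul(A, B, n_workers=2):
    """Split C = A @ B into independent sub-jobs by rows of A,
    mimicking how the grid application divides work between two machines."""
    # Each sub-job multiplies one horizontal slice of A by the full B.
    row_chunks = np.array_split(A, n_workers, axis=0)
    # On a real grid each chunk would be shipped to a separate node;
    # here the sub-jobs run sequentially and the partial results are stacked.
    partial_results = [chunk @ B for chunk in row_chunks]
    return np.vstack(partial_results)

A = np.random.rand(6, 4)
B = np.random.rand(4, 5)
assert np.allclose(split_matmul(A, B), A @ B)
```

Because the row slices are independent, no communication is needed between the sub-jobs until the final gather, which is what makes this decomposition attractive for a loosely coupled grid.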

2021 ◽  
Vol 47 (2) ◽  
pp. 1-26
Author(s):  
Field G. Van Zee ◽  
Devangi N. Parikh ◽  
Robert A. Van De Geijn

We approach the problem of implementing mixed-datatype support within the general matrix multiplication (gemm) operation of the BLAS-like Library Instantiation Software framework, whereby each matrix operand A, B, and C may be stored as single- or double-precision real or complex values. Another factor of complexity, whereby the matrix product and accumulation are allowed to take place in a precision different from the storage precisions of either A or B, is also discussed. We first break the problem into orthogonal dimensions, considering the mixing of domains separately from mixing precisions. Support for all combinations of matrix operands stored in either the real or complex domain is mapped out by enumerating the cases and describing an implementation approach for each. Supporting all combinations of storage and computation precisions is handled by typecasting the matrices at key stages of the computation—during packing and/or accumulation, as needed. Several optional optimizations are also documented. Performance results gathered on a 56-core Marvell ThunderX2 and a 52-core Intel Xeon Platinum demonstrate that high performance is mostly preserved, with modest slowdowns incurred from unavoidable typecast instructions. The mixed-datatype implementation confirms that combinatorial intractability is avoided, with the framework relying on only two assembly microkernels to implement 128 datatype combinations.
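The typecast-during-packing idea can be illustrated at a high level in Python (a schematic sketch only, not the BLIS microkernel path; the function name and parameters are assumptions):

```python
import numpy as np

def mixed_gemm(A, B, compute_dtype=np.float64):
    """Sketch of a mixed-precision gemm: operands may be stored in
    different precisions; each is typecast once while being "packed"
    into the computation precision, and the product/accumulation then
    happens entirely in that precision."""
    A_packed = A.astype(compute_dtype)   # typecast during packing
    B_packed = B.astype(compute_dtype)
    return A_packed @ B_packed           # accumulation in compute precision

# A stored in single precision, B in double; computation in double.
A = np.arange(6, dtype=np.float32).reshape(2, 3)
B = np.arange(12, dtype=np.float64).reshape(3, 4)
C = mixed_gemm(A, B)
assert C.dtype == np.float64
```

Casting once at packing time, rather than per-element inside the inner loop, is what keeps the per-datatype cost down to a small fixed overhead.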


2017 ◽  
Vol 2 (1) ◽  
pp. 7
Author(s):  
Izzatul Ummah

In this research, we build a grid computing infrastructure by utilizing an existing cluster at Telkom University as the back-end resource. We used the Globus Toolkit 6.0 and Condor 8.4.2 middleware to develop the grid system. We tested the performance of our grid system using parallel matrix multiplication. The results showed that our grid system achieved good performance. With the implementation of this grid system, we believe that access to high-performance computing resources will become easier and the quality of service will also improve.


1996 ◽  
Vol 5 (4) ◽  
pp. 301-317 ◽  
Author(s):  
Rajeev Thakur ◽  
Alok Choudhary

A number of applications on parallel computers deal with very large data sets that cannot fit in main memory. In such applications, data must be stored in files on disks and fetched into memory during program execution. Parallel programs with large out-of-core arrays stored in files must read/write smaller sections of the arrays from/to files. In this article, we describe a method for accessing sections of out-of-core arrays efficiently. Our method, the extended two-phase method, uses collective I/O: Processors cooperate to combine several I/O requests into fewer larger granularity requests, to reorder requests so that the file is accessed in proper sequence, and to eliminate simultaneous I/O requests for the same data. In addition, the I/O workload is divided among processors dynamically, depending on the access requests. We present performance results obtained from two real out-of-core parallel applications – matrix multiplication and a Laplace's equation solver – and several synthetic access patterns, all on the Intel Touchstone Delta. These results indicate that the extended two-phase method significantly outperformed a direct (noncollective) method for accessing out-of-core array sections.
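The request-combining step of collective I/O can be sketched as follows (a minimal illustration under the assumption that requests are byte ranges; the actual extended two-phase method also partitions the merged ranges among processors, which is omitted here):

```python
def coalesce_requests(requests):
    """Sketch of collective I/O request combining: sort byte-range
    requests by offset (reordering), drop exact duplicates (eliminating
    simultaneous requests for the same data), and merge overlapping or
    adjacent ranges into fewer, larger-granularity requests."""
    merged = []
    for start, end in sorted(set(requests)):   # reorder + deduplicate
        if merged and start <= merged[-1][1]:  # overlaps/touches previous
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Byte-range requests gathered from several processors:
reqs = [(100, 200), (0, 50), (150, 300), (0, 50), (40, 60)]
assert coalesce_requests(reqs) == [(0, 60), (100, 300)]
```

Fewer, larger, in-order requests are exactly what disk subsystems serve best, which is where the collective method gains over issuing each processor's small requests directly.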


Minerals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Valérie Laperche ◽  
Bruno Lemière

Portable X-ray fluorescence spectroscopy is now widely used in almost any field of geoscience. Handheld XRF analysers are easy to use, and results are available in almost real time anywhere. However, the results do not always match laboratory analyses, and this may deter users. Rather than analytical issues, the bias often results from sample preparation differences. Instrument setup and analysis conditions need to be fully understood to avoid reporting erroneous results. The technique’s limitations must be kept in mind. We describe a number of issues and potential pitfalls observed from our experience and described in the literature. This includes the analytical mode and parameters; protective films; sample geometry and density, especially for light elements; analytical interferences between elements; physical effects of the matrix and sample condition, and more. Nevertheless, portable X-ray fluorescence spectroscopy (pXRF) results gathered with sufficient care by experienced users are both precise and reliable, if not fully accurate, and they can constitute robust data sets. Rather than being a substitute for laboratory analyses, pXRF measurements are a valuable complement to those. pXRF improves the quality and relevance of laboratory data sets.


2017 ◽  
Vol 26 (1) ◽  
pp. 169-184 ◽  
Author(s):  
Absalom E. Ezugwu ◽  
Nneoma A. Okoroafor ◽  
Seyed M. Buhari ◽  
Marc E. Frincu ◽  
Sahalu B. Junaidu

The operational efficacy of a grid computing system depends mainly on the proper management of grid resources to carry out the various jobs that users send to the grid. The paper explores an alternative way of efficiently searching, matching, and allocating distributed grid resources to jobs such that the resource demand of each grid user job is met. A resource selection method based on the concept of a genetic algorithm (GA) with populations based on multisets is proposed. Furthermore, the paper presents a hybrid GA-based scheduling framework that efficiently searches for the best available resources for user jobs in a typical grid computing environment. For the proposed resource allocation method, additional mechanisms (populations based on multisets and adaptive matching) are introduced into the GA components to enhance their search capability in a large problem space. An empirical study is presented in order to demonstrate the importance of operator improvement over the traditional GA. The preliminary performance results show that the proposed additional operator fine-tuning is efficient in both speed and accuracy and can keep up with high job arrival rates.
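A baseline GA for matching jobs to resources can be sketched as below (a toy illustration of the general GA component only: the fitness function, operators, and parameters are assumptions, and the paper's multiset populations and adaptive matching are not reproduced):

```python
import random

def fitness(assignment, job_demands, resource_caps):
    """Hypothetical fitness: number of jobs whose assigned resource
    can meet their demand (not the paper's exact objective)."""
    return sum(1 for job, res in enumerate(assignment)
               if resource_caps[res] >= job_demands[job])

def ga_select(job_demands, resource_caps, pop_size=20, gens=50, seed=0):
    """Minimal GA over job-to-resource assignments: truncation
    selection, single-point crossover, and point mutation."""
    rng = random.Random(seed)
    n_jobs, n_res = len(job_demands), len(resource_caps)
    pop = [[rng.randrange(n_res) for _ in range(n_jobs)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: -fitness(a, job_demands, resource_caps))
        parents = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n_jobs)     # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:             # point mutation
                child[rng.randrange(n_jobs)] = rng.randrange(n_res)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda a: fitness(a, job_demands, resource_caps))

best = ga_select(job_demands=[4, 2, 8], resource_caps=[8, 4, 2])
```

The paper's contribution sits on top of such a skeleton: multiset-based populations and adaptive matching replace the naive population and selection steps to widen the search in large resource spaces.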


2012 ◽  
Vol 79 ◽  
pp. 41-46 ◽  
Author(s):  
Fabia Galantini ◽  
Sabrina Bianchi ◽  
Valter Castelvetro ◽  
Irene Anguillesi ◽  
Giuseppe Gallone

Among the broad class of electro-active polymers, dielectric elastomer actuators represent a rapidly growing technology for electromechanical transduction. To develop this applied science further, the high driving voltages currently needed must be reduced. For this purpose, one of the most promising and widely adopted approaches is to increase the dielectric constant while maintaining both low dielectric losses and high mechanical compliance. In this work, a dielectric elastomer was prepared by dispersing functionalised carbon nanotubes into a polyurethane matrix, and the effects of filler dispersion were studied in terms of dielectric, mechanical, and electro-mechanical performance. A notable increase in the dielectric constant was observed throughout the collected spectrum, while the loss factor remained almost unchanged with respect to the neat matrix, indicating that conductive percolation paths did not arise in the system. The consequences of the chemical functionalisation of the carbon nanotubes, compared with the use of unmodified filler, were also studied and discussed, along with the resulting benefits and drawbacks for the composite material as a whole.


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e3368 ◽  
Author(s):  
Joseph E. Peterson ◽  
Jonathan P. Warnock ◽  
Shawn L. Eberhart ◽  
Steven R. Clawson ◽  
Christopher R. Noto

The Cleveland-Lloyd Dinosaur Quarry (CLDQ) is the densest deposit of Jurassic theropod dinosaurs discovered to date. Unlike typical Jurassic bone deposits, it is dominated by the presence of Allosaurus fragilis. Since excavation began in the 1920s, numerous hypotheses have been put forward to explain the taphonomy of CLDQ, including a predator trap, a drought assemblage, and a poison spring. In an effort to reconcile the various interpretations of the quarry and reach a consensus on its depositional history, new data are required to develop a robust taphonomic framework congruent with all available evidence. Here we present two new data sets that aid in the development of such a framework for CLDQ. First, X-ray fluorescence of CLDQ sediments indicates elevated barite and sulfide minerals relative to other sediments from the Morrison Formation in the region, suggesting an ephemeral environment dominated by periods of hypereutrophic conditions during bone accumulation. Second, the degree of abrasion and hydraulic equivalency of small bone fragments dispersed throughout the matrix were analyzed from CLDQ. Results of these analyses suggest that the bone fragments are autochthonous or parautochthonous and are derived from bones deposited in the assemblage rather than transported. The variability in abrasion exhibited by the fragments is most parsimoniously explained by local periodic reworking and redeposition during seasonal fluctuations throughout the duration of the quarry assemblage. Collectively, these data support previous interpretations that the CLDQ represents an attritional assemblage in a poorly drained overbank deposit where vertebrate remains were introduced post-mortem to an ephemeral pond during flood conditions.
Furthermore, while the elevated heavy metals detected at the Cleveland-Lloyd Dinosaur Quarry are not likely the primary driver for the accumulation of carcasses, they are likely the result of multiple sources; some metals may be derived from post-depositional and diagenetic processes, and others are potentially produced from an abundance of decomposing vertebrate carcasses. These new data help to support the inferred depositional environment of the quarry as an ephemeral pond, and represent a significant step in understanding the taphonomy of the bonebed and Late Jurassic paleoecology in this region.


Telematika ◽  
2020 ◽  
Vol 17 (1) ◽  
pp. 26
Author(s):  
Afif Irfan Abdurrahman ◽  
Bambang Yuwono ◽  
Yuli Fauziah

A flood is a dangerous disaster in which overflowing water submerges land. Almost every year, Bantul Regency is affected by floods due to high rainfall, which the Bantul Regency Disaster Management Agency (BPBD) finds difficult to handle; a mapping of the level of flood impact is therefore needed to minimize flood damage and provide information to the public. This study creates a system to map the level of flood impact in Bantul Regency using a decision support method, namely Multi-Attribute Utility Theory (MAUT). The MAUT method determines the level of flood impact through normalization and matrix multiplication. The method helps determine the areas affected by floods by managing Indonesian Disaster Information Data (DIBI). The managed data comprise criteria for deaths, missing victims, damage to houses, damage to public facilities, and damage to roads. Each criterion has a value that can be used to determine the level of impact of a flood. Determining the level of impact requires a weighting calculation, whose result is a score of 1 = low, 2 = moderate, or 3 = high. The normalization and matrix multiplication used to determine the affected areas constitute the application of the MAUT method. The study produced a mapping of impact levels displayed on Google Maps, showing the affected points and the level of flood impact in Bantul Regency. The mapping produced from the 2017 DIBI data identified Imogiri sub-district as the most affected area.
Testing showed that the results of this study have an accuracy of 95% when compared with the mapping previously carried out by BPBD Bantul Regency. The difference in accuracy arises because the criteria data used in this study are not the same as those used by BPBD Bantul Regency.
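The normalization and weighted aggregation at the core of MAUT can be sketched as follows (an illustrative toy example; the criterion values and weights are assumptions, not BPBD data):

```python
import numpy as np

def maut_scores(X, weights):
    """Min-max normalize each criterion column to [0, 1], then combine
    the utilities with a weight vector via matrix multiplication
    (the MAUT aggregation step)."""
    X = np.asarray(X, dtype=float)
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    # Guard against division by zero for constant columns.
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    U = (X - col_min) / span           # per-criterion utilities
    return U @ np.asarray(weights)     # one aggregate score per area

# Rows: areas; columns: deaths, missing victims, damaged houses (toy data).
X = [[3, 0, 120],
     [0, 0,  10],
     [7, 2, 300]]
scores = maut_scores(X, weights=[0.4, 0.3, 0.3])
assert scores.argmax() == 2   # the third area ranks as most affected
```

The aggregate scores can then be binned into the 1 = low, 2 = moderate, 3 = high classes the study uses for display on the map.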


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Jin Wang

The MM-2 semitensor product is a new and very useful mathematical tool that breaks the limitation of traditional matrix multiplication on the dimensions of matrices and has wide application prospects. This article investigates the solutions of the matrix equation A ∘_l X = B with respect to the MM-2 semitensor product. The case where the solutions of the equation are vectors is discussed first. Compatibility conditions on the matrices and the necessary and sufficient condition for solvability are studied in turn. Furthermore, concrete methods of solving the equation are provided. Then, the case where the solutions are matrices is studied in a similar way. Finally, several examples are given to illustrate the efficiency of the results.


Author(s):  
Yevgeniy Bodyanskiy ◽  
Valentyna Volkova ◽  
Mark Skuratov

Matrix Neuro-Fuzzy Self-Organizing Clustering Network

In this article, the problem of clustering massive data sets represented in matrix form is considered. The article presents a 2-D self-organizing Kohonen map and its self-learning algorithms based on the winner-take-all (WTA) and winner-take-more (WTM) rules, with Gaussian and Epanechnikov functions as the fuzzy membership functions, and without a winner. Fuzzy inference for processing data with overlapping classes in a neural network is introduced, allowing one to estimate the membership level of every sample in every class. This network is a generalization of the vector neuro- and neuro-fuzzy Kohonen networks and allows data to be processed in online mode as they arrive.
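A single winner-take-more update can be sketched for a 1-D Kohonen layer (a minimal illustration with a Gaussian neighborhood; the learning rate, neighborhood width, and 1-D topology are assumptions, and the article's matrix-valued 2-D map is not reproduced):

```python
import numpy as np

def som_wtm_step(weights, x, eta=0.5, sigma=1.0):
    """One winner-take-more update of a 1-D Kohonen layer: every neuron
    moves toward the input x, scaled by a Gaussian neighborhood function
    centered on the winning neuron."""
    dists = np.linalg.norm(weights - x, axis=1)
    winner = dists.argmin()
    idx = np.arange(len(weights))
    # Gaussian membership: 1 at the winner, decaying with grid distance.
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
    return weights + eta * h[:, None] * (x - weights)

w = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
w_new = som_wtm_step(w, np.array([1.0, 1.0]))
# Neuron 1 already matches x exactly and stays put; its neighbors move closer.
assert np.allclose(w_new[1], [1.0, 1.0])
```

Under WTA only the winner would move (h would be an indicator vector); the smooth neighborhood is what lets WTM produce the graded, fuzzy membership levels described above.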

