Reliability Based Scheduling Model (RSM) for Computational Grids

2011 ◽  
Vol 2 (2) ◽  
pp. 20-37 ◽  
Author(s):  
Zahid Raza ◽  
Deo P. Vidyarthi

The computational grid, characterized by distributed load sharing, has evolved into a platform for large-scale problem solving. A grid is a collection of heterogeneous resources offering services of varying natures, in which jobs can be submitted to any of the participating nodes. Scheduling these jobs in such a complex and dynamic environment poses many challenges. Reliability analysis of the grid gains paramount importance because the grid involves a large number of resources that may fail at any time, making it unreliable. These failures waste both computational power and money on the scarce grid resources. It is normally desired that a job be scheduled in an environment that ensures maximum reliability for its execution. This work presents a reliability-based scheduling model for jobs on the computational grid. The model considers the failure rates of both the software and hardware grid constituents, such as the application demanding execution, the nodes executing the job, and the network links supporting data exchange between the nodes. Job allocation using the proposed scheme becomes trustworthy, as it schedules the job based on an a priori reliability computation.
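
To make the a priori reliability computation concrete, the sketch below assumes exponentially distributed failures: the reliability of an allocation is taken as the product of the survival probabilities exp(-λt) of the executing node, the application, and the links used for data exchange, and the scheduler picks the node with the highest value. The failure rates and timings are illustrative assumptions, not figures from the paper.

```python
import math

def reliability(node_lambda, app_lambda, link_lambdas, exec_time, comm_times):
    """A priori reliability of one allocation: the product of the survival
    probabilities exp(-lambda * t) of the node, the application and the links."""
    r = math.exp(-(node_lambda + app_lambda) * exec_time)
    for lam, t in zip(link_lambdas, comm_times):
        r *= math.exp(-lam * t)
    return r

# Illustrative candidates: node failure rate, expected execution time on that node,
# failure rates of the links it would use, and the corresponding transfer times.
candidates = {
    "node_A": {"lam": 1e-4, "exec": 120.0, "links": [5e-5], "comm": [10.0]},
    "node_B": {"lam": 5e-5, "exec": 200.0, "links": [1e-4, 2e-5], "comm": [8.0, 4.0]},
}
APP_LAMBDA = 2e-5  # failure rate of the submitted application (assumed)

best = max(candidates, key=lambda n: reliability(candidates[n]["lam"], APP_LAMBDA,
                                                 candidates[n]["links"],
                                                 candidates[n]["exec"],
                                                 candidates[n]["comm"]))
print("schedule the job on:", best)
```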


Author(s):  
Rekha Kashyap ◽  
Deo P. Vidyarthi

The grid supports heterogeneity of resources in terms of security and computational power. Applications with stringent security requirements introduce challenging concerns when executed on grid resources. Though the grid scheduler considers computational heterogeneity while making scheduling decisions, little is done to address security heterogeneity. This work proposes a security-aware computational grid scheduling model that schedules tasks taking both kinds of heterogeneity into account. The approach is known as Security Prioritized MinMin (SPMinMin). A comparison with MinMin (secured), one of the widely used grid scheduling algorithms, shows that SPMinMin performs better, and at times comparably, in terms of makespan and system utilization under all the situations considered.
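
The following sketch illustrates the prioritization idea behind SPMinMin under assumed task and node figures: tasks are grouped by security demand, higher-demand groups are scheduled first, and the usual MinMin rule (pick the task-node pair with the smallest minimum completion time) is applied within each group using only nodes whose security level satisfies the demand. The exact rule in the paper may differ.

```python
def minmin(group, tasks, nodes, ready):
    """Standard MinMin within one group: repeatedly pick the task-node
    pair with the smallest minimum completion time."""
    schedule, pending = {}, list(group)
    while pending:
        best = None  # (completion_time, task, node)
        for t in pending:
            work, req = tasks[t]
            # assumes every task has at least one node meeting its security demand
            feasible = [n for n in nodes if nodes[n][1] >= req]
            n = min(feasible, key=lambda m: ready[m] + work / nodes[m][0])
            ct = ready[n] + work / nodes[n][0]
            if best is None or ct < best[0]:
                best = (ct, t, n)
        ct, t, n = best
        schedule[t], ready[n] = n, ct
        pending.remove(t)
    return schedule

def sp_minmin(tasks, nodes):
    """Security Prioritized MinMin (sketch): schedule the groups with the
    highest security demand first, only on nodes that satisfy the demand."""
    ready = {n: 0.0 for n in nodes}
    schedule = {}
    for level in sorted({req for _, req in tasks.values()}, reverse=True):
        group = [t for t in tasks if tasks[t][1] == level]
        schedule.update(minmin(group, tasks, nodes, ready))
    return schedule

tasks = {"t1": (100, 3), "t2": (40, 1), "t3": (70, 2)}  # (workload, security demand)
nodes = {"n1": (10.0, 3), "n2": (20.0, 1)}              # (speed, security level)
print(sp_minmin(tasks, nodes))
```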


2020 ◽  
Vol 3 (1) ◽  
pp. 61-67
Author(s):  
Tatyana N. Yesikova ◽  
Svetlana V. Vakhrusheva

The paper considers how the influence of the information environment and of information flows on agent behavior can be accounted for and reflected in multi-agent systems, and how the consequences, including environmental ones, of the decisions agents make at various stages of large-scale infrastructure projects can be assessed. The information space is a priori a multidimensional, dynamic environment that is continuously updated and transformed, sometimes under the primacy of the interests of individual agents or influence groups, and much less frequently from the standpoint of ensuring the viability of the economic system as a whole. A large-scale project for the construction of a transcontinental highway (TKS) through the Bering Strait was chosen as the object of study. The article provides a fairly detailed description of the groups of agents involved in the decision-making process, as well as of the elements of the information space that are significant for an agent at particular stages of its activity. To model the influence of the information space on the decision-making processes of agents at different hierarchy levels (business entities, managerial entities, etc.), algorithms and special procedures have been developed.


Author(s):  
Uei-Ren Chen ◽  
Yun-Ching Tang

To build a large-scale computational grid resource model with realistic characteristics, this paper proposes a statistical remedy that approximates the distributions of the computational and communicational abilities of resources. After fetching real-world source data on resources, including computing devices and networks, gamma and normal distributions are employed to approximate the probability distributions of the resources' abilities. With this method, researchers can supply computational grid simulators with the required parameters relating to computational and communicational abilities. The experimental results show that the proposed methodology leads to better precision, according to the error measured between the simulated data and its source. The proposed method can thus supply modern grid simulators with a high-precision resource model that captures the distributional characteristics of computational and communicational abilities in the grid computing environment.
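
A brief sketch of the approximation step, assuming SciPy is available: computational ability (e.g. node processing rates) is fitted with a gamma distribution and communicational ability (e.g. link bandwidths) with a normal distribution, and the fitted parameters can then drive a grid simulator. The sample data below is randomly generated, not the real-world measurements used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-ins for measured resource data (the paper fetches these from real hosts/networks).
cpu_rates = rng.gamma(shape=2.0, scale=1500.0, size=1000)   # e.g. MFLOPS per node
bandwidths = rng.normal(loc=100.0, scale=15.0, size=1000)   # e.g. Mbit/s per link

# Fit a gamma distribution to computational ability ...
shape, loc, scale = stats.gamma.fit(cpu_rates, floc=0.0)
# ... and a normal distribution to communicational ability.
mu, sigma = stats.norm.fit(bandwidths)

print(f"gamma(shape={shape:.2f}, scale={scale:.1f})  normal(mu={mu:.1f}, sigma={sigma:.1f})")

# A simulator can then draw as many synthetic resources as needed:
synthetic_nodes = stats.gamma.rvs(shape, loc=loc, scale=scale, size=500, random_state=rng)
synthetic_links = stats.norm.rvs(mu, sigma, size=500, random_state=rng)
```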


2010 ◽  
Vol 1 (2) ◽  
pp. 74-94 ◽  
Author(s):  
Zahid Raza ◽  
Deo Prakash Vidyarthi

Scheduling a job on the grid is an NP-hard problem, and hence a number of models optimizing one or another characteristic parameter have been proposed in the literature. A computational grid is expected to complete a job quickly and in the most reliable environment, owing to the number of participants in the grid and the scarcity of available resources. The genetic algorithm is an effective tool for problems that admit sub-optimal solutions and is widely used in multi-objective optimization. This paper addresses such a multi-objective optimization problem by introducing a scheduling model for a modular job on a computational grid with a dual objective: minimizing the turnaround time and maximizing the reliability of job execution, using NSGA-II, a GA variant. The cost of execution on a node is measured on the basis of the node characteristics, the job attributes, and the network properties. A simulation study and a comparison of the results with other similar models reveal the effectiveness of the model.
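
To make the dual objective concrete, the sketch below evaluates a candidate allocation of job modules to nodes by turnaround time and by execution reliability (negated so both objectives are minimized), and keeps the non-dominated candidates, i.e. the first Pareto front that NSGA-II would evolve further. The chromosome encoding, node figures, and uniform link model are assumptions for illustration, not the paper's cost model.

```python
import math, random

# Illustrative grid: per-node (speed, failure rate); job modules with given workloads.
NODES = {"n1": (10.0, 1e-4), "n2": (25.0, 5e-4), "n3": (15.0, 2e-4)}
MODULES = [120.0, 80.0, 200.0]       # workload of each job module
DATA = 50.0                          # data exchanged between consecutive modules
BANDWIDTH, LINK_LAMBDA = 10.0, 1e-5  # uniform link model (assumed)

def objectives(assign):
    """assign: list mapping module index -> node name.
    Returns (turnaround_time, -reliability); both are minimized."""
    finish = {n: 0.0 for n in NODES}
    rel = 1.0
    for i, node in enumerate(assign):
        speed, lam = NODES[node]
        t = MODULES[i] / speed
        if i and assign[i - 1] != node:            # inter-node data transfer
            t += DATA / BANDWIDTH
            rel *= math.exp(-LINK_LAMBDA * DATA / BANDWIDTH)
        finish[node] += t
        rel *= math.exp(-lam * t)
    return max(finish.values()), -rel

def non_dominated(pop):
    """Keep candidates not dominated in both objectives (the first Pareto front)."""
    front = []
    for a in pop:
        fa = objectives(a)
        dominated = any(all(x <= y for x, y in zip(objectives(b), fa))
                        and objectives(b) != fa for b in pop)
        if not dominated:
            front.append((a, fa))
    return front

random.seed(1)
population = [[random.choice(list(NODES)) for _ in MODULES] for _ in range(30)]
for sol, (tat, negrel) in non_dominated(population):
    print(sol, f"turnaround={tat:.1f}", f"reliability={-negrel:.6f}")
```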


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
Kwang-il Hwang ◽  
Sung-wook Nam

To construct a successful Internet of Things (IoT), reliable network construction and maintenance in the sensor domain must be supported. However, IEEE 802.15.4, the most representative wireless standard for the IoT, still has problems in constructing large-scale sensor networks, such as beacon collisions. To overcome some of these problems, the IEEE 802.15.4e task group proposed various modes of operation. In particular, the IEEE 802.15.4e deterministic and synchronous multichannel extension (DSME) mode presents a novel scheduling model to solve the beacon collision problem. However, the DSME model specified in the 15.4e draft is a conceptual abstract model rather than a concrete design model. In this paper we therefore introduce a DSME beacon scheduling model and present a concrete design for it. Furthermore, the validity and performance of DSME are evaluated through experiments. Based on the experimental results, we analyze the problems and limitations of DSME, present solutions step by step, and finally propose an enhanced DSME beacon scheduling model. Through additional experiments, we demonstrate the performance superiority of the enhanced DSME.
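
The core of DSME beacon scheduling is that a joining coordinator must pick a superframe-duration (SD) slot that none of its neighbors already uses, based on the beacon bitmaps they advertise. The sketch below shows only that slot-selection step under an assumed 16-slot bitmap; the allocation-notification and collision-handling commands of IEEE 802.15.4e are not modeled.

```python
def choose_beacon_slot(neighbor_bitmaps, num_slots):
    """Pick the lowest superframe-duration (SD) slot that no neighbor
    reports as occupied; return None if the bitmap is full."""
    occupied = 0
    for bm in neighbor_bitmaps:      # OR together the advertised SD bitmaps
        occupied |= bm
    for slot in range(num_slots):
        if not occupied & (1 << slot):
            return slot
    return None

# Illustrative network: 16 SD slots, three neighboring coordinators already allocated.
NUM_SLOTS = 16
neighbors = [0b0000000000000001,   # neighbor A beacons in slot 0
             0b0000000000000110,   # neighbor B reports slots 1 and 2 busy
             0b0000000000001000]   # neighbor C beacons in slot 3

slot = choose_beacon_slot(neighbors, NUM_SLOTS)
print("new coordinator takes SD slot", slot)   # -> 4, avoiding beacon collision
```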


2015 ◽  
Vol 15 (4) ◽  
pp. 583-592 ◽  
Author(s):  
Jing Yu ◽  
Xianwen Bao ◽  
Yang Ding ◽  
Wei Zhang ◽  
Lingling Zhou

2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Shanghong Zhang ◽  
Wenda Li ◽  
Zhu Jing ◽  
Yujun Yi ◽  
Yong Zhao

Three parallel methods (OpenMP, MPI, and OpenACC) are evaluated for the computation of a two-dimensional dam-break model using the explicit finite volume method. A dam-break event in the Pangtoupao flood storage area in China is selected as a case study to demonstrate the key technologies for implementing the parallel computation, and the resulting acceleration of each method is evaluated. The simulation results show that the OpenMP and MPI parallel methods achieve speedup factors of 9.8× and 5.1×, respectively, on a 32-core computer, whereas the OpenACC parallel method achieves a speedup factor of 20.7× on an NVIDIA Tesla K20c graphics card. The results show that if the memory required by the dam-break simulation does not exceed the memory capacity of a single computer, the OpenMP parallel method is a good choice, whereas if GPU acceleration is available, the OpenACC parallel method gives the best acceleration. Finally, the MPI parallel method is suitable for models that require little data exchange but large-scale calculation. This study compares the efficiency and methodology of accelerating algorithms for a dam-break model and can serve as a reference for selecting the best acceleration method for similar hydrodynamic models.
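
The MPI approach rests on domain decomposition: each process updates its own subdomain and exchanges ghost (halo) cells with its neighbors at every explicit time step. The sketch below shows that pattern with mpi4py, using a one-dimensional explicit stencil as a stand-in for the two-dimensional finite-volume flux update; it is not the authors' code, and the grid size, stencil, step count, and file name are invented.

```python
# Run with, e.g.:  mpiexec -n 4 python halo_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_LOCAL, STEPS = 100, 50
# Local strip of the domain plus one ghost cell on each side.
u = np.zeros(N_LOCAL + 2)
if rank == 0:
    u[1] = 1.0        # a crude stand-in for the dam-break discontinuity

left, right = rank - 1, rank + 1
for _ in range(STEPS):
    # Exchange ghost cells with the neighbouring subdomains.
    if left >= 0:
        u[0] = comm.sendrecv(u[1], dest=left, sendtag=0, source=left, recvtag=1)
    if right < size:
        u[-1] = comm.sendrecv(u[-2], dest=right, sendtag=1, source=right, recvtag=0)
    # Explicit update of the interior cells (stand-in for the FV flux step).
    u[1:-1] = u[1:-1] + 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])

total = comm.reduce(float(u[1:-1].sum()), op=MPI.SUM, root=0)
if rank == 0:
    print("global sum of the solution:", total)
```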


Author(s):  
Ting-Hsuan Wang ◽  
Cheng-Ching Huang ◽  
Jui-Hung Hung

Motivation: Cross-sample comparisons and large-scale meta-analyses based on next-generation sequencing (NGS) require replicable and universal data preprocessing, including the removal of adapter fragments from contaminated reads (i.e. adapter trimming). Modern adapter trimmers require users to provide candidate adapter sequences for each sample, which are sometimes unavailable or falsely documented in repositories such as GEO or SRA; large-scale meta-analyses are therefore jeopardized by suboptimal adapter trimming.
Results: Here we introduce a set of fast and accurate adapter detection and trimming algorithms that require no a priori adapter sequences. These algorithms are implemented in modern C++ with SIMD and multithreading to accelerate processing. Our experiments and benchmarks show that the implementation (EARRINGS), without being given any hint of the adapter sequences, reaches comparable accuracy and higher throughput than existing adapter trimmers. EARRINGS is particularly useful in meta-analyses of large batches of datasets and can be incorporated into sequence analysis pipelines at all scales.
Availability and implementation: EARRINGS is open-source software and is available at https://github.com/jhhung/EARRINGS.
Supplementary information: Supplementary data are available at Bioinformatics online.
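
For intuition about trimming without a priori adapter sequences, the heavily simplified sketch below guesses the adapter as the most frequent 3' k-mer across reads and removes it by exact or partial suffix matching. EARRINGS' actual detection is far more sophisticated (and implemented in SIMD, multithreaded C++); the reads and k-mer length here are invented.

```python
from collections import Counter

def infer_adapter(reads, k=12):
    """Guess the adapter as the most frequent k-mer seen in the reads'
    3' tails (a crude proxy for read-through detection)."""
    tails = Counter(r[-k:] for r in reads if len(r) >= k)
    return tails.most_common(1)[0][0]

def trim(read, adapter, min_overlap=4):
    """Remove the adapter: full matches anywhere, or a partial match where
    a prefix of the adapter overlaps the end of the read."""
    pos = read.find(adapter)
    if pos != -1:
        return read[:pos]
    for n in range(len(adapter) - 1, min_overlap - 1, -1):
        if read.endswith(adapter[:n]):
            return read[:-n]
    return read

reads = [
    "ACGTACGTACGTAGATCGGAAGAGC",    # insert + adapter read-through
    "TTTTCCCCGGGGAGATCGGAAGAGC",    # different insert, same adapter
    "GGGGCCCCTTTTAAAA",             # no adapter
]
adapter = infer_adapter(reads)
print("inferred adapter:", adapter)
print([trim(r, adapter) for r in reads])
```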

