Policy Driven Negotiation to Improve the QoS in Data Grid

Author(s):  
Ghalem Belalem

Data grids have become an interesting and popular domain in the grid community (Foster and Kesselman, 2004). Grids are generally proposed as solutions for large-scale systems, where data replication is a well-known technique used to reduce access latency and bandwidth consumption and to increase availability. In spite of the advantages of replication, several problems remain to be solved, such as:
• Replica placement, which determines the optimal locations of replicated data in order to reduce storage and data access costs (Xu et al., 2002);
• Replica selection, which determines which replica should be accessed, in terms of consistency, when a read or write operation must be executed (Ranganathan and Foster, 2001);
• The degree of replication, which consists in finding a minimal number of replicas without reducing the performance of user applications;
• Replica consistency, which concerns the consistency of a set of replicated data and provides the user with a completely coherent view of all the replicas (Gray et al., 1996).
Our principal aim in this article is to integrate into the consistency management service an approach based on an economic model for resolving conflicts detected in the data grid.
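As background for the conflict resolution problem the article addresses, the following minimal sketch shows one standard way of detecting a write-write conflict between two replicas using version vectors. It is a hypothetical illustration only; the class and function names are assumptions, not the article's mechanism.

```python
# Illustrative sketch only: a minimal version-vector check for detecting
# write-write conflicts between two replicas of the same object.
# The names (Replica, detect_conflict) are assumptions, not the article's API.

from dataclasses import dataclass, field


@dataclass
class Replica:
    node_id: str
    # version vector: node id -> number of updates seen from that node
    versions: dict = field(default_factory=dict)

    def local_write(self):
        """Record a local update on this replica."""
        self.versions[self.node_id] = self.versions.get(self.node_id, 0) + 1


def detect_conflict(a: Replica, b: Replica) -> bool:
    """Two replicas conflict when neither version vector dominates the other."""
    nodes = set(a.versions) | set(b.versions)
    a_ahead = any(a.versions.get(n, 0) > b.versions.get(n, 0) for n in nodes)
    b_ahead = any(b.versions.get(n, 0) > a.versions.get(n, 0) for n in nodes)
    return a_ahead and b_ahead  # concurrent, divergent updates


if __name__ == "__main__":
    r1, r2 = Replica("site-A"), Replica("site-B")
    r1.local_write()                 # update applied at site A only
    r2.local_write()                 # concurrent update applied at site B only
    print(detect_conflict(r1, r2))   # True: the replicas have diverged
```

Conflicts detected this way are the ones that a negotiation-based economic model would then have to resolve.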


2010
Vol 1 (4)
pp. 42-57
Author(s):
Ghalem Belalem
Naima Belayachi
Radjaa Behidji
Belabbes Yagoubi

Data grids are current solutions to the needs of large-scale systems; they provide a set of heterogeneous, geographically distributed resources. Their goal is to offer a large capacity for parallel computation, ensure effective and rapid data access, improve availability, and tolerate failures. In such systems, however, these advantages are possible only through replication. The use of this technique raises the problem of maintaining the consistency of replicas of the same data set: guaranteeing the reliability of the replica set requires strong coherence, which in turn penalizes performance. In this paper, the authors propose to study the influence of load balancing on replica quality. To this end, a hybrid consistency management service is developed, which combines the pessimistic and optimistic approaches and is extended by a load balancing service to improve quality of service. The service is built on a two-level hierarchical model.
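As a purely illustrative sketch of what a two-level hybrid scheme can look like, the code below applies pessimistic (synchronous) propagation inside a cluster and optimistic (deferred) propagation between clusters. The class names and update rules are assumptions, not the authors' implementation, and the load balancing extension is not shown.

```python
# Hypothetical sketch of a two-level hybrid consistency scheme:
# pessimistic (synchronous) propagation inside a cluster, optimistic
# (deferred) propagation between clusters. Names are illustrative only.

class Cluster:
    def __init__(self, name, nodes):
        self.name = name
        self.nodes = nodes        # node id -> local key/value replica store
        self.pending = []         # updates not yet pushed to other clusters

    def write(self, key, value):
        # Pessimistic level: update every replica of the cluster before returning.
        for node in self.nodes:
            self.nodes[node][key] = value
        # Optimistic level: queue the update for lazy inter-cluster propagation.
        self.pending.append((key, value))


def propagate(source: Cluster, targets: list):
    """Periodically flush queued updates to the other clusters (optimistic level)."""
    for key, value in source.pending:
        for cluster in targets:
            for node in cluster.nodes:
                cluster.nodes[node][key] = value
    source.pending.clear()


if __name__ == "__main__":
    c1 = Cluster("c1", {"n1": {}, "n2": {}})
    c2 = Cluster("c2", {"n3": {}})
    c1.write("file.dat", "v2")         # immediately consistent inside c1
    propagate(c1, [c2])                # c2 converges later, when propagation runs
    print(c2.nodes["n3"]["file.dat"])  # 'v2'
```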


Author(s):  
Ghalem Belalem

So as not to be limited in terms of computation, storage, and communication, the grid concept, which is continually evolving, makes it possible to offer a unified working environment together with great storage and computing power. To manage data sharing in the data grid, replication is used; in spite of its advantages, however, concurrent access to the data can introduce inconsistencies, hence the great challenge of ensuring consistency management between the replicas of an object. In this chapter, we describe a two-layer model adapted to large-scale applications, which supports a hybrid approach to replica consistency management based on the pessimistic and optimistic approaches. This hybrid approach provides a mechanism based on various forms of negotiation between virtual consistency agents in order to reduce the number of conflicts between replicas in data grids.
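The abstract does not describe the negotiation protocol itself; the sketch below is only one hypothetical reading, in which one virtual consistency agent per replica bids for its local version and the highest bid wins. All names and the utility function are assumptions.

```python
# Illustrative sketch (not the chapter's protocol): virtual consistency agents,
# one per replica, resolve a divergence in a single negotiation round in which
# each agent bids for its local version; the highest bid wins.

from dataclasses import dataclass


@dataclass
class ConsistencyAgent:
    site: str
    version: int        # version number of the local replica
    freshness: float    # how recently the replica was updated (0..1), assumed
    load: float         # current load of the hosting site (0..1), assumed

    def bid(self) -> float:
        # Hypothetical utility: prefer fresh replicas hosted on lightly loaded sites.
        return self.freshness * (1.0 - self.load)


def negotiate(agents: list) -> "ConsistencyAgent":
    """Return the agent whose replica version is adopted by all the others."""
    return max(agents, key=lambda a: (a.bid(), a.version))


if __name__ == "__main__":
    agents = [
        ConsistencyAgent("site-A", version=7, freshness=0.9, load=0.4),
        ConsistencyAgent("site-B", version=8, freshness=0.6, load=0.7),
    ]
    winner = negotiate(agents)
    print(f"replicas converge to {winner.site}'s version {winner.version}")
```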


2013
Vol 5 (1)
pp. 70-81
Author(s):
Mohammed K. Madi
Yuhanis Yusof
Suhaidi Hassan

Data Grid is an infrastructure that manages huge amounts of data files and provides intensive computational resources across geographically distributed collaborations. To increase resource availability and to ease resource sharing in such an environment, there is a need for replication services. Data replication is one of the methods used to improve the performance of data access in distributed systems by placing multiple copies of data files at distributed sites. The replica placement mechanism is the process of identifying where to place copies of replicated data files in a Grid system. Existing work identifies suitable sites based on the number of requests and the read cost of the required file; such approaches consume large amounts of bandwidth and increase the computational time. The authors propose a replica placement strategy (RPS) that finds the best locations to store replicas based on four criteria, namely, 1) Read Cost, 2) File Transfer Time, 3) Sites' Workload, and 4) Replication Sites. OptorSim is used to evaluate the performance of this replica placement strategy. The simulation results show that RPS requires less execution time and consumes less network usage than the existing approaches of Simple Optimizer and LFU (Least Frequently Used).
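The paper defines its own combination of the four criteria; the sketch below merely illustrates one plausible scoring of candidate sites, in which a high read cost argues for placing a replica there while a long transfer time, a heavy workload, and many nearby replicas argue against it. The weights, field names, and site names are assumptions.

```python
# Hypothetical sketch of a criteria-based placement score in the spirit of RPS.
# The weighting and the exact combination of the four criteria are assumptions;
# the paper defines the actual strategy.

from dataclasses import dataclass


@dataclass
class CandidateSite:
    name: str
    read_cost: float        # aggregated cost of past reads of the file from this site
    transfer_time: float    # estimated time to ship the replica to this site
    workload: float         # current workload of the site
    nearby_replicas: int    # replicas already placed in this site's region


def placement_score(s: CandidateSite, w=(1.0, 1.0, 1.0, 1.0)) -> float:
    """Higher is better under this illustrative weighting: expensive reads make
    a local replica worthwhile; slow transfer, heavy load, and existing nearby
    copies count against the site."""
    return (w[0] * s.read_cost
            - w[1] * s.transfer_time
            - w[2] * s.workload
            - w[3] * s.nearby_replicas)


def choose_site(candidates: list) -> CandidateSite:
    return max(candidates, key=placement_score)


if __name__ == "__main__":
    sites = [
        CandidateSite("CERN",  read_cost=3.0, transfer_time=1.2, workload=0.5, nearby_replicas=2),
        CandidateSite("IN2P3", read_cost=2.5, transfer_time=0.8, workload=0.3, nearby_replicas=1),
    ]
    print(choose_site(sites).name)   # 'IN2P3' under these illustrative numbers
```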


Author(s):
Ghalem Belalem
Belabbes Yagoubi
Samah Bouamama

Data Grids are solutions currently suggested to meet the needs of large-scale systems. They provide highly varied and geographically distributed resources whose goal is to ensure fast and effective data access, improve availability, and tolerate failures. In such systems, these advantages are not possible without the use of replication; however, replication poses the problem of maintaining the consistency of replicas of the same data. Data replication strategies and job scheduling are usually evaluated by simulation, and several grid simulators have been developed; one of the most interesting simulators for this study is the OptorSim tool. In this chapter, the authors present an extension of OptorSim with a consistency management module for replicas in Data Grids; they propose a hybrid approach that combines economic models designed for a two-level hierarchical model. The suggested approach has two aims: the first is to reduce response times compared to a pessimistic approach, and the second is to provide better quality of service compared to an optimistic approach.
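The abstract does not specify which economic models are combined; as one concrete illustration only, the sketch below shows a second-price (Vickrey) auction in which cluster-level representatives bid for having their replica state adopted at the top level of the hierarchy. The bids, names, and pricing rule are assumptions, not the chapter's actual model.

```python
# Illustrative second-price (Vickrey) auction between cluster representatives,
# shown only to make the idea of an economic consistency model concrete.
# Bids, payments, and names are assumptions, not the chapter's actual model.

def second_price_auction(bids: dict) -> tuple:
    """bids: cluster name -> bid value. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price


if __name__ == "__main__":
    # Each cluster bids according to how valuable it is that its version wins
    # (e.g. number of local readers, freshness of its replica).
    winner, price = second_price_auction({"cluster-1": 8.0, "cluster-2": 5.5})
    print(winner, price)   # cluster-1 adopts its version and pays 5.5
```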


2014
Vol 2014
pp. 1-10
Author(s):
Priyanka Vashisht
Rajesh Kumar
Anju Sharma

In data grids, scientific and business applications produce huge volumes of data which need to be transferred among the distributed and heterogeneous nodes of the grid. Data replication provides a solution for managing data files efficiently in large grids: it enhances data availability, which reduces the overall access time of a file. In this paper an algorithm named EDRA, which uses agents for the data grid, is proposed and implemented. EDRA performs dynamic replication over a hierarchical structure, and this structure is taken into account for the selection of the best replica. The decision for selecting the best replica is based on scheduling parameters: the bandwidth, the load gauge, and the computing capacity of the node. Scheduling in the data grid helps reduce data access time, and the load is distributed evenly across the nodes of the data grid by considering these scheduling parameters. EDRA is implemented using the data grid simulator OptorSim, with the European Data Grid CMS testbed topology used in the experiment. The simulation results compare BHR, LRU, No Replication, and EDRA, and show the efficiency of the EDRA algorithm in terms of mean job execution time, network usage, and storage usage per node.
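EDRA's actual selection formula is defined in the paper; the sketch below only illustrates ranking replica hosts by the three scheduling parameters listed in the abstract (bandwidth, load gauge, computing capacity), with an assumed multiplicative combination. Site names and numbers are illustrative.

```python
# Hypothetical sketch of replica selection driven by the three scheduling
# parameters the abstract lists (bandwidth, load, computing capacity).
# The combination rule is an assumption; EDRA's formula is defined in the paper.

from dataclasses import dataclass


@dataclass
class ReplicaHost:
    site: str
    bandwidth_mbps: float    # available bandwidth to the requesting node
    load: float              # load gauge of the node, 0 (idle) .. 1 (saturated)
    capacity_mips: float     # computing capacity of the node


def replica_rank(h: ReplicaHost) -> float:
    """Higher is better: fast link, lightly loaded node, powerful node."""
    return h.bandwidth_mbps * (1.0 - h.load) * h.capacity_mips


def select_best_replica(hosts: list) -> ReplicaHost:
    return max(hosts, key=replica_rank)


if __name__ == "__main__":
    hosts = [
        ReplicaHost("RAL",   bandwidth_mbps=622.0, load=0.8, capacity_mips=1500.0),
        ReplicaHost("IN2P3", bandwidth_mbps=155.0, load=0.1, capacity_mips=1200.0),
    ]
    print(select_best_replica(hosts).site)   # 'RAL' under these illustrative numbers
```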

