Replication Methods and Their Properties

Author(s):  
Lars Frank

The most important evaluation criteria for replication methods are availability, performance, consistency, and cost. Performance and response time may be improved by substituting remote data accesses with local accesses to replicated data. The availability of the system can be increased by falling back to replicated data if a local failure or disaster occurs. The major disadvantages of data replication are the additional cost of updating the replicated data and the problems of keeping the replicas consistent. Tables 1 and 2 give an overview of the evaluation of the replication methods described in this article. Frank (1999) described how such replication overviews may be used to optimize databases in practice. This article evaluates considerably more replication methods, so the scope for such optimization is correspondingly larger; however, the evaluation criteria described above have to be subdivided to bring out the different properties of the different replication methods.
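The trade-off sketched above can be illustrated with a small, hypothetical primary-copy scheme with eager propagation (the class and method names below are illustrative, not taken from Frank's evaluation): reads are served from the local replica and only fall back to a remote access, while every update has to be propagated to all copies, which is exactly where the extra update cost and the consistency problem come from.

```python
# Hypothetical sketch: local reads with remote fallback, eager update of all copies.

class ReplicatedTable:
    def __init__(self, primary_site, replica_sites):
        self.primary_site = primary_site                  # authoritative copy
        self.data = {site: {} for site in [primary_site] + replica_sites}

    def read(self, key, local_site):
        # Performance/availability benefit: prefer the local replica.
        local = self.data.get(local_site, {})
        if key in local:
            return local[key]
        # Otherwise fall back to a remote access to the primary copy.
        return self.data[self.primary_site].get(key)

    def write(self, key, value):
        # Cost/consistency drawback: the update must reach every copy.
        for site_data in self.data.values():
            site_data[key] = value
```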

2001 ◽  
Vol 02 (03) ◽  
pp. 317-329 ◽  
Author(s):  
MUSTAFA MAT DERIS ◽  
ALI MAMAT ◽  
PUA CHAI SENG ◽  
MOHD YAZID SAMAN

This article addresses the performance of data replication protocols in terms of data availability and communication cost. Specifically, we present a new protocol, called the Three Dimensional Grid Structure (TDGS) protocol, to manage data replication in distributed systems. The protocol provides high availability for read and write operations with limited fault tolerance at low communication cost. With the TDGS protocol, a read operation needs to access only two data copies, while a write operation requires only a minimal number of copies. In comparison with other protocols, TDGS requires lower communication cost per operation while providing higher data availability.
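The cost claim in the abstract can be made concrete with a generic quorum sketch. The code below is not the TDGS quorum construction (which exploits the three-dimensional grid layout to keep the read quorum at two copies); it only shows the versioned read/write quorum mechanics that such protocols rely on, under the usual intersection condition r + w > n and assuming no concurrent writers.

```python
# Generic versioned read/write quorums; illustrative only, not the TDGS layout.
import random

class QuorumStore:
    def __init__(self, n_copies, read_quorum, write_quorum):
        assert read_quorum + write_quorum > n_copies, "quorums must intersect"
        self.copies = [{"version": 0, "value": None} for _ in range(n_copies)]
        self.r, self.w = read_quorum, write_quorum

    def write(self, value):
        # Find the latest version via a read quorum, then install a newer
        # version at a write quorum (assumes a single writer at a time).
        current = max(c["version"] for c in random.sample(self.copies, self.r))
        for copy in random.sample(self.copies, self.w):
            copy.update(version=current + 1, value=value)

    def read(self):
        # Any read quorum intersects the last write quorum, so the highest
        # version seen is the latest committed value.
        return max(random.sample(self.copies, self.r), key=lambda c: c["version"])["value"]
```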


2016 ◽  
Vol 11 (2) ◽  
pp. 126-134
Author(s):  
Ma Haifeng ◽  
Gao Zhenguo ◽  
Yao Nianmin

Cloud storage services enable users to migrate their data and applications to the cloud, which relieves them of local data maintenance and brings great convenience. In cloud storage, however, the storage servers may not be fully trustworthy, so verifying the integrity of cloud data with low overhead for users has become a growing concern. Many remote data integrity protection methods have been proposed, but these methods authenticate cloud files one by one when verifying multiple files, so the computation and communication overhead remain high. Aiming at this problem, a hierarchical remote data possession checking (H-RDPC) method is proposed, which provides efficient and secure remote data integrity protection and supports dynamic data operations. This paper gives the algorithm description, security analysis, and false-negative rate analysis of H-RDPC. The security analysis and experimental performance evaluation show that the proposed H-RDPC is efficient and reliable in verifying massive numbers of cloud files, with a 32–81% performance improvement over RDPC.
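The efficiency argument, verifying many files at once rather than one by one, can be illustrated with a hash-tree sketch. This is not the authors' H-RDPC construction; it is only a minimal example of how per-file digests can be aggregated hierarchically so that a single root comparison covers a whole batch of files.

```python
# Minimal illustration of hierarchical aggregation of per-file digests.
import hashlib

def digest(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_root(leaf_digests):
    """Combine per-file digests pairwise until a single root digest remains."""
    level = list(leaf_digests)
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd levels
            level.append(level[-1])
        level = [digest(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The owner keeps only the root; the server recomputes it over the stored files,
# so one comparison authenticates the whole batch instead of one check per file.
files = [b"file-0 contents", b"file-1 contents", b"file-2 contents"]
owner_root = tree_root(digest(f) for f in files)
server_root = tree_root(digest(f) for f in files)    # server-side recomputation
assert owner_root == server_root
```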


2000 ◽  
Vol 75 (6) ◽  
pp. 247-253 ◽  
Author(s):  
Ing-Ray Chen ◽  
Ding-Chau Wang ◽  
Chih-Ping Chu

2016 ◽  
Vol 1 (1) ◽  
pp. 145-158 ◽  
Author(s):  
Hualong Wu ◽  
Bo Zhao

The emergence of cloud computing opens boundless possibilities for both individuals and organizations, owing to advantages unprecedented in the history of IT: on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid resource elasticity, usage-based pricing, and transference of risk. Many individuals and organizations ease the pressure on their local data storage, and mitigate its maintenance overhead, by outsourcing data to the cloud. However, outsourced data are not absolutely safe in the cloud. To strengthen users' confidence in the integrity of their outsourced data, to promote the rapid deployment of cloud data storage services, and to regain security assurances over outsourced data dependability, many scholars have designed Remote Data Auditing (RDA) techniques, a new concept that enables public auditability of data outsourced to the cloud. RDA is a useful technique for ensuring the correctness of data outsourced to cloud servers. This paper presents a comprehensive survey of remote data auditing techniques for the cloud. To present a taxonomy, recent remote auditing approaches are categorized into three classes: replication-based, erasure-coding-based, and network-coding-based. The paper also explores the major open issues.
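As background for the surveyed techniques, the basic remote-data-auditing interaction can be sketched as a keyed challenge-response spot check. Real RDA schemes (whether replication-, erasure-coding-, or network-coding-based) use homomorphic tags so the server returns a constant-size proof instead of the blocks themselves; the sketch below, with illustrative names, only shows the interaction pattern.

```python
# Sketch of the owner/server audit interaction: tag at setup, spot-check later.
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)                          # owner's secret auditing key

def tag(index: int, block: bytes) -> bytes:
    return hmac.new(KEY, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

# Setup: the owner tags the blocks, keeps the tags, and outsources the blocks.
blocks = [b"block-%d" % i for i in range(100)]
tags = {i: tag(i, b) for i, b in enumerate(blocks)}    # retained by the owner

# Audit: challenge a random subset of block indices and verify the responses.
challenge = secrets.SystemRandom().sample(range(len(blocks)), 10)
response = {i: blocks[i] for i in challenge}           # produced by the server
assert all(hmac.compare_digest(tag(i, blk), tags[i]) for i, blk in response.items())
```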


Author(s):  
K. Sasikumar ◽  
B. Vijayakumar

In this paper, we perform a comparative study of different data replication strategies, namely the Adaptive Data Replication Strategy (ADRS), the Dynamic Cost Aware Re-Replication and Rebalancing Strategy (DCR2S), and the Efficient Placement Algorithm (EPA), in the cloud environment. The three techniques are implemented in Java, and a performance analysis is conducted to compare them across several parameters: load variance, response time, probability of file availability, System Byte Effective Rate (SBER), latency, and fault ratio. The analysis shows that varying the number of file replicas produces noticeable deviations in these parameters. The comparative results are also analyzed.
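The sensitivity of file availability to the replica count has a simple textbook explanation: if each replica site is up independently with probability p, a file with r replicas is available whenever at least one replica is reachable. The model below uses this standard independence assumption and is not necessarily the exact model used by the compared strategies.

```python
# Availability of a file as a function of its replica count (independent sites).
def file_availability(p_site_up: float, replicas: int) -> float:
    return 1.0 - (1.0 - p_site_up) ** replicas

for r in range(1, 6):
    print(f"replicas={r}  availability={file_availability(0.95, r):.6f}")
```

With p = 0.95, for example, going from one to three replicas raises availability from 0.95 to 0.999875, which is why the measured parameters deviate as the replica count varies.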


2021 ◽  
Author(s):  
Deepak Kumar Dash ◽  
Dileep Chandran Nair ◽  
Srinivas Potluri

For drilling contractors, the moment of truth is the operation at the site. If the technician at the site encounters a problem he cannot solve, then everything stops. The team has to wait for a subject matter expert (SME) to arrive at the site to diagnose and rectify the problem. This process of SME mobilization, and the Non-Productive Time (NPT) accumulated until then, results in losses of hundreds of thousands of dollars. Hence the key challenge is converting the Sparse to Adequate availability of the Right Knowledge at the Right Time and the Right Place in support of the technicians. This paper focuses on the approach of moving from handheld devices to a hands-free environment at sites and connecting local/global support to site support systems, to reduce cost, improve HSE, and enhance operational performance. The augmented-reality-enabled smart-glass headsets are rugged, Zone 1 certified, and voice-operated, which makes them better suited than the smart tablets that were also considered during the Technology Qualification Process. The evaluation criteria were: 1. Availability of and adherence to the digital work instructions while operating; it was carefully verified that no step of the work instruction was missed while inspection or maintenance continued. 2. Reduced travel/accommodation cost: normally, at the time of a shutdown, the rig crew contacts subject matter experts (SMEs) and, at times, the SME in turn contacts the OEM support team to mobilize service engineers globally. 3. Response time improvement: availability of SME support right at the time of need results in a better response time to diagnose and fix the issue at hand; improvement of the process from call logging to final resolution is considered an important metric. Travel restrictions imposed by Covid-19 are also addressed through distanced inspection. The hands-free environment is compared vis-à-vis handheld devices. Better training and knowledge transfer are achieved through better communication methods, and this fits well with learning by doing. Subsequent text analysis (NLP speech-to-text) is planned through deep learning models to derive related predictions. Sparse to Adequate availability of support to rig staff, with the Right Knowledge at the Right Place at the Right Time, is the key outcome of this Proof of Value project.


Author(s):  
Vassilios V. Dimakopoulos ◽  
Spiridoula Margariti ◽  
Mirto Ntetsika ◽  
Evaggelia Pitoura

Maintaining multiple copies of data items is a commonly used mechanism for improving the performance and fault-tolerance of any distributed system. By placing copies of data items closer to their requesters, the response time of queries can be improved. An additional reason for replication is load balancing. For instance, by allocating many copies to popular data items, the query load can be evenly distributed among the servers that hold these copies. Similarly, by eliminating hotspots, replication can lead to a better distribution of the communication load over the network links. Besides performance-related reasons, replication improves system availability, since the larger the number of copies of an item, the more site failures can be tolerated. In this chapter we survey replication methods applicable to p2p systems. Although there exist some general techniques, methodologies are distinguished according to the overlay organization (structured and unstructured) they are aimed at. After replicas are created and distributed, a major issue is their maintenance. We present strategies that have been proposed for keeping replicas up to date so as to achieve a desired level of consistency.
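As a concrete illustration of the load-balancing motivation above, the sketch below allocates a replica budget in proportion to each item's query popularity. Proportional allocation is only one possible policy (and the function name is illustrative); the chapter surveys alternative allocation and maintenance strategies for both structured and unstructured overlays.

```python
# Popularity-driven replica allocation: popular items get more copies so the
# query load spreads over more servers.
def allocate_replicas(query_rates, total_copies):
    """Distribute a replica budget in proportion to each item's query rate."""
    total_rate = sum(query_rates.values())
    return {item: max(1, round(total_copies * rate / total_rate))
            for item, rate in query_rates.items()}

print(allocate_replicas({"hot": 80, "warm": 15, "cold": 5}, total_copies=20))
# e.g. {'hot': 16, 'warm': 3, 'cold': 1}
```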

