An Efficient Placement Algorithm for Data Replication and To Improve System Availability in Cloud Environment

Author(s):
Sasikumar Karuppusamy ◽
Madiajagan Muthaiyan ◽
K. Sasikumar ◽
B. Vijayakumar

In this paper, we present a comparative study of three data replication strategies in the cloud environment: the Adaptive Data Replication Strategy (ADRS), the Dynamic Cost Aware Re-Replication and Rebalancing Strategy (DCR2S) and the Efficient Placement Algorithm (EPA). All three techniques are implemented in Java, and a performance analysis is conducted using several parameters: Load Variance, Response Time, Probability of File Availability, System Byte Effective Rate (SBER), Latency and Fault Ratio. The analysis shows that varying the number of file replicas produces deviations in the outcomes of these parameters. The comparative results are also analyzed.
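As an illustration of the kind of metric being compared, here is a minimal Java sketch of load variance across storage nodes; the population-variance definition and the sample loads are assumptions, since the abstract does not define the metric precisely.

```java
import java.util.Arrays;

/** Minimal sketch: load variance across storage nodes (hypothetical metric definition). */
public class LoadVariance {
    // Population variance of per-node load; lower values mean better balance.
    static double loadVariance(double[] nodeLoads) {
        double mean = Arrays.stream(nodeLoads).average().orElse(0.0);
        return Arrays.stream(nodeLoads)
                     .map(l -> (l - mean) * (l - mean))
                     .average().orElse(0.0);
    }

    public static void main(String[] args) {
        double[] loads = {0.62, 0.71, 0.55, 0.80};   // fraction of capacity used per node
        System.out.printf("load variance = %.5f%n", loadVariance(loads));
    }
}
```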


Author(s):  
Umesh Banodha ◽  
Praveen Kumar Kataria

Cloud is an emerging technology that stores necessary data, and electronic data is being produced in gigantic quantities. To maintain the efficacy of this data, data recovery services are highly essential. Cloud computing is anticipated to be the foundation of the IT enterprise, and it is an impeccable solution for moving databases and application software to big data centers, where the management of data and services is not completely reliable. Our focus is on cloud data storage security, a vital feature when it comes to providing quality service. It should also be noted that the cloud comprises an extremely dynamic and heterogeneous environment, and because of the large scale of physical data and resources, the failure of data center nodes is completely normal. Therefore, the cloud environment needs effective adaptive management of data replication to handle this indispensable characteristic. Disaster recovery using cloud resources is an attractive approach, and the proposed data replication strategy carefully chooses the data files for replication and dynamically determines the number of replicas and the effective data nodes for replication. Thus, the objective of the proposed algorithm is, first, to help users gather information from a remote location where network connectivity is absent and, second, to recover files that are deleted or wrecked for any reason. Time-oriented problems are also resolved, so the recovery process executes in less time.
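A common way to decide "how many replicas" dynamically, in the spirit of the strategy described above, is to pick the smallest count that meets a target availability. The independent-failure model in this Java sketch is an illustrative assumption, not the paper's model.

```java
/** Hedged sketch: choosing a replica count to meet a target availability,
 *  assuming each replica's node fails independently with probability f. */
public class ReplicaCount {
    static int replicasNeeded(double targetAvailability, double nodeFailureProb) {
        // At least one replica survives with probability 1 - f^n.
        // Require 1 - f^n >= A  =>  n >= ln(1 - A) / ln(f).
        return (int) Math.ceil(Math.log(1.0 - targetAvailability)
                             / Math.log(nodeFailureProb));
    }

    public static void main(String[] args) {
        // e.g. 99.99% availability with nodes that are down 5% of the time
        System.out.println(replicasNeeded(0.9999, 0.05)); // -> 4 replicas
    }
}
```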


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 40240-40254
Author(s):  
Ahmed Awad ◽  
Rashed Salem ◽  
Hatem Abdelkader ◽  
Mustafa Abdul Salam

Author(s):  
Oshin Sharma ◽  
Hemraj Saini

Increasing the availability of resources while reducing the energy consumption of data centers, all while providing a good level of service, is one of the major challenges in the cloud environment. With the growing number and size of data centers around the world, current research focuses on reducing the energy consumed inside them. Thus, this article presents an energy-efficient VM placement algorithm for mapping virtual machines onto physical machines. The idea behind this mapping is to reduce the number of physical machines used inside the data center. In the proposed algorithm, the VM placement problem is formulated as a multi-objective optimization solved with a non-dominated sorting genetic algorithm. The objectives are: minimizing energy consumption, reducing the level of SLA violations and minimizing the migration count.
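A minimal Java sketch of how such a fitness evaluation might look inside the NSGA-based search; the linear power model, the SLA-violation definition and all numbers below are illustrative assumptions rather than the article's actual formulation.

```java
import java.util.Arrays;

/** Illustrative sketch (not the authors' code): evaluating the three objectives of a
 *  candidate VM-to-host placement. Assumptions: linear power in CPU utilization,
 *  SLA violations counted as overcommitted hosts, migrations as VMs whose host changed. */
public class PlacementObjectives {
    // Energy: idle power plus a linear utilization term, for powered-on hosts only.
    static double energyWatts(double[] hostUtil, double idleW, double peakW) {
        double total = 0;
        for (double u : hostUtil)
            if (u > 0) total += idleW + (peakW - idleW) * u;
        return total;
    }

    // SLA violations: hosts whose requested CPU exceeds capacity (utilization > 1).
    static long slaViolations(double[] hostUtil) {
        return Arrays.stream(hostUtil).filter(u -> u > 1.0).count();
    }

    // Migration count: VMs placed on a different host than before.
    static long migrations(int[] oldPlacement, int[] newPlacement) {
        long moved = 0;
        for (int i = 0; i < oldPlacement.length; i++)
            if (oldPlacement[i] != newPlacement[i]) moved++;
        return moved;
    }

    public static void main(String[] args) {
        double[] util = {0.9, 1.1, 0.0};   // host 2 is switched off
        System.out.println(energyWatts(util, 100, 250) + " W, "
                + slaViolations(util) + " SLA violation(s), "
                + migrations(new int[]{0, 0, 1}, new int[]{0, 1, 1}) + " migration(s)");
    }
}
```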


2019 ◽  
Vol 37 (6) ◽  
pp. 970-983 ◽  
Author(s):  
Zongda Wu ◽  
Jian Xie ◽  
Xinze Lian ◽  
Jun Pan

Purpose
The security of archival privacy data in the cloud has become the main obstacle to the application of cloud computing in archives management. To this end, aiming at XML archives, this paper presents a privacy protection approach that can ensure the security of privacy data in an untrusted cloud without compromising system availability.

Design/methodology/approach
The basic idea of the approach is as follows. First, the privacy data are strictly encrypted on a trusted client before being submitted to the cloud, to ensure their security. Then, to query the encrypted data efficiently, the approach constructs key feature data for the encrypted data, so that each XML query defined on the privacy data can be executed correctly in the cloud.

Findings
Both theoretical analysis and experimental evaluation demonstrate the overall performance of the approach in terms of security, efficiency and accuracy.

Originality/value
This paper presents a valuable study attempting to protect privacy in the management of XML archives in a cloud environment, and thus has positive significance for promoting the application of cloud computing in digital archive systems.
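A hedged Java sketch of the general client-side pattern the approach describes: encrypt the privacy field before upload, and attach a deterministic keyed tag as the "feature data" so the cloud can match equality queries without seeing plaintext. The use of AES-GCM and HMAC here is an assumption for illustration, not the paper's construction.

```java
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

/** Hypothetical client-side helpers: the cloud stores (ciphertext, feature);
 *  an equality query is translated on the client into the same keyed tag. */
public class PrivateField {
    // key must be a valid AES key (16, 24 or 32 bytes).
    static byte[] encrypt(byte[] key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];   // prepend IV for later decryption
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Deterministic keyed tag ("feature data") enabling equality matching in the cloud.
    static String feature(byte[] key, String value) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return Base64.getEncoder()
                     .encodeToString(mac.doFinal(value.getBytes(StandardCharsets.UTF_8)));
    }
}
```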


Author(s):  
Ahmad Shukri Mohd Noor ◽  
Nur Farhah Mat Zian ◽  
Noor Hafhizah Abd Rahim ◽  
Rabiei Mamat ◽  
Wan Nur Amira Wan Azman

The availability of data in a distributed system can be increased by implementing a fault tolerance mechanism in the system. Reactive fault tolerance methods deal with restarting failed services, placing redundant copies of data on multiple nodes across the network (in other words, data replication) and migrating the data for recovery. Even if the idea of data replication is solid, the challenge is to choose the right replication technique, one able to provide better data availability as well as consistency for the read and write operations on the redundant copies. The Circular Neighboring Replication (CNR) technique exploits a neighboring policy in replicating the data items in the system and performs well with regard to the small number of copies needed to keep system availability at its highest. In a performance analysis against existing techniques, results show that CNR improves system availability by 37% on average while requiring only two replicas to maintain data availability and consistency. The study demonstrates the feasibility of the proposed technique and its potential for deployment in larger and more complex environments.
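In that spirit, here is a minimal Java sketch of a two-replica neighboring placement on a logical ring; the "primary plus one clockwise neighbor" rule is an assumption, as the abstract does not spell out the exact CNR placement rule.

```java
/** Minimal sketch of a neighboring placement policy on a logical ring of nodes. */
public class RingPlacement {
    static int[] replicaNodes(int fileId, int nodeCount) {
        int primary = Math.floorMod(fileId, nodeCount);   // hash the file onto the ring
        int neighbour = (primary + 1) % nodeCount;        // clockwise neighbor holds copy 2
        return new int[]{primary, neighbour};
    }

    public static void main(String[] args) {
        int[] nodes = replicaNodes(42, 8);
        System.out.println("replicas on nodes " + nodes[0] + " and " + nodes[1]);
    }
}
```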


2015 ◽  
Vol 4 (1) ◽  
pp. 163 ◽  
Author(s):  
Alireza Saleh ◽  
Reza Javidan ◽  
Mohammad Taghi FatehiKhajeh

Nowadays, scientific applications generate huge amounts of data, in terabytes or petabytes. Data grids provide solutions to large-scale data management problems, including efficient file transfer and replication. Data is typically replicated in a data grid to improve job response time and data availability. Choosing a reasonable number of replicas and the right locations for them has become a challenge in the data grid. In this paper, a four-phase dynamic data replication algorithm based on temporal and geographical locality is proposed. It includes: 1) evaluating and identifying the popular data and triggering a replication operation when the popularity of the data passes a dynamic threshold; 2) analyzing and modeling the relationship between system availability and the number of replicas, and calculating a suitable number of new replicas; 3) evaluating and identifying the popular data in each site, and placing replicas among the sites; 4) removing the files with the least cost of average access time when encountering insufficient space for replication. The algorithm was tested using OptorSim, a grid simulator developed by the European DataGrid project. The simulation results show that the proposed algorithm outperforms other algorithms in terms of job execution time, effective network usage and percentage of storage filled.
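Phase 1, for example, could be sketched in Java as below; the "mean plus one standard deviation" threshold is an illustrative assumption, not the paper's definition of the dynamic threshold.

```java
import java.util.*;

/** Hedged sketch of phase 1: flag files whose access popularity passes a
 *  dynamic threshold derived from the current access distribution. */
public class PopularityTrigger {
    static Set<String> popularFiles(Map<String, Integer> accessCounts) {
        double mean = accessCounts.values().stream()
                .mapToInt(Integer::intValue).average().orElse(0);
        double variance = accessCounts.values().stream()
                .mapToDouble(c -> (c - mean) * (c - mean)).average().orElse(0);
        double threshold = mean + Math.sqrt(variance);   // dynamic threshold (assumed form)
        Set<String> popular = new HashSet<>();
        for (Map.Entry<String, Integer> e : accessCounts.entrySet())
            if (e.getValue() > threshold) popular.add(e.getKey());
        return popular;                                  // these files trigger replication
    }

    public static void main(String[] args) {
        System.out.println(popularFiles(Map.of("a", 3, "b", 40, "c", 5, "d", 4))); // -> [b]
    }
}
```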


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Xiong Fu ◽  
Wenjie Liu ◽  
Yeliang Cang ◽  
Xiaojie Gong ◽  
Song Deng

Cloud storage has become an important part of cloud systems. Most current cloud storage systems perform well for large files, but they cannot manage small-file storage appropriately. With the development of cloud services, more and more small files are emerging. Therefore, we propose an optimized data replication approach for small files in cloud storage systems, involving a small-file merging algorithm and a block replica placement algorithm. Small files are classified into four types according to their access frequencies, and a number of small files are merged into the same block based on the type they belong to. The replica placement algorithm then helps to improve the access efficiency of small files in a cloud system. Experimental results demonstrate that the proposed approach effectively shortens the time spent reading and writing small files, and that it performs better than two well-known data replication algorithms: HAR and SequenceFile.
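A hedged Java sketch of the merging step: classify small files by access frequency and greedily pack files of the same type into fixed-size blocks. The four frequency boundaries and the first-fit packing are assumptions, since the abstract does not give the exact rules.

```java
import java.util.*;

/** Illustrative sketch (not the paper's exact rules): group small files of the
 *  same access-frequency type so that co-accessed files land in the same block. */
public class SmallFileMerger {
    enum Type { HOT, WARM, COOL, COLD }

    // Hypothetical frequency boundaries for the four types.
    static Type classify(int accessesPerDay) {
        if (accessesPerDay >= 1000) return Type.HOT;
        if (accessesPerDay >= 100)  return Type.WARM;
        if (accessesPerDay >= 10)   return Type.COOL;
        return Type.COLD;
    }

    /** Greedy first-fit merge of (fileName -> sizeBytes) entries of one type into blocks. */
    static List<List<String>> mergeIntoBlocks(Map<String, Long> files, long blockSize) {
        List<List<String>> blocks = new ArrayList<>();
        List<String> current = new ArrayList<>();
        long used = 0;
        for (Map.Entry<String, Long> e : files.entrySet()) {
            if (used + e.getValue() > blockSize && !current.isEmpty()) {
                blocks.add(current);                 // block full: start a new one
                current = new ArrayList<>();
                used = 0;
            }
            current.add(e.getKey());
            used += e.getValue();
        }
        if (!current.isEmpty()) blocks.add(current);
        return blocks;
    }
}
```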

