Cloud Data Center Based Dynamic Optimizing Replica Migration

2019 ◽  
Vol 16 (2) ◽  
pp. 576-579
Author(s):  
T. Narmadha ◽  
J. Gowrishankar ◽  
M. Ramkumar ◽  
K. Vengatesan

Cloud Storage Providers (CSPs) operate geographically distributed data stores offering several storage classes at different prices. An important problem facing application providers is how to exploit price differences across data stores to minimize the monetary cost of applications whose objects include hot-spot objects that are accessed frequently and cold objects that are accessed far less often. This monetary cost comprises replica creation, storage, Put, Get, and potential migration costs. To optimize these costs, we first propose an optimal solution that uses dynamic and linear programming techniques under the assumption that the workload on objects is known in advance. We also propose a lightweight heuristic solution, inspired by an approximation algorithm for the Set Covering Problem, which makes no assumption about the object workload. This solution jointly determines object replica locations, object replica migration times, and the redirection of Get (read) requests to object replicas so that the monetary cost of data storage management is optimized while the user-perceived latency constraint is satisfied.
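The set-cover-inspired heuristic can be pictured as a greedy weighted set cover over candidate data stores: repeatedly pick the store with the lowest cost per newly covered client region until every region is served within the latency bound. The store names, prices, and coverage sets below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: greedy weighted set-cover choice of replica sites.
# stores maps a data-store name to (cost, set of regions it can serve
# within the latency bound); all values here are made up for illustration.

def choose_replica_sites(stores, regions):
    """Greedily pick the store with lowest cost per newly covered region."""
    uncovered = set(regions)
    chosen = []
    while uncovered:
        best = min(
            (s for s in stores if stores[s][1] & uncovered),
            key=lambda s: stores[s][0] / len(stores[s][1] & uncovered),
        )
        chosen.append(best)
        uncovered -= stores[best][1]
    return chosen

stores = {
    "us-east":  (10, {"us", "eu"}),
    "eu-west":  (12, {"eu", "asia"}),
    "ap-south": (8,  {"asia"}),
}
print(choose_replica_sites(stores, {"us", "eu", "asia"}))
# → ['us-east', 'ap-south']
```

This mirrors the classic ln(n)-approximation for weighted set cover; the actual heuristic in the paper additionally handles migration times and Get redirection.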

2015 ◽  
Vol 764-765 ◽  
pp. 775-778
Author(s):  
Shang Liang Chen ◽  
Ying Han Hsiao ◽  
Yun Yao Chen ◽  
You Chen Lin

This study proposes an innovative cloud-based multi-tenant remote monitoring platform architecture in which an injection machine manufacturer acts as the cloud data center. The study was designed to develop, together with machine manufacturers, components such as a machine connection mechanism and a monitoring module. Under this architecture, machine manufacturers can provide virtualization-based remote monitoring systems that allow manufacturing buyers to rapidly develop custom monitoring software. All data storage devices, such as servers, are provided by the machine manufacturer, and clients can effectively manage injection machine data by simply renting virtual machine space.


2014 ◽  
Vol 556-562 ◽  
pp. 5395-5399
Author(s):  
Jian Hong Zhang ◽  
Wen Jing Tang

Data integrity is one of the biggest concerns with cloud data storage for cloud users. Moreover, cloud users' constrained computing capabilities make the task of data integrity auditing expensive and even formidable. Recently, a proof-of-retrievability scheme proposed by Yuan et al. addressed this issue, and a security proof of the scheme was provided. Unfortunately, in this work we show that the scheme is insecure. Namely, a cloud server that maliciously modifies the data file can pass the verification, and the client who executes the cloud storage auditing can recover the whole data file through the interactive process. Furthermore, we show that the protocol is vulnerable to an efficient active attack: an active attacker is able to arbitrarily modify the cloud data without being detected by the auditor during the auditing process. After presenting the corresponding attacks on Yuan et al.'s scheme, we suggest a solution to fix the problems.
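For readers unfamiliar with the setting, the sketch below shows the generic challenge-response auditing interaction this line of work studies (it is not Yuan et al.'s actual scheme): the client keeps per-block HMAC tags and spot-checks randomly chosen blocks, so a server that modifies a checked block fails the audit.

```python
import hashlib
import hmac
import os

# Hedged sketch of generic spot-check auditing, NOT the scheme under
# attack in the abstract. Block count and sample indices are illustrative.

KEY = os.urandom(16)  # client-side secret auditing key

def tag(index: int, block: bytes) -> bytes:
    """Per-block authenticator binding the block to its position."""
    return hmac.new(KEY, index.to_bytes(8, "big") + block,
                    hashlib.sha256).digest()

def audit(server_blocks, tags, sample):
    """Client challenges the indices in `sample`; the server returns the
    stored blocks, which are checked against the client's tags."""
    return all(hmac.compare_digest(tag(i, server_blocks[i]), tags[i])
               for i in sample)

blocks = [os.urandom(32) for _ in range(8)]
tags = [tag(i, b) for i, b in enumerate(blocks)]
print(audit(blocks, tags, [1, 4, 6]))   # honest server passes

blocks[4] = os.urandom(32)              # malicious modification
print(audit(blocks, tags, [1, 4, 6]))   # modification is detected
```

The attacks described in the abstract show how a concrete scheme can fail to provide even this basic guarantee.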


2019 ◽  
Vol 8 (2) ◽  
pp. 5266-5270

With the wide adoption of large-scale Internet applications and big data, the cloud has become the ideal environment to meet the ever-growing storage demand, owing to its virtually unlimited capacity, high availability, and fast access time. In this setting, data replication has been touted as the standard solution to improve data availability and reduce access time. However, replica placement schemes generally need to create and move large data replicas over time, both between and within data centers, incurring a significant overhead in terms of network load and availability. Cloud Storage Providers (CSPs) offer geographically distributed data stores providing several storage classes at different prices. A major problem facing cloud users is how to exploit these storage classes to serve an application with a time-varying workload on its objects at minimal cost. This cost consists of a residential cost (i.e., storage, Put, and Get costs) and a potential migration cost (i.e., network cost). To address this problem, we first propose an optimal offline algorithm that uses dynamic and linear programming techniques under the assumption that the exact future workload on objects is known in advance. We also propose a lightweight heuristic algorithm, inspired by an approximation algorithm for the Set Covering Problem, which makes no assumption about the object workload.
This algorithm jointly determines object replica locations, replica migration times, and the redirection of Get (read) requests to object replicas so that the monetary cost of data storage management is optimized while the user-perceived latency constraint is satisfied. We demonstrate the effectiveness of the proposed lightweight algorithm in terms of cost savings through extensive simulations using a cloud simulator and traces from Twitter.
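The two-part cost model (a residential cost per epoch plus a migration fee when an object changes storage class) lends itself to a small dynamic program over epochs. The sketch below is a minimal illustration under assumed prices, workloads, and a flat migration fee; the paper's exact cost model and latency constraint are not reproduced.

```python
# Hedged sketch: per-object dynamic program choosing a storage class each
# epoch to minimize storage + Get + migration cost. All prices, workloads,
# and the flat migration fee are illustrative assumptions.

def min_cost_plan(classes, epochs, migrate_fee=0.1):
    """classes: {name: (storage_price_per_gb, get_price_per_request)}.
    epochs: list of (size_gb, num_gets) per epoch.
    Returns (total_cost, list of chosen classes per epoch)."""
    # dp[c] = (best cost so far, plan) with the object ending in class c
    dp = {c: (0.0, []) for c in classes}
    for size, gets in epochs:
        new_dp = {}
        for c, (s_price, g_price) in classes.items():
            epoch_cost = size * s_price + gets * g_price
            prev_cost, prev_plan = min(
                ((dp[p][0] + (0.0 if p == c else migrate_fee), dp[p][1])
                 for p in dp),
                key=lambda t: t[0],
            )
            new_dp[c] = (prev_cost + epoch_cost, prev_plan + [c])
        dp = new_dp
    return min(dp.values(), key=lambda t: t[0])

classes = {"standard": (0.023, 0.0004), "cold": (0.004, 0.01)}
# A hot epoch (many Gets) followed by a cold epoch (few Gets):
cost, plan = min_cost_plan(classes, [(10, 1000), (10, 5)])
print(plan)  # → ['standard', 'cold']
```

With the assumed prices, the object stays in the frequently read class while hot and migrates to the cheap class once reads drop off, because the per-epoch savings exceed the migration fee.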


Cloud storage service is one of the vital functions of cloud computing that helps cloud users outsource a massive volume of data without upgrading their devices. However, cloud data storage offered by Cloud Service Providers (CSPs) faces data redundancy problems. The data de-duplication technique aims to eliminate redundant data segments and keep a single instance of a data set, even if similar data sets are owned by any number of users. Since data blocks are distributed among multiple individual servers, a user needs to download each block of a file before reconstructing the file, which reduces system efficiency. We propose a server-level data recovery module in the cloud storage system to improve file access efficiency and reduce network bandwidth utilization time. In the proposed method, erasure coding is used to store blocks in distributed cloud storage, and MD5 (Message Digest 5) is used for data integrity. Executing the recovery algorithm helps the user fetch the file directly without downloading each block from the cloud servers. The proposed scheme improves the time efficiency of the system and gives quick access to the stored data, thus consuming less network bandwidth and reducing user processing overhead while a data file is downloading.
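As a rough illustration of the encode-and-recover idea, the sketch below uses a single-parity (k+1, k) code as a simple stand-in for the erasure code, with MD5 as the integrity check the abstract names. Block size, the value of k, and the single-failure assumption are all illustrative.

```python
import hashlib
from functools import reduce

# Hedged sketch: single-parity erasure coding (tolerates one lost block)
# plus an MD5 digest for end-to-end integrity, standing in for the
# paper's actual coding and recovery module.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 4):
    """Split into k zero-padded blocks plus one XOR parity block."""
    size = -(-len(data) // k)  # ceiling division
    blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(k)]
    parity = reduce(xor, blocks)
    return blocks + [parity], hashlib.md5(data).hexdigest()

def recover(blocks, lost: int, orig_len: int, digest: str) -> bytes:
    """Rebuild the lost block from survivors, then verify with MD5."""
    survivors = [b for i, b in enumerate(blocks)
                 if i != lost and b is not None]
    rebuilt = reduce(xor, survivors)
    parts = blocks[:-1]          # the k data blocks
    if lost < len(parts):
        parts[lost] = rebuilt
    data = b"".join(parts)[:orig_len]
    assert hashlib.md5(data).hexdigest() == digest, "integrity check failed"
    return data

payload = b"cloud storage demo payload"
blocks, digest = encode(payload, k=4)
blocks[2] = None                               # simulate one failed server
print(recover(blocks, 2, len(payload), digest))  # → b'cloud storage demo payload'
```

Production systems use Reed–Solomon-style codes that tolerate several simultaneous failures; the single-parity code here only shows the shape of the pipeline.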


2013 ◽  
Vol 756-759 ◽  
pp. 1275-1279
Author(s):  
Lin Na Huang ◽  
Feng Hua Liu

Cloud storage of high performance is a basic condition for cloud computing. This article introduces the concept and advantages of cloud storage, discusses the infrastructure of a cloud storage system as well as the architecture of cloud data storage, and examines the design details of the Distributed File System within cloud data storage. It also puts forward different development strategies for enterprises according to the different roles they play during the development of cloud computing.


2013 ◽  
Vol 717 ◽  
pp. 677-687
Author(s):  
Sultan Ullah ◽  
Xue Feng Zheng ◽  
Feng Zhou

For decades, researchers have tried to provide computing as a utility, in which consumers enjoy on-demand applications and resources drawn from a pool of shared computing resources. This dream has come true in the shape of cloud computing, and data owners now outsource their data to cloud data centers. Outsourcing not only relieves the owner of the burden of local storage and maintenance but also allows the data to be accessed anywhere, anytime, on demand. However, it removes the owner's control over the physical computing resources and creates concern among customers regarding the confidentiality and integrity of their data. The security issues can be handled in numerous ways, e.g., efficient access control and authorization mechanisms. The reliability or accuracy of data is a further problem, in which unauthorized changes are made to the data without the permission or knowledge of the owner; preventing this is an important component of providing efficient and secure cloud data storage. Therefore, to facilitate the swift adoption of data storage services and restore safety measures for outsourced data, we propose a set of operational algorithms to resolve the challenge of secure data storage and make the cloud storage service a reality.


Author(s):  
D. Priyadarshini et al.

Multiple corporations and individuals frequently launch their data into the cloud environment. With the huge growth of data mining and the cloud storage paradigm, doing so without checking protection policies and procedures can pose a great risk to their sector. Data backup in cloud storage is problematic not only for the cloud user but also for the Cloud Service Provider (CSP). Unencrypted handling of confidential data is likely to make access easier for unauthorized individuals, and also for the CSP. Standard encryption algorithms require more computing primitives, space, and storage costs, so it is of utmost importance to secure cloud data under limited computation and storage capacity. To date, different methods and frameworks have been created to maintain a degree of protection that meets the requirements of modern life. Among those systems, Intrusion Detection Systems (IDS) aim to find suspicious actions or events that threaten a system's proper activity. Today, because of the continual rise in network traffic, IDSs face problems in detecting attacks in broad streams of connections. A Two-Stage Ensemble Classifier for IDS (TSE-IDS) has previously been implemented; when detecting patterns in big data, irrelevant data characteristics tend to decrease both the speed and the accuracy of attack detection, and the computing resources required for training and testing the IDS models increase. In this research paper, we put forward a novel strategy for the above issues: to improve server load balancing effectively with protected user allocation to a server, and thereby minimize resource complexity on the cloud data storage device, by integrating the Authentication-based User-Allocation with Merkle-based Hashing-Tree (AUA-MHT) technique. Through this, authentication attacks and flood attacks are detected, and unauthorized users are restricted.
By this proposed model, the cloud server verifies, by resolving such attacks, that only approved users are accessing the cloud data. The proposed framework AUA-MHT performs better than the existing model TSE-IDS for parameters such as User Allocation Rate, Intrusion Detection Rate, and Space Complexity.
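The Merkle-tree half of the idea can be sketched as follows: the server keeps a Merkle root over registered users' credential hashes and checks a membership proof at allocation time, so an unregistered user cannot produce a path that hashes to the root. The leaf contents and proof format below are illustrative, not the paper's exact construction.

```python
import hashlib

# Hedged sketch of Merkle-tree membership checking for user allocation.
# Leaf format ("user:credential") and the proof layout are assumptions.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the tree, leaf hashes first, root last."""
    level = [h(l) for l in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]        # duplicate odd last node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, leaf-is-right)
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

users = [b"alice:pw1", b"bob:pw2", b"carol:pw3"]
levels = build_tree(users)
root = levels[-1][0]
print(verify(b"bob:pw2", proof(levels, 1), root))    # → True
print(verify(b"mallory:x", proof(levels, 1), root))  # → False
```

Only the root needs to be stored server-side, so membership checks cost O(log n) hashes per allocation.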


2021 ◽  
Vol 13 (11) ◽  
pp. 279
Author(s):  
Siti Dhalila Mohd Satar ◽  
Masnida Hussin ◽  
Zurina Mohd Hanapi ◽  
Mohamad Afendee Mohamed

Managing and controlling access to the tremendous data in Cloud storage is very challenging. Due to the various entities engaged in the Cloud environment, there is a high possibility of data tampering. Cloud encryption is employed to control data access while securing Cloud data: the encrypted data are sent to Cloud storage with an access policy defined by the data owner, and only authorized users can decrypt them. However, the access policy of the encrypted data is in readable form, which results in privacy leakage. To address this issue, we propose reinforced hiding of the access policy over Cloud storage by enhancing the Ciphertext Policy Attribute-based Encryption (CP-ABE) algorithm. Besides the encryption process, the reinforced CP-ABE uses logical connective operations to hide the attribute values of data in the access policy. These attributes are converted into scrambled data alongside the ciphertext, which provides better unreadability. In effect, a two-level concealment tactic is employed to secure data from any unauthorized access during a data transaction. Experimental results revealed that our reinforced CP-ABE had a low computational overhead and consumed low storage costs. Furthermore, a case study on security analysis shows that our approach is secure against passive attacks such as traffic analysis.
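To make the policy-hiding step concrete, the sketch below replaces readable attribute values in a policy tree with keyed hashes, so the stored policy no longer leaks the attributes while still being checkable. This illustrates only the hiding idea, not CP-ABE itself; the key, attribute names, and policy format are assumptions.

```python
import hashlib
import hmac

# Hedged sketch: hiding attribute values in an access policy with keyed
# hashes. The full CP-ABE encryption is out of scope; POLICY_KEY stands
# in for a system setup parameter.

POLICY_KEY = b"system-wide policy key"

def hide(attr: str) -> str:
    return hmac.new(POLICY_KEY, attr.encode(), hashlib.sha256).hexdigest()

def hide_policy(policy):
    """Policy is a nested (op, operands) tree; hide only the leaf values."""
    op, operands = policy
    return (op, [hide_policy(p) if isinstance(p, tuple) else hide(p)
                 for p in operands])

def satisfies(hidden_policy, user_attrs):
    """Evaluate a hidden policy against a user's (plaintext) attributes."""
    hidden = {hide(a) for a in user_attrs}
    def ev(node):
        op, operands = node
        results = [ev(p) if isinstance(p, tuple) else (p in hidden)
                   for p in operands]
        return all(results) if op == "AND" else any(results)
    return ev(hidden_policy)

plain = ("AND", ["role:doctor", ("OR", ["dept:cardio", "dept:er"])])
hidden_policy = hide_policy(plain)
print(satisfies(hidden_policy, {"role:doctor", "dept:er"}))  # → True
print(satisfies(hidden_policy, {"role:nurse", "dept:er"}))   # → False
```

An observer of the stored policy sees only opaque digests and the AND/OR structure; recovering the attribute values would require the policy key or a dictionary attack, which a real scheme would further harden.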


2014 ◽  
Vol 626 ◽  
pp. 26-31 ◽  
Author(s):  
Jose G. Sahaya Stalin ◽  
Christopher C. Seldev

Cloud data centers should be flexible, and the data should remain available at all times. The replication method is used to achieve high availability and durability in a cloud data center, so that messages can be recovered from the cloud databases after any failure. The concern with this replication technology is that the replica size is equal to the size of the original data object, whereas error detection schemes reduce the amount of distributed cloud storage required. The scope of this paper is to store data efficiently in cloud data centers, unlike previous schemes that used erasure codes such as Reed-Solomon codes only with a view to storing data in data centers. This paper proposes to encrypt the message using DES and encode it using a Reed-Solomon code before storing it. The storing time of the Reed-Solomon code is convincingly good when compared with the Tornado code.

