Authentication Based Cloud Storage and Secure Data Forwarding

2013 ◽  
Vol 4 (1) ◽  
pp. 106-110
Author(s):  
Rajasekaran S ◽  
Kalifulla. Y ◽  
Murugesan. S ◽  
Ezhilvendan. M ◽  
Gunasekaran. J

A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system raises serious concerns over data confidentiality. General encryption schemes protect data confidentiality but also limit the functionality of the storage system, because only a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code to formulate a secure distributed storage system. The distributed storage system not only supports secure and robust data storage and retrieval but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow a more flexible adjustment between the number of storage servers and robustness.
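
The core building block here is a decentralized erasure code: each storage server independently draws random coefficients and stores one random linear combination of the message blocks, and any k such shares suffice to decode. The sketch below illustrates that idea over a toy prime field; the field size, block count, and function names are illustrative assumptions, not the paper's actual parameters, and the threshold proxy re-encryption layer is omitted.

```python
# Minimal sketch of a decentralized erasure code over GF(p): each server
# independently computes one codeword symbol as a random linear combination
# of the K message blocks. Parameters and names are illustrative only.
import random

P = 2_147_483_647           # prime modulus of the toy field GF(p)
K = 3                        # number of message blocks
N = 6                        # number of storage servers

def encode_at_server(blocks):
    """Each server draws its own random coefficients (no coordination)."""
    coeffs = [random.randrange(P) for _ in range(K)]
    symbol = sum(c * b for c, b in zip(coeffs, blocks)) % P
    return coeffs, symbol

def decode(shares):
    """Recover the blocks from K shares by Gaussian elimination mod P.
    With coefficients drawn from a large field, the K x K coefficient
    matrix is invertible with overwhelming probability."""
    rows = [list(c) + [s] for c, s in shares[:K]]
    for i in range(K):
        pivot = next(r for r in range(i, K) if rows[r][i] % P)
        rows[i], rows[pivot] = rows[pivot], rows[i]
        inv = pow(rows[i][i], P - 2, P)            # pivot inverse mod P
        rows[i] = [x * inv % P for x in rows[i]]
        for r in range(K):
            if r != i and rows[r][i]:
                f = rows[r][i]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[i])]
    return [rows[i][K] for i in range(K)]

blocks = [11, 22, 33]
shares = [encode_at_server(blocks) for _ in range(N)]
assert decode(random.sample(shares, K)) == blocks   # any K shares decode
```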

2013 ◽  
Vol 10 (8) ◽  
pp. 1905-1912 ◽  
Author(s):  
P Radha Krishna Reddy ◽  
S Sivaramaiah ◽  
U Sesadri

A cloud storage system is a networked collection of online storage servers, hosted by third parties, that provides long-term storage services over the Internet. Storing data in a third party's cloud system raises serious problems of data confidentiality and authorization. Normal encryption schemes may protect data confidentiality from unauthorized users, but they are limited in functionality because only a few operations are supported over encrypted data. Constructing a secure storage system with multiple functionalities is a challenging task when the storage system is distributed. In this paper we develop a secure distributed storage system by combining a unidirectional and multiuse identity-based proxy re-encryption (UMIB-PRE) technique with a decentralized erasure code. The main aim of this UMIB proxy re-encryption is to support encoding, storing, and forwarding operations over encrypted data. Our method fully supports encryption, decryption, encoding, and forwarding. We also suggest possible parameters for the key servers and storage servers; these parameters give robustness to the storage servers.
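
To make the proxy re-encryption idea concrete, the toy sketch below uses a classic ElGamal-style scheme in the spirit of Blaze, Bleumer, and Strauss: a proxy holding a re-encryption key transforms Alice's ciphertext into one decryptable by Bob without ever seeing the plaintext. It is not the identity-based UMIB-PRE construction of the paper, and the tiny group parameters are insecure placeholders.

```python
# Toy ElGamal-style proxy re-encryption (bidirectional, BBS-style sketch).
# NOT the paper's UMIB-PRE scheme; group parameters are insecure placeholders.
import random

P, Q, G = 2039, 1019, 4        # safe prime p = 2q + 1, g generates the order-q subgroup

def keygen():
    sk = random.randrange(1, Q)
    return sk, pow(G, sk, P)                        # secret a, public g^a

def encrypt(pk, m):
    k = random.randrange(1, Q)
    return (m * pow(G, k, P)) % P, pow(pk, k, P)    # (m * g^k, (g^a)^k)

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, Q), P)                 # recover g^k from g^{ak}
    return (c1 * pow(gk, P - 2, P)) % P

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, Q)) % Q            # rk = b / a mod q (needs both keys)

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, P)                       # (g^{ak})^{b/a} = g^{bk}

a, pk_a = keygen()
b, pk_b = keygen()
m = 1234                                            # message encoded as an element of Z_p*
ct_a = encrypt(pk_a, m)
ct_b = reencrypt(rekey(a, b), ct_a)                 # proxy never sees m
assert decrypt(a, ct_a) == m and decrypt(b, ct_b) == m
```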


2016 ◽  
Vol 4 (1) ◽  
Author(s):  
Agus Maman Abadi ◽  
Musthofa Musthofa ◽  
Emut Emut

The increasing need for techniques to store big data presents a new challenge. One way to address this challenge is the use of distributed storage systems. One strategy implemented in distributed data storage systems is the use of erasure codes applied to network coding. The codes used in this technique are based on an algebraic structure called a vector space. Some studies have also been carried out to create codes based on other algebraic structures such as modules. In this study, we set up a code based on a generalization of the module, namely the semimodule, by utilizing the max and sum operations of max-plus algebra. The results of this study indicate that the max operation and the addition operation of max-plus algebra cannot be used to establish a semimodule code, but by modifying the operation "+" as "min", we obtain a code based on a semimodule.

Keywords: code, distributed storage systems, network coding, semimodule, max plus algebra
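
As a small illustration of the modified operations, the sketch below encodes a message vector with a made-up generator matrix over a (max, min) structure, where scalar multiplication is taken as min and summation of terms as max. It is only a toy example of such semiring-style encoding, not the paper's semimodule construction.

```python
# Encoding sketch over (max, min): a codeword symbol is
#   y_i = max_j min(G[i][j], x[j]).
# The generator matrix G and message x are made-up examples.
NEG_INF = float("-inf")

def maxmin_encode(G, x):
    """Encode message vector x with generator matrix G over (max, min)."""
    return [max(min(gij, xj) for gij, xj in zip(row, x)) for row in G]

G = [
    [5, NEG_INF, 2],       # each row produces one codeword symbol
    [NEG_INF, 7, 1],
    [3, 4, NEG_INF],
    [6, NEG_INF, NEG_INF],
]
x = [4, 6, 9]              # message vector
print(maxmin_encode(G, x)) # -> [4, 6, 4, 4]
```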


Author(s):  
Harihara Subramanyam G. ◽  
Balaji Janakiram ◽  
M. Girish Chandra ◽  
Aravind K.G. ◽  
Swanand Kadhe ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Miao Ye ◽  
Ruoyu Wei ◽  
Wei Guo ◽  
Qiuxiang Jiang ◽  
Hongbing Qiu ◽  
...  

As a storage method for a distributed storage system, an erasure code can save storage space and repair the data of failed nodes. However, most studies that discuss the repair of failed nodes under the erasure-code model focus only on the case where the bandwidth of heterogeneous links restricts the repair rate, while ignoring the heterogeneity of the storage nodes, the cost of repair traffic during the repair process, and the influence of secondary node failures on the repair process. An optimal repair strategy based on the minimum storage regenerating (MSR) code and a hybrid genetic algorithm is proposed for single-node fault scenarios to solve the above problems. In this work, the single-node data repair problem is modeled as a constrained optimal Steiner tree problem that considers heterogeneous link bandwidth and heterogeneous node processing capacity, with repair traffic and repair delay as the optimization objectives. A hybrid genetic algorithm is then designed to solve the problem. The experimental results show that, at the same scales used in the MSR code cases, our approach has good robustness: its repair delay decreases by 10% and 55% compared with the conventional tree repair topology and the star repair topology, respectively, while its repair traffic increases by 10% compared with the star topology and decreases by 40% compared with the conventional tree repair topology.
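
The following sketch conveys only the evolutionary part of the idea: a plain genetic algorithm that picks d helper nodes for a star-shaped single-node repair under heterogeneous link bandwidths and node processing rates, with repair delay as the fitness. All parameters, the fitness model, and the topology are illustrative assumptions; the paper's hybrid genetic algorithm additionally constructs a constrained optimal Steiner repair tree and weighs repair traffic as well.

```python
# Toy genetic algorithm for choosing D helper nodes in a single-node repair
# with heterogeneous links and nodes. Everything here is a placeholder model.
import random

random.seed(1)
N, D, FRAGMENT_MB = 10, 4, 64
bandwidth = [random.uniform(10, 100) for _ in range(N)]   # MB/s link to newcomer
proc_rate = [random.uniform(50, 200) for _ in range(N)]   # MB/s encode rate at helper

def repair_delay(helpers):
    """Delay of a star repair: the slowest helper dominates."""
    return max(FRAGMENT_MB / proc_rate[h] + FRAGMENT_MB / bandwidth[h] for h in helpers)

def random_individual():
    return tuple(sorted(random.sample(range(N), D)))

def crossover(p1, p2):
    pool = list(set(p1) | set(p2))                        # mix the parents' helpers
    return tuple(sorted(random.sample(pool, D)))

def mutate(ind):
    out = set(ind)
    out.discard(random.choice(ind))                       # swap one helper at random
    out.add(random.choice([n for n in range(N) if n not in out]))
    return tuple(sorted(out))

population = [random_individual() for _ in range(30)]
for _ in range(50):                                        # generations
    population.sort(key=repair_delay)
    parents = population[:10]                              # elitist selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]
    population = parents + children

best = min(population, key=repair_delay)
print(best, round(repair_delay(best), 3))
```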


Author(s):  
G.CHINNA PULLAIAH ◽  
DILIP VENKATA KUMAR VENGALA

Cloud Computing has been envisioned as the next-generation architecture of the IT enterprise. It moves application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges that have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third-party auditor (TPA), using threshold proxy re-encryption on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of the TPA eliminates the involvement of the client in auditing whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The distributed storage system not only supports secure and robust data storage and retrieval but also lets a user forward his data in the storage servers to another user without retrieving the data back, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack support for either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions of prior works with fully dynamic data updates, and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. A decentralized erasure code is an erasure code that independently computes each codeword symbol for a message, and the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows that the proposed schemes are highly efficient and provably secure.
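
As a simplified illustration of remote integrity spot-checking, the sketch below lets an auditor keep only a Merkle root digest and challenge the storage server for a random block together with its authentication path. This is a generic, hedged example of the auditing idea, not the TPA protocol for dynamic data described above.

```python
# Integrity spot-check with a Merkle hash tree: the auditor stores only the
# root digest and verifies a challenged block against its authentication path.
# A simplified illustration, not the paper's public-auditing protocol.
import hashlib, os, random

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    level = [h(b) for b in blocks]                 # leaf hashes
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree                                    # tree[-1][0] is the root

def prove(tree, idx):
    path = []
    for level in tree[:-1]:                        # collect sibling hashes upward
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, block, idx, path):
    digest = h(block)
    for sib in path:
        digest = h(digest + sib) if idx % 2 == 0 else h(sib + digest)
        idx //= 2
    return digest == root

blocks = [os.urandom(32) for _ in range(8)]        # 8 data blocks (power of two)
tree = build_tree(blocks)
root = tree[-1][0]                                 # all that the auditor keeps

challenge = random.randrange(len(blocks))          # auditor picks a random block
proof = prove(tree, challenge)                     # storage server answers
assert verify(root, blocks[challenge], challenge, proof)
```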


Computers ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 23
Author(s):  
Laskhmi Siva Rama Krishna Talluri ◽  
Ragunathan Thirumalaisamy ◽  
Ramgopal Kota ◽  
Ram Prasad Reddy Sadi ◽  
Ujjwal KC ◽  
...  

In cloud storage systems, users must be able to shut down the application when it is not in use and restart it from the last consistent state when required. BlobSeer is a data storage application, specially designed for distributed systems, that was built as an alternative to the popular open-source storage system, the Hadoop Distributed File System (HDFS). In a cloud model, all the components need to stop and restart from a consistent state when the user requires it. One of the limitations of the BlobSeer DFS is the possibility of data loss when the system restarts. As such, it is important to provide a consistent start and stop state to BlobSeer components when they are used in a cloud environment, to prevent any data loss. In this paper, we investigate the possibility of BlobSeer providing a consistent-state distributed data storage system through the integration of checkpoint-restart functionality. To demonstrate the availability of a consistent state, we set up a cluster with multiple machines and deploy BlobSeer entities with checkpointing functionality on various machines. We use uncoordinated checkpoint algorithms for their benefits over the alternatives while integrating the functionality into various BlobSeer components, such as the Version Manager (VM) and the Data Provider. The experimental results show that with the integration of the checkpointing functionality, a consistent state can be ensured for a distributed storage system even when the system restarts, preventing any possible data loss after the system has encountered various errors and failures.
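
A minimal sketch of the uncoordinated checkpointing idea follows, assuming a hypothetical component that persists its own state independently of its peers and restores it on restart; it is not BlobSeer's actual Version Manager or Data Provider implementation.

```python
# Toy uncoordinated checkpointing: each component saves its own state to disk
# independently and restores it after a restart. Names and file layout are
# hypothetical, not taken from BlobSeer.
import json, os

class CheckpointedComponent:
    def __init__(self, name, checkpoint_dir="checkpoints"):
        self.name = name
        self.path = os.path.join(checkpoint_dir, f"{name}.json")
        os.makedirs(checkpoint_dir, exist_ok=True)
        self.state = self._restore()

    def _restore(self):
        if os.path.exists(self.path):              # resume from the last checkpoint
            with open(self.path) as f:
                return json.load(f)
        return {"version": 0, "log": []}           # fresh start

    def checkpoint(self):
        tmp = self.path + ".tmp"                   # write-then-rename for atomicity
        with open(tmp, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self.path)

    def publish(self, entry):
        self.state["version"] += 1
        self.state["log"].append(entry)
        self.checkpoint()                          # checkpoint independently of peers

vm = CheckpointedComponent("version_manager")
vm.publish("blob-1:v%d" % (vm.state["version"] + 1))
print(vm.state)                                    # state survives a process restart
```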

