Providing Consistent State to Distributed Storage System

Computers ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 23
Author(s):  
Laskhmi Siva Rama Krishna Talluri ◽  
Ragunathan Thirumalaisamy ◽  
Ramgopal Kota ◽  
Ram Prasad Reddy Sadi ◽  
Ujjwal KC ◽  
...  

In cloud storage systems, users must be able to shut down the application when not in use and restart it from the last consistent state when required. BlobSeer is a data storage application, specially designed for distributed systems, that was built as an alternative to the existing popular open-source storage system, the Hadoop Distributed File System (HDFS). In a cloud model, all the components need to stop and restart from a consistent state when the user requires it. One of the limitations of BlobSeer DFS is the possibility of data loss when the system restarts. As such, it is important to provide a consistent start and stop state to BlobSeer components when used in a cloud environment to prevent any data loss. In this paper, we investigate the possibility of BlobSeer providing a consistent state to a distributed data storage system through the integration of checkpointing restart functionality. To demonstrate the availability of a consistent state, we set up a cluster with multiple machines and deploy BlobSeer entities with checkpointing functionality on various machines. We consider uncoordinated checkpoint algorithms for their benefits over the alternatives while integrating the functionality into various BlobSeer components such as the Version Manager (VM) and the Data Provider. The experimental results show that, with the integration of the checkpointing functionality, a consistent state can be ensured for a distributed storage system even when the system restarts, preventing any possible data loss after the system has encountered various errors and failures.
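
The key mechanism is uncoordinated checkpointing: each component (e.g., the Version Manager) saves and restores its own state independently, with no global snapshot protocol. A minimal Python sketch of the idea; the class name, state layout, and file format are illustrative assumptions, not BlobSeer's actual interfaces:

```python
import json
import os
import tempfile

class CheckpointedComponent:
    """Sketch of a component that checkpoints its state independently
    of all other components (uncoordinated checkpointing)."""

    def __init__(self, name, ckpt_dir="/tmp/ckpt"):
        self.name = name
        self.ckpt_dir = ckpt_dir
        os.makedirs(ckpt_dir, exist_ok=True)
        # On restart, resume from the last checkpoint if one exists.
        self.state = self._restore() or {"version": 0, "log": []}

    def _ckpt_path(self):
        return os.path.join(self.ckpt_dir, f"{self.name}.json")

    def checkpoint(self):
        # Write to a temporary file, then rename atomically, so a crash
        # mid-write never leaves a corrupt checkpoint behind.
        fd, tmp = tempfile.mkstemp(dir=self.ckpt_dir)
        with os.fdopen(fd, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self._ckpt_path())

    def _restore(self):
        try:
            with open(self._ckpt_path()) as f:
                return json.load(f)
        except FileNotFoundError:
            return None

    def update(self, entry):
        self.state["version"] += 1
        self.state["log"].append(entry)
        self.checkpoint()  # each component checkpoints on its own schedule
```

The atomic rename keeps the latest checkpoint intact even if the process dies mid-write, which is what lets a restarted component resume from its last consistent state.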

2016 ◽  
Vol 4 (1) ◽  
Author(s):  
Agus Maman Abadi ◽  
Musthofa Musthofa ◽  
Emut Emut

The increasing need for techniques to store big data presents a new challenge. One way to address this challenge is the use of distributed storage systems. One strategy implemented in distributed data storage systems is the use of erasure codes applied to network coding. The code used in this technique is based on the algebraic structure called a vector space. Some studies have also been carried out to create codes based on other algebraic structures such as modules. In this study, we set up a code based on a semimodule, an algebraic structure that generalizes a module, by utilizing the max and sum operations of max-plus algebra. The results of this study indicate that the max operation and the addition operation of max-plus algebra cannot be used to establish a semimodule code, but by modifying the operation "+" to "min", we obtain a code based on a semimodule. Keywords: code, distributed storage systems, network coding, semimodule, max-plus algebra
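
For reference, the two max-plus operations the abstract refers to are the standard semiring operations; a brief statement in LaTeX follows. Reading the modified "+" as the semiring multiplication (ordinary addition) is an assumption based on the abstract's wording; the paper's exact semimodule construction is not reproduced here:

```latex
% Standard max-plus semiring over R_max = R ∪ {-∞}:
\[
  a \oplus b = \max(a, b), \qquad a \otimes b = a + b,
  \qquad a, b \in \mathbb{R}_{\max} = \mathbb{R} \cup \{-\infty\}.
\]
% Replacing ordinary addition with min yields the (max, min)
% semiring, under which the abstract reports that a semimodule
% code can be obtained:
\[
  a \otimes' b = \min(a, b).
\]
```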


2017 ◽  
Vol 5 (1) ◽  
pp. 60
Author(s):  
Agus Maman Abadi ◽  
Karyati Karyati ◽  
Musthofa Musthofa ◽  
Emut Emut

Abstract: The increasing need to store large amounts of data presents a new challenge. One way to address this challenge is to use a distributed data storage system. One of the strategies implemented in distributed data storage systems is the regenerating code technique. The code used in this technique is based on the algebraic structure of fields. Some studies have also been carried out to create codes based on another algebraic structure, namely the module. In this study, we assess the implementation of module-based codes in the regenerating code technique. The study shows that codes based on modules have properties that can potentially be used in the regenerating code technique. Keywords: distributed storage, regenerating code technique, module code
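
As background (a standard result in the literature, not a contribution of this paper), regenerating codes are governed by the cut-set bound of Dimakis et al., which ties per-node storage to repair bandwidth; any module-based construction would operate within this trade-off:

```latex
% An (n, k, d) regenerating code storing a file of size M, with
% per-node storage \alpha and repair bandwidth \beta downloaded
% from each of d helper nodes, must satisfy:
\[
  M \le \sum_{i=0}^{k-1} \min\bigl(\alpha,\ (d - i)\beta\bigr).
\]
```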


2011 ◽  
Vol 19 (1) ◽  
pp. 27-43
Author(s):  
Tevfik Kosar ◽  
Ismail Akturk ◽  
Mehmet Balman ◽  
Xinqi Wang

Modern collaborative science has placed an increasing burden on data management infrastructure to handle the increasingly large data archives generated. Besides functionality, reliability and availability are also key factors in delivering a data management system that can efficiently and effectively meet the challenges posed and compounded by the unbounded increase in the size of data generated by scientific applications. We have developed a reliable and efficient distributed data storage system, PetaShare, which spans multiple institutions across the state of Louisiana. At the back-end, PetaShare provides a unified name space and efficient data movement across geographically distributed storage sites. At the front-end, it provides light-weight clients that enable easy, transparent and scalable access. In PetaShare, we have designed and implemented an asynchronously replicated multi-master metadata system for enhanced reliability and availability, and an advanced buffering system for improved data transfer performance. In this paper, we present the details of our design and implementation, show performance results, and describe our experience in developing a reliable and efficient distributed data management system for data-intensive science.
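
A defining design choice here is the asynchronously replicated multi-master metadata system: any site can accept a metadata write locally and propagate it to the other masters in the background. A minimal Python sketch of that pattern, assuming an in-memory store and last-writer-wins conflict handling (PetaShare's actual implementation details are not reproduced here):

```python
import queue
import threading

class MetadataMaster:
    """One master in an asynchronously replicated multi-master metadata
    service: writes commit locally first, then propagate to peers in
    the background, so a slow or unreachable peer never blocks clients."""

    def __init__(self, name):
        self.name = name
        self.store = {}                  # path -> metadata entry
        self.peers = []                  # other MetadataMaster instances
        self._outbox = queue.Queue()
        threading.Thread(target=self._replicate, daemon=True).start()

    def put(self, path, entry):
        self.store[path] = entry         # local commit: client returns here
        self._outbox.put((path, entry))  # replication happens asynchronously

    def apply_remote(self, path, entry):
        # A real system must resolve conflicting concurrent updates
        # (e.g. timestamps or version vectors); last-writer-wins here.
        self.store[path] = entry

    def _replicate(self):
        while True:
            path, entry = self._outbox.get()
            for peer in self.peers:
                peer.apply_remote(path, entry)

# Usage: two masters replicating to each other.
a, b = MetadataMaster("A"), MetadataMaster("B")
a.peers, b.peers = [b], [a]
a.put("/petashare/file1", {"size": 1024})  # returns immediately; B catches up
```

Committing locally before replicating keeps metadata operations fast and available when a peer site is slow or down; the cost is a window of temporary divergence between masters.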


2021 ◽  
Vol 251 ◽  
pp. 02035
Author(s):  
Adrian Eduard Negru ◽  
Latchezar Betev ◽  
Mihai Carabaș ◽  
Costin Grigoraș ◽  
Nicolae Țăpuş ◽  
...  

CERN uses the world’s largest scientific computing grid, WLCG, for distributed data storage and processing. Monitoring of the CPU and storage resources is an essential element in detecting operational issues in its systems, for example in the storage elements, and in ensuring their proper and efficient function. The processing of experiment data depends strongly on data access quality as well as data integrity, and both of these key parameters must be assured for the data lifetime. Given the substantial amount of data, O(200 PB), already collected by ALICE and kept at various storage elements around the globe, scanning every single data chunk would be a very expensive process, both in terms of computing resource usage and in terms of execution time. In this paper, we describe a distributed file crawler that addresses these natural limits: it periodically extracts and analyzes statistically significant samples of files from storage elements, evaluates the results, and feeds them into the existing monitoring solution, MonALISA.
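
The crawler's key idea is to verify a statistically significant random sample instead of all O(200 PB) of data. A Python sketch of that sampling-and-verification loop; the helper names (read_file, expected_md5) and the use of Cochran's sample-size formula are illustrative assumptions, not the paper's exact scheme:

```python
import hashlib
import math
import random

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite-population correction: how many
    files to sample from a storage element to estimate the corrupted
    fraction within the given margin of error at 95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def crawl(file_catalogue, read_file, expected_md5):
    """Verify a random, statistically significant sample of files and
    return the ones whose checksum does not match the catalogue."""
    picked = random.sample(file_catalogue, sample_size(len(file_catalogue)))
    bad = []
    for path in picked:
        digest = hashlib.md5(read_file(path)).hexdigest()
        if digest != expected_md5(path):
            bad.append(path)
    return bad  # would be reported to the monitoring system
```

Sampling bounds the cost per crawl cycle regardless of how large the storage element grows, at the price of estimating rather than exhaustively enumerating corrupted files.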


Author(s):  
Ismail Akturk ◽  
Xinqi Wang ◽  
Tevfik Kosar

The unbounded increase in the size of data generated by scientific applications necessitates collaboration and sharing among the nation’s education and research institutions. Simply purchasing high-capacity, high-performance storage systems and adding them to the existing infrastructure of the collaborating institutions does not solve the underlying and highly challenging data handling problem. Scientists are compelled to spend a great deal of time and energy on solving basic data-handling issues, such as the physical location of data, how to access it, and/or how to move it to visualization and/or compute resources for further analysis. This chapter presents the design and implementation of a reliable and efficient distributed data storage system, PetaShare, which spans multiple institutions across the state of Louisiana. At the back-end, PetaShare provides a unified name space and efficient data movement across geographically distributed storage sites. At the front-end, it provides light-weight clients that enable easy, transparent, and scalable access. In PetaShare, the authors have designed and implemented an asynchronously replicated multi-master metadata system for enhanced reliability and availability. The authors also present a high-level cross-domain metadata schema to provide a structured, systematic view of the multiple science domains supported by PetaShare.
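
A cross-domain schema of this kind typically pairs a small domain-independent core with per-domain extensions. A hypothetical illustration in Python; every field name and value below is an assumption for illustration, not the actual PetaShare schema:

```python
# Hypothetical cross-domain metadata record: a common core shared by
# all science domains, plus a domain-specific extension block.
record = {
    "core": {
        "logical_name": "/petashare/lsu/coastal/run42.nc",
        "size_bytes": 734003200,
        "checksum": "md5:9e107d9d372bb6826bd81d3542a419d6",
        "owner": "jdoe",
        "replicas": ["site-1", "site-3"],
    },
    "domain": {
        "science_domain": "coastal-modeling",
        "attributes": {"grid_resolution_m": 50, "run_date": "2009-08-30"},
    },
}
```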


Computers ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 142
Author(s):  
Obadah Hammoud ◽  
Ivan Tarkhanov ◽  
Artyom Kosmarski

This paper investigates the problem of distributed storage of electronic documents (both metadata and files) in decentralized blockchain-based b2b systems (DApps). We consider the need to reduce the cost of implementing such systems and the insufficiently explored issue of storing big data in DLT. An approach for building such systems is proposed which optimizes the size of the required storage (by using erasure coding) while providing secure data storage in the geographically distributed systems of a company or a consortium of companies. The novelty of this solution is that we are the first to combine enterprise DLT with distributed file storage in which the availability of files is controlled. The results of our experiment demonstrate that the speed of the described DApp is comparable to known b2c torrent projects, and justify the choice of Hyperledger Fabric and Ethereum Enterprise for its use. The obtained test results show that public blockchain networks are not suitable for creating such a b2b system. The proposed system solves the main challenges of distributed data storage by grouping data into clusters and managing them with a load balancer, while preventing data tampering using a blockchain network. The considered DApp storage methodology scales easily horizontally in terms of distributed file storage and can be deployed on cloud computing technologies, while minimizing the required storage space. We compare this approach with known methods of file storage in distributed systems, including central storage, torrents, IPFS, and Storj. The reliability of this approach is calculated and the result is compared to traditional solutions based on full backup.
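
The storage saving from erasure coding over full replication shows up already in the simplest single-parity scheme: k data chunks plus one XOR parity chunk tolerate the loss of any one chunk at 1/k overhead, where full replication costs 100% or more. A self-contained Python sketch of that idea only (the paper applies more general erasure coding):

```python
def encode(data: bytes, k: int):
    """Split data into k equal chunks plus one XOR parity chunk;
    the k + 1 pieces would be stored on different nodes."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks + [parity]

def recover(pieces, lost):
    """Rebuild the single missing piece by XORing all survivors
    (works because parity = c0 ^ c1 ^ ... ^ c_{k-1})."""
    survivors = [c for i, c in enumerate(pieces) if i != lost]
    out = survivors[0]
    for c in survivors[1:]:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

pieces = encode(b"example document payload", k=4)
assert recover(pieces, lost=2) == pieces[2]
```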


2014 ◽  
Vol 687-691 ◽  
pp. 2710-2713
Author(s):  
Jing Yang

With the rapid development of computer and network technology, the distributed storage and management of mass data has become widely accepted. However, obvious drawbacks remain: data storage structures and storage environments differ, and further problems such as data handling arise. This paper examines how to improve data storage performance in a distributed environment, analyzes current data storage technology and distributed storage performance, summarizes the requirements of distributed storage database design, and provides a theoretical basis for standardizing distributed data storage performance.


Author(s):  
Pooya Hejazi ◽  
Gianluigi Ferrari

Load balancing, energy efficiency, and fault tolerance are among the most important data dissemination issues in Wireless Sensor Networks (WSNs). To cope with these issues, two main approaches, namely Data-Centric Storage and Distributed Data Storage, have been proposed in the literature. Both approaches suffer from data loss due to memory and/or energy depletion in the storage nodes. Even though several techniques have been proposed to overcome these problems, the proposed solutions typically focus on one issue at a time. In this paper, we integrate Data-Centric Storage (DCS) features into Distributed Data Storage (DDS) mechanisms and present a novel approach, denoted Collaborative Memory and Energy Management (CoMEM), that addresses both problems and brings memory and energy efficiency to the handling of data loss in WSNs. We also propose analytical and simulation frameworks for performance evaluation. Our results show that the proposed method outperforms existing approaches in various WSN scenarios.


Author(s):  
Igor Boyarshin ◽  
Anna Doroshenko ◽  
Pavlo Rehida

The article describes a new method of improving the efficiency of systems that store and provide access to data shared by many users, by utilizing replication. Existing methods of load balancing in data storage systems are described, namely Round-Robin (RR) and Weighted Round-Robin (WRR). A new method of balancing requests among multiple data storage nodes is proposed, which adjusts to the intensity of the input request stream in real time while utilizing disk space efficiently.
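
As context for the baselines named above, classic Weighted Round-Robin dispatches to each node in proportion to a static weight; the article's contribution is to adapt the balancing to observed request intensity in real time. A minimal Python sketch of the static WRR baseline only (not the proposed method):

```python
import itertools

def weighted_round_robin(nodes):
    """Classic WRR: each node appears in the dispatch cycle in
    proportion to its weight, so heavier nodes get more requests."""
    cycle = [name for name, weight in nodes for _ in range(weight)]
    return itertools.cycle(cycle)

# node-a has weight 3, node-b has weight 1: a 3:1 request ratio.
dispatch = weighted_round_robin([("node-a", 3), ("node-b", 1)])
for _ in range(8):
    print(next(dispatch))  # node-a, node-a, node-a, node-b, node-a, ...
```

The weakness the article targets is visible here: the weights are fixed at setup time and cannot react when the request stream or node load changes.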

