Research on Distributed Storage Technology Based on Mass Data

2014 ◽  
Vol 687-691 ◽  
pp. 2710-2713
Author(s):  
Jing Yang

With the rapid development of computer and network technology, distributed storage and management of mass data have become widely accepted. However, this pattern has evident drawbacks: data storage structures and storage environments differ across sites, and data handling raises further problems. This paper examines how to improve data storage performance in a distributed environment, analyzes current data storage technology and the storage performance achievable in distributed settings, summarizes the requirements of distributed storage database design, and provides a theoretical basis for the standardization of distributed data storage performance.

Computers ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 142
Author(s):  
Obadah Hammoud ◽  
Ivan Tarkhanov ◽  
Artyom Kosmarski

This paper investigates the problem of distributed storage of electronic documents (both metadata and files) in decentralized blockchain-based b2b systems (DApps). It considers the need to reduce the cost of implementing such systems and the insufficient elaboration of the issue of storing big data in DLT. An approach for building such systems is proposed that optimizes the size of the required storage (by using erasure coding) while providing secure data storage in the geographically distributed systems of a company or a consortium of companies. The novelty of this solution is that we are the first to combine enterprise DLT with distributed file storage in which the availability of files is controlled. The results of our experiment demonstrate that the speed of the described DApp is comparable to known b2c torrent projects, and they justify the choice of Hyperledger Fabric and Ethereum Enterprise for its use. The test results also show that public blockchain networks are not suitable for creating such a b2b system. The proposed system solves the main challenges of distributed data storage by grouping data into clusters and managing them with a load balancer, while preventing data tampering using a blockchain network. The described DApp storage methodology scales horizontally in terms of distributed file storage and can be deployed on cloud computing technologies, while minimizing the required storage space. We compare this approach with known methods of file storage in distributed systems, including central storage, torrents, IPFS, and Storj. The reliability of this approach is calculated and compared to traditional solutions based on full backup.
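The storage saving that erasure coding offers over full replication can be illustrated with a deliberately minimal single-parity sketch (a hypothetical toy scheme for illustration, not the coding actually used in the paper): a file is split into k data chunks plus one XOR parity chunk, so any single lost chunk can be rebuilt from the k survivors, at k+1 units of storage instead of the 2k a full mirror would need.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal (zero-padded) chunks plus one XOR parity chunk."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]

def recover(blocks: list, lost: int) -> bytes:
    """Rebuild the block at index `lost` by XOR-ing all surviving blocks."""
    survivors = [b for i, b in enumerate(blocks) if i != lost and b is not None]
    out = survivors[0]
    for b in survivors[1:]:
        out = xor_bytes(out, b)
    return out
```

Real deployments use stronger codes (e.g. Reed-Solomon) that tolerate several simultaneous losses, but the trade-off is the same: parity instead of copies.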


2017 ◽  
Vol 5 (1) ◽  
pp. 60
Author(s):  
Agus Maman Abadi ◽  
Karyati Karyati ◽  
Musthofa Musthofa ◽  
Emut Emut

Abstract: The increasing need to store large amounts of data presents a new challenge. One way to address this challenge is to use a distributed data storage system. One strategy implemented in distributed data storage systems is the regenerating-code technique. The codes used in this technique are based on the algebraic structure of fields. Some studies have also been carried out to create codes based on another algebraic structure, namely the module. In this study, we assess the implementation of module-based codes in the regenerating-code technique. The study shows that module-based codes have properties that make them potentially usable in the regenerating-code technique. Keywords: distributed storage, regenerating code technique, module code


2011 ◽  
Vol 19 (1) ◽  
pp. 27-43
Author(s):  
Tevfik Kosar ◽  
Ismail Akturk ◽  
Mehmet Balman ◽  
Xinqi Wang

Modern collaborative science has placed an increasing burden on data management infrastructure to handle the increasingly large data archives generated. Besides functionality, reliability and availability are also key factors in delivering a data management system that can efficiently and effectively meet the challenges posed and compounded by the unbounded increase in the size of data generated by scientific applications. We have developed a reliable and efficient distributed data storage system, PetaShare, which spans multiple institutions across the state of Louisiana. At the back-end, PetaShare provides a unified name space and efficient data movement across geographically distributed storage sites. At the front-end, it provides light-weight clients that enable easy, transparent and scalable access. In PetaShare, we have designed and implemented an asynchronously replicated multi-master metadata system for enhanced reliability and availability, and an advanced buffering system for improved data transfer performance. In this paper, we present the details of our design and implementation, show performance results, and describe our experience in developing a reliable and efficient distributed data management system for data-intensive science.
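An asynchronously replicated multi-master metadata system of the kind described can be pictured in miniature as follows (a hypothetical Python toy, not PetaShare's actual implementation; it uses last-writer-wins timestamps, a simplification of any real conflict-resolution scheme): every master accepts writes locally, queues them, and pushes them to its peers later.

```python
import itertools

class MetadataMaster:
    """Toy multi-master metadata replica: writes are accepted locally and
    replicated asynchronously; last-writer-wins resolves conflicts."""
    _clock = itertools.count()   # stand-in for synchronized timestamps

    def __init__(self):
        self.store = {}          # key -> (timestamp, value)
        self.outbox = []         # updates not yet pushed to peers

    def write(self, key, value):
        ts = next(MetadataMaster._clock)
        self.store[key] = (ts, value)
        self.outbox.append((key, ts, value))

    def read(self, key):
        return self.store[key][1]

    def replicate_to(self, peer):
        """Asynchronous propagation step: flush pending updates to a peer,
        keeping whichever version of each key is newer."""
        for key, ts, value in self.outbox:
            if key not in peer.store or peer.store[key][0] < ts:
                peer.store[key] = (ts, value)
        self.outbox.clear()
```

Because every site can write, the name space stays available even when a peer is unreachable; the cost is that reads may briefly see stale metadata until replication catches up.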


2021 ◽  
Vol 251 ◽  
pp. 02035
Author(s):  
Adrian Eduard Negru ◽  
Latchezar Betev ◽  
Mihai Carabaș ◽  
Costin Grigoraș ◽  
Nicolae Țăpuş ◽  
...  

CERN uses the world’s largest scientific computing grid, WLCG, for distributed data storage and processing. Monitoring of the CPU and storage resources is an essential element in detecting operational issues in its systems, for example in the storage elements, and in ensuring their proper and efficient function. The processing of experiment data depends strongly on data access quality as well as data integrity, and both of these key parameters must be assured for the data lifetime. Given the substantial amount of data, O(200 PB), already collected by ALICE and kept at various storage elements around the globe, scanning every single data chunk would be a very expensive process, both in terms of computing resource usage and in terms of execution time. In this paper, we describe a distributed file crawler that addresses these natural limits by periodically extracting and analyzing statistically significant samples of files from storage elements, evaluating the results, and integrating with the existing monitoring solution, MonALISA.
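One standard way to choose a "statistically significant sample" of files is a classical sample-size calculation; the sketch below is a hypothetical illustration of that idea (not the crawler's actual algorithm), using Cochran's formula with a finite-population correction to decide how many files from a storage element's catalogue to verify.

```python
import math
import random

def sample_size(population: int, z: float = 1.96, margin: float = 0.05) -> int:
    """Cochran's formula at 95% confidence with worst-case p = 0.5,
    corrected for a finite population: the number of files to check so
    the measured error rate is within the given margin."""
    n0 = z * z * 0.25 / (margin * margin)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def crawl_sample(catalogue: list, check) -> float:
    """Verify a random, statistically significant sample of the catalogue
    and return the estimated fraction of bad files."""
    n = sample_size(len(catalogue))
    sample = random.sample(catalogue, n)
    bad = sum(1 for f in sample if not check(f))
    return bad / n
```

The payoff is visible in the numbers: for a catalogue of 100,000 files, a few hundred checks suffice, instead of scanning every chunk.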


Author(s):  
Ismail Akturk ◽  
Xinqi Wang ◽  
Tevfik Kosar

The unbounded increase in the size of data generated by scientific applications necessitates collaboration and sharing among the nation’s education and research institutions. Simply purchasing high-capacity, high-performance storage systems and adding them to the existing infrastructure of the collaborating institutions does not solve the underlying and highly challenging data handling problem. Scientists are compelled to spend a great deal of time and energy on solving basic data-handling issues, such as the physical location of data, how to access it, and/or how to move it to visualization and/or compute resources for further analysis. This chapter presents the design and implementation of a reliable and efficient distributed data storage system, PetaShare, which spans multiple institutions across the state of Louisiana. At the back-end, PetaShare provides a unified name space and efficient data movement across geographically distributed storage sites. At the front-end, it provides light-weight clients that enable easy, transparent, and scalable access. In PetaShare, the authors have designed and implemented an asynchronously replicated multi-master metadata system for enhanced reliability and availability. The authors also present a high-level cross-domain metadata schema to provide a structured, systematic view of the multiple science domains supported by PetaShare.


2019 ◽  
Vol 207 ◽  
pp. 08003
Author(s):  
Alexander Kryukov ◽  
Minh-Duc Nguyen

In this paper we present the architecture of a distributed data storage for astroparticle physics. The main advantage of the proposed architecture is the possibility to extract data at both the file and the event level for further processing and analysis. The storage also provides users with a special service that allows them to aggregate data from different storages into a single sample. This feature permits the application of multi-messenger methods for more sophisticated investigation of the data. Users can access the storage through both a Web interface and an Application Programming Interface (API).
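The aggregation service can be pictured with a minimal sketch (hypothetical names and API, not the paper's actual service): events matching a user-supplied selection are pulled from each storage and merged into one time-ordered sample, which is what multi-messenger analysis needs.

```python
def aggregate(storages, selection):
    """Toy aggregation service: pull matching events from every storage
    and merge them into a single sample ordered by timestamp."""
    sample = []
    for storage in storages:
        sample.extend(event for event in storage if selection(event))
    return sorted(sample, key=lambda event: event["timestamp"])

# hypothetical usage: two storages holding events from different detectors
storage_a = [{"timestamp": 3, "detector": "det_A"},
             {"timestamp": 1, "detector": "det_A"}]
storage_b = [{"timestamp": 2, "detector": "det_B"}]
merged = aggregate([storage_a, storage_b], selection=lambda e: True)
```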


Author(s):  
Saman Tabatabaeian ◽  
Rajendra P. Lal ◽  
Wilson Naik

Distributed data storage systems are used to store data reliably over a distributed collection of storage locations, called peers. Coding schemes are used to store a portion of the data on each peer while ensuring complete retrieval of the data during peer failures. This has applications in various areas such as wireless networks and sensor networks. In this framework we consider a large file to be stored in a distributed manner over a few peers of limited capacity. Each peer stores a portion of the coded data without knowledge of the contents of the other peers. Random coding is one of the coding schemes used for this. In [1], coding coefficients are chosen randomly from a finite field to encode the data; the encoding is a linear combination of file pieces (the pieces being elements of the finite field). The downloader retrieves these coded data from several peers and decodes them to recover the original data. Decoding amounts to solving a system of linear equations over a finite field, which is the most time-consuming step in the whole process. We give a simple C++ implementation of the schemes in [1] and plot the results. We are trying to find a scheme in which the coding vectors can be chosen such that the decoding complexity is reduced significantly. A dynamic setting, where nodes enter and leave the system intermittently, is also discussed.
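The encode/decode cycle described above can be sketched as follows (a Python illustration under assumed parameters; the implementation referenced in the abstract is in C++). Encoding forms random linear combinations of the file pieces over a small prime field, and decoding performs Gauss-Jordan elimination over that field, which is the time-consuming step the abstract refers to.

```python
import random

P = 257  # small prime; every byte value 0..255 embeds directly in GF(257)

def encode(pieces, n_coded):
    """Each coded piece is a random linear combination (over GF(P)) of the
    k file pieces; the coefficient vector travels with the payload."""
    k, m = len(pieces), len(pieces[0])
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randrange(P) for _ in range(k)]
        payload = [sum(c * p[j] for c, p in zip(coeffs, pieces)) % P
                   for j in range(m)]
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gauss-Jordan elimination over GF(P): solve the linear system formed
    by the coefficient vectors to recover the k original pieces."""
    rows = [list(c) + list(v) for c, v in coded]
    for col in range(k):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)           # Fermat inverse
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]
```

Downloading a few more coded pieces than k (as below) makes the coefficient matrix full-rank with overwhelming probability, which is why random coding works without coordination between peers.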


Computers ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 23
Author(s):  
Laskhmi Siva Rama Krishna Talluri ◽  
Ragunathan Thirumalaisamy ◽  
Ramgopal Kota ◽  
Ram Prasad Reddy Sadi ◽  
Ujjwal KC ◽  
...  

In cloud storage systems, users must be able to shut down an application when it is not in use and restart it from the last consistent state when required. BlobSeer is a data storage application, specially designed for distributed systems, that was built as an alternative to the popular open-source Hadoop Distributed File System (HDFS). In a cloud model, all the components need to stop and restart from a consistent state when the user requires it. One of the limitations of the BlobSeer DFS is the possibility of data loss when the system restarts. As such, it is important to provide a consistent start and stop state to BlobSeer components when they are used in a cloud environment, to prevent any data loss. In this paper, we investigate the possibility of BlobSeer providing a consistent-state distributed data storage system through the integration of checkpoint-restart functionality. To demonstrate the availability of a consistent state, we set up a cluster of multiple machines and deploy BlobSeer entities with checkpointing functionality on various machines. We choose uncoordinated checkpoint algorithms for their benefits over the alternatives while integrating the functionality into various BlobSeer components such as the Version Manager (VM) and the Data Provider. The experimental results show that with the integration of the checkpointing functionality, a consistent state can be ensured for a distributed storage system even when the system restarts, preventing possible data loss after the system has encountered various errors and failures.
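The checkpoint-restart idea can be illustrated with a toy component (hypothetical code, not BlobSeer's actual API): each component saves its own state independently of the others, which is the essence of the uncoordinated approach, and restores the last checkpoint when it restarts, so updates made after the last checkpoint are the only ones at risk.

```python
import json
import os

class CheckpointedComponent:
    """Toy stand-in for a component such as the Version Manager: it
    checkpoints its own state on its own schedule (uncoordinated with
    other components) and restores it on restart."""

    def __init__(self, path):
        self.path = path
        self.state = {"version": 0, "log": []}
        if os.path.exists(path):            # restarting: resume from checkpoint
            with open(path) as f:
                self.state = json.load(f)

    def update(self, entry):
        self.state["version"] += 1
        self.state["log"].append(entry)

    def checkpoint(self):
        tmp = self.path + ".tmp"            # write-then-rename keeps the
        with open(tmp, "w") as f:           # checkpoint file consistent even
            json.dump(self.state, f)        # if the process dies mid-write
        os.replace(tmp, self.path)
```

Coordinated protocols would instead force all components to checkpoint at a mutually consistent cut; the uncoordinated variant avoids that synchronization cost, which is the benefit the paper cites.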


Author(s):  
D. V. Gribanov

Introduction. This article is devoted to the legal regulation of digital asset turnover and the possibilities of using distributed computing and distributed data storage systems in the activities of public authorities and entities of public control. The author notes that some national and foreign scientists who study “blockchain” technology (distributed computing and distributed data storage systems) emphasize its usefulness in different activities. The data validation procedure for digital transactions and the legal regulation of the creation, issuance and turnover of digital assets need further attention.

Materials and methods. The research is based on common scientific methods (analysis, analogy, comparison) and particular methods of cognition of legal phenomena and processes (the method of interpretation of legal rules, the technical legal method, the formal legal method and the formal logical one).

Results of the study. The author’s analysis identified the following advantages of using “blockchain” technology in the sphere of public control: a particular validation system; the impossibility of erasing or forging data once entered into the system of distributed data storage; absolute transparency of the succession of actions taken while exercising governing powers; and automatic repetition of recurring actions. The need for fivefold validation of the exercise of governing powers is substantiated: the author stresses that fivefold validation shall ensure complex control over the exercise of powers by civil society, the entities of public control, and the Russian Federation as a federal state holding sovereignty over its territory. The author has also conducted a brief analysis of judicial decisions concerning digital transactions.

Discussion and conclusion. The use of a distributed data storage system makes control easier to exercise by decreasing the risks of forgery, replacement or termination of data. The author suggests defining a digital transaction not only as actions with digital assets, but also as actions toward the modification and addition of information about legal facts with the purpose of its establishment in systems of distributed data storage. The author suggests using systems of distributed data storage for independent validation of information about the activities of bodies of state authority. In the author’s opinion, application of “blockchain” technology may result not only in increased efficiency of public control, but also in the creation of a new form of public control: automatic control. It is concluded that there is currently no legislative basis for the regulation of legal relations concerning distributed data storage.

