erasure codes
Recently Published Documents

TOTAL DOCUMENTS: 216 (FIVE YEARS: 42)
H-INDEX: 23 (FIVE YEARS: 1)
2021 ◽  
Author(s):  
Atthapan Daramas ◽  
Vimal Kumar
Keyword(s):  

2021 ◽  
Author(s):  
Elad Domanovitz ◽  
Gustavo Kasper Facenda ◽  
Ashish Khisti ◽  
Wai-Tian Tan ◽  
John Apostolopoulos
Keyword(s):  

2021 ◽  
Author(s):  
Zhu Yuan ◽  
Xindong You ◽  
Xueqiang Lv ◽  
Ping Xie

Abstract Thanks to their excellent reliability, availability, flexibility and scalability, redundant arrays of independent (or inexpensive) disks (RAID) are widely deployed in large-scale data centers. RAID scaling effectively relieves the storage pressure of the data center and increases both the capacity and I/O parallelism of storage systems. To regain load balancing among all disks, old and new, some data are usually migrated from old disks to new disks. Owing to the unique parity layouts of erasure codes, traditional scaling approaches may incur high migration overhead on RAID-6 scaling. This paper proposes SS6, an efficient approach based on Short-Code for RAID-6 scaling. The approach exhibits three salient features: first, SS6 introduces $\tau$ to determine where new disks should be inserted. Second, SS6 minimizes migration overhead by delineating migration areas. Third, SS6 reduces the XOR calculation cost by optimizing parity updates. The numerical and experimental results demonstrate that (i) SS6 reduces the amount of data migration and improves the scaling performance compared with Round-Robin and Semi-RR under offline scaling, (ii) SS6 decreases the total scaling time compared with Round-Robin and Semi-RR under two real-world I/O workloads, and (iii) the average user response time of SS6 is better than that of the other two approaches both during and after scaling.
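To see why re-striping schemes such as Round-Robin migrate so much data during scaling, the following sketch (a toy model, not the paper's SS6 algorithm; the block count and disk numbers are made up) compares the number of blocks a round-robin re-striping relocates with the theoretical minimum needed to rebalance load onto the new disks.

```python
# Toy comparison of data migration when scaling a striped array from
# `old_disks` to `old_disks + new_disks` disks. Not the paper's SS6 method.

def round_robin_migration(num_blocks, old_disks, new_disks):
    """Blocks whose disk index changes when re-striping round-robin."""
    total = old_disks + new_disks
    return sum(1 for b in range(num_blocks) if b % old_disks != b % total)

def minimal_migration(num_blocks, old_disks, new_disks):
    """Lower bound: just enough blocks to fill the new disks evenly."""
    total = old_disks + new_disks
    return num_blocks * new_disks // total

if __name__ == "__main__":
    blocks, old, new = 12000, 4, 2
    print("round-robin moves:", round_robin_migration(blocks, old, new))  # 8000
    print("minimal moves:    ", minimal_migration(blocks, old, new))      # 4000
```

With these made-up numbers, round-robin re-striping moves two thirds of all blocks, while only one third actually needs to move to balance the load, which is the kind of gap that migration-minimizing scaling approaches target.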


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Peter Michael Schwarz ◽  
Bernd Freisleben

Abstract

Background: DNA is a promising storage medium for high-density, long-term digital data storage. Since DNA synthesis and sequencing are still relatively expensive, the coding methods used to store digital data in DNA should correct errors and avoid unstable or error-prone DNA sequences. Near-optimal rateless erasure codes, also called fountain codes, are particularly interesting for realizing high-capacity, low-error DNA storage systems, as shown by Erlich and Zielinski in their approach based on the Luby transform (LT) code. Since LT is the most basic fountain code, there is a large untapped potential for improvement in using near-optimal erasure codes for DNA storage.

Results: We present NOREC4DNA, a software framework to use, test, compare, and improve near-optimal rateless erasure codes (NORECs) for DNA storage systems. These codes can effectively be used to store digital information in DNA and cope with the restrictions of the DNA medium. Additionally, they can adapt to possible variable lengths of DNA strands and have nearly zero overhead. We describe the design and implementation of NOREC4DNA. Furthermore, we present experimental results demonstrating that NOREC4DNA can flexibly be used to evaluate the use of NORECs in DNA storage systems. In particular, we show that NORECs that apparently have not yet been used for DNA storage, such as Raptor and Online codes, can achieve significant improvements over the LT codes used in previous work. NOREC4DNA is available at https://github.com/umr-ds/NOREC4DNA.

Conclusion: NOREC4DNA is a flexible and extensible software framework for using, evaluating, and comparing NORECs for DNA storage systems.
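As a minimal illustration of the fountain-code idea behind LT-based DNA storage, the sketch below (a toy Python example, not NOREC4DNA's implementation; the functions lt_encode/lt_decode, the uniform degree distribution, and integer "chunks" are simplifications of my own) XORs randomly chosen source chunks into droplets and recovers the chunks with a peeling decoder.

```python
# Toy LT (Luby transform) fountain code, for illustration only; NOREC4DNA
# implements full NORECs (LT, Raptor, Online codes) with proper degree
# distributions and DNA-specific constraints.
import random

def lt_encode(chunks, num_droplets, seed=42):
    """Produce droplets as (frozenset of chunk indices, XOR of those chunks)."""
    rng = random.Random(seed)
    k = len(chunks)
    droplets = []
    for _ in range(num_droplets):
        degree = rng.randint(1, k)                 # uniform degree, not robust soliton
        idxs = frozenset(rng.sample(range(k), degree))
        payload = 0
        for i in idxs:
            payload ^= chunks[i]
        droplets.append((idxs, payload))
    return droplets

def lt_decode(droplets, k):
    """Peeling decoder: repeatedly resolve droplets reduced to degree one."""
    pending = [[set(idxs), val] for idxs, val in droplets]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for entry in pending:
            idxs, val = entry
            for i in list(idxs):                   # peel off already-recovered chunks
                if i in recovered:
                    idxs.discard(i)
                    val ^= recovered[i]
            entry[1] = val
            if len(idxs) == 1:
                i = next(iter(idxs))
                if i not in recovered:
                    recovered[i] = val
                    progress = True
    return [recovered.get(i) for i in range(k)]    # None marks unrecovered chunks

if __name__ == "__main__":
    data = [0x41, 0x42, 0x43, 0x44, 0x45]          # toy chunks as small integers
    droplets = lt_encode(data, num_droplets=20)
    print(lt_decode(droplets, k=len(data)))        # may contain None if decoding stalls
```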


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Dan Tang ◽  
Hongliang Cai

Erasure codes are widely used in distributed storage because they require much less redundancy than replication. However, current research on erasure codes mainly focuses on encoding methods, while there are few studies on decoding methods. In this paper, a novel erasure decoding method is proposed; it is a general decoding method that can be used over both non-binary finite fields and the binary finite field. Decoding of failed symbols is realized by transforming the decoding transformation matrix, and a minor modification of the method conveniently avoids the problem of excessive access overhead. The correctness of the method is proved by theoretical analysis, and experimental comparisons with traditional methods show that the proposed method achieves better decoding efficiency and lower reconstruction bandwidth.
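As a generic illustration of matrix-based erasure decoding over the binary field (not the specific transformation process proposed in the paper; the single-parity code, symbol values, and helper gf2_solve are made up for the example), the sketch below recovers the source symbols from any set of surviving symbols whose encoding rows are linearly independent, using Gaussian elimination with XOR arithmetic.

```python
# Generic matrix-based erasure decoding over GF(2), for illustration only:
# solve for the k source symbols from k surviving coded symbols.

def gf2_solve(rows, values):
    """Solve A x = v over GF(2); symbols are ints treated bitwise in parallel."""
    k = len(rows)
    A = [row[:] for row in rows]
    v = values[:]
    for col in range(k):
        pivot = next(r for r in range(col, k) if A[r][col] == 1)  # assumes invertible rows
        A[col], A[pivot] = A[pivot], A[col]
        v[col], v[pivot] = v[pivot], v[col]
        for r in range(k):                         # eliminate this column from other rows
            if r != col and A[r][col] == 1:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
                v[r] ^= v[col]
    return v

if __name__ == "__main__":
    d = [0x11, 0x22, 0x33]                         # source symbols (bytes as ints)
    parity = d[0] ^ d[1] ^ d[2]                    # single-parity code: tolerates one erasure
    # suppose the symbol holding d[1] was lost; decode from the survivors' encoding rows
    surviving_rows = [[1, 0, 0], [0, 0, 1], [1, 1, 1]]
    surviving_vals = [d[0], d[2], parity]
    print([hex(x) for x in gf2_solve(surviving_rows, surviving_vals)])  # recovers all of d
```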


2021 ◽  
Author(s):  
Gustavo Kasper Facenda ◽  
Elad Domanovitz ◽  
Ashish Khisti ◽  
Wai-Tian Tan ◽  
John Apostolopoulos

2021 ◽  
Author(s):  
Anan Zhou ◽  
Benshun Yi ◽  
Mian Xiang ◽  
Laigan Luo

Abstract A distributed storage system (DSS) is an emerging paradigm that provides reliable storage services for various source data. As the fault-tolerance scheme for a DSS, erasure codes are required to provide redundancy with high fault tolerance and low cost. However, existing coding schemes cannot meet these requirements well. Thus, it becomes an important yet challenging issue to find a code that stores various source data with high fault tolerance and low cost. In this paper, a novel construction of repairable fountain codes with unequal locality, combined with a partial duplication technique, is proposed, namely the PD-ULRFC scheme. We construct a multi-tier heterogeneous storage network in which a data core, processing units and storage nodes collaboratively store and transmit data. Moreover, the proposed PD-ULRFC scheme can reduce the repair and download cost by sacrificing a little extra storage space. Furthermore, expressions for the repair cost and download cost are derived to analyze the performance of the PD-ULRFC scheme. The simulation results demonstrate that the PD-ULRFC scheme significantly outperforms other redundancy schemes in communication cost savings.
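The storage-versus-repair tradeoff that partial duplication exploits can be sketched with a back-of-the-envelope comparison (a toy model with made-up parameters and an MDS-style code swapped in, not the cost expressions derived in the paper): a coded chunk is repaired by contacting k helpers, whereas a chunk that also keeps a plain copy is repaired by fetching that single copy, at the price of extra storage.

```python
# Toy storage/repair comparison (not the paper's PD-ULRFC cost analysis):
# an (n, k) MDS-style code versus the same code with d data chunks duplicated.

def tradeoff(n, k, d, chunk_mb):
    """Return (plain storage, storage with d duplicates, coded repair, copy repair) in MB."""
    storage_plain = n * chunk_mb                 # k data chunks + (n - k) coded chunks
    storage_dup = (n + d) * chunk_mb             # d extra plain copies
    repair_coded = k * chunk_mb                  # MDS-style repair downloads k chunks
    repair_copy = chunk_mb                       # duplicated chunk: fetch its single copy
    return storage_plain, storage_dup, repair_coded, repair_copy

if __name__ == "__main__":
    print(tradeoff(n=14, k=10, d=2, chunk_mb=64))
```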


2021 ◽  
Author(s):  
Ruizhen Wu ◽  
Yan Wu ◽  
Jingjing Chen ◽  
Lin Wang ◽  
Mingming Wang
Keyword(s):  

2021 ◽  
Vol 7 ◽  
pp. e351
Author(s):  
Muhammad Rizwan Ali ◽  
Farooq Ahmad ◽  
Muhammad Hasanain Chaudary ◽  
Zuhaib Ashfaq Khan ◽  
Mohammed A. Alqahtani ◽  
...  

The cloud is a shared pool of systems that provides multiple resources through the Internet, allowing users to access substantial computing power from their own computers. However, with the rapid migration of applications to the cloud, more disks and servers are required to store huge amounts of data. Most cloud storage service providers replicate full copies of data over multiple data centers to ensure data availability, but replication is not only costly, it also wastes energy resources. Erasure codes reduce the storage cost by splitting a data object into n chunks, encoding them into n + k chunks, and storing these chunks in n + k different data centers to tolerate k failures; however, they incur extra computation cost to regenerate the data object. Cache-A Replica On Modification (CAROM) is a hybrid file system that combines the benefits of replication and erasure codes to reduce access latency and bandwidth consumption. However, no formal analysis of CAROM that can validate its performance is available in the literature. To address this issue, this research first presents a colored Petri net based formal model of CAROM. It then presents a formal analysis and simulation to validate the performance of the proposed system. This paper contributes towards the utilization of resources in clouds by presenting a comprehensive formal analysis of CAROM.
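The storage saving that motivates hybrids such as CAROM can be illustrated with a quick calculation (an illustrative sketch with made-up object sizes and code parameters; it is not part of the CAROM formal model): tolerating k failures with full replication needs k + 1 copies, while an (n + k, n) erasure code needs only an (n + k)/n expansion.

```python
# Storage needed to tolerate k failures: full replication versus an
# (n + k, n) erasure code spread across n + k data centers.
# Illustrative numbers only; this is not part of the CAROM model.

def replication_storage(object_gb, k_failures):
    return object_gb * (k_failures + 1)                    # k + 1 full copies survive any k losses

def erasure_storage(object_gb, n_chunks, k_parity):
    return object_gb * (n_chunks + k_parity) / n_chunks    # n data chunks + k coded chunks

if __name__ == "__main__":
    obj = 100  # GB
    print("replication, k = 2:    ", replication_storage(obj, 2), "GB")       # 300 GB
    print("erasure, n = 10, k = 2:", erasure_storage(obj, 10, 2), "GB")       # 120 GB
```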

