HS-RAID2: Optimizing Small Write Performance in HS-RAID

2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Yongfeng Dong ◽  
Jingyu Liu ◽  
Jie Yan ◽  
Hongpu Liu ◽  
Youxi Wu

HS-RAID (Hybrid Semi-RAID), a power-aware RAID, saves energy by grouping the disks in the array. However, all write operations in HS-RAID are small writes, which severely degrade the storage system's performance. In this paper, we propose a redundancy algorithm, the data incremental parity algorithm (DIP), which HS-RAID employs to minimize the write penalty and to improve the performance and reliability of the storage system. The experimental results show that HS-RAID2 (HS-RAID with DIP) is remarkably faster and more reliable than HS-RAID.
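For context, the small-write penalty that DIP targets comes from the classic RAID read-modify-write parity update: each small write costs two extra reads and one extra write. The sketch below illustrates that baseline behavior only; it is a toy illustration with invented names, not the paper's DIP algorithm.

```python
class Disk:
    """Toy block device: stripe number -> fixed-size chunk."""
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.blocks = {}

    def read(self, stripe):
        return self.blocks.get(stripe, bytes(self.chunk_size))

    def write(self, stripe, data):
        self.blocks[stripe] = bytes(data)


def small_write(data_disk, parity_disk, stripe, new_data):
    """Update one chunk: 2 reads + 2 writes -- the small-write penalty."""
    old_data = data_disk.read(stripe)       # read 1: old data
    old_parity = parity_disk.read(stripe)   # read 2: old parity
    # P_new = P_old XOR D_old XOR D_new folds the change into the parity.
    new_parity = bytes(p ^ o ^ n for p, o, n in
                       zip(old_parity, old_data, new_data))
    data_disk.write(stripe, new_data)       # write 1: new data
    parity_disk.write(stripe, new_parity)   # write 2: new parity
```

A full-stripe write avoids the two reads entirely, which is why small writes in particular are the costly case that incremental-parity schemes try to optimize.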

2021 ◽  
Vol 17 (3) ◽  
pp. 1-24
Author(s):  
Duwon Hong ◽  
Keonsoo Ha ◽  
Minseok Ko ◽  
Myoungjun Chun ◽  
Yoona Kim ◽  
...  

A recent ultra-large SSD (e.g., a 32-TB SSD) provides many benefits in building cost-efficient enterprise storage systems. Owing to its large capacity, however, when such an SSD fails in a RAID storage system, a long rebuild overhead is inevitable, because RAID reconstruction requires a huge amount of data to be copied among SSDs. Motivated by modern SSD failure characteristics, we propose a new recovery scheme, called reparo, for RAID storage systems with ultra-large SSDs. Unlike existing RAID recovery schemes, reparo repairs a failed SSD at the NAND die granularity without replacing it with a new SSD, thus avoiding most of the inter-SSD data copies during RAID recovery. When a NAND die of an SSD fails, reparo exploits the multi-core processor of the SSD controller to identify the LBAs on the failed NAND die and to recover their data. Furthermore, reparo ensures no negative post-recovery impact on the performance and lifetime of the repaired SSD. Experimental results using 32-TB enterprise SSDs show that reparo can recover from a NAND die failure about 57 times faster than the existing rebuild method, with little degradation in SSD performance and lifetime observed after recovery.
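The die-granularity idea can be pictured as follows: only the LBAs whose physical pages live on the failed die are rebuilt from surviving RAID members and then remapped within the same SSD. This is a hedged sketch under assumed structures (a flat mapping table, `read_peers`/`write_healthy` callbacks), not the actual reparo implementation.

```python
# Hedged sketch of die-granularity repair: rebuild only the LBAs mapped to
# the failed NAND die, from surviving RAID members, then remap them inside
# the same SSD. The mapping table and the read_peers/write_healthy callbacks
# are hypothetical simplifications, not the actual reparo implementation.

def repair_failed_die(mapping_table, failed_die, read_peers, write_healthy):
    """mapping_table: lba -> (die, block, page). Returns #LBAs repaired."""
    failed_lbas = [lba for lba, (die, _blk, _pg) in mapping_table.items()
                   if die == failed_die]
    for lba in failed_lbas:
        # RAID-5-style reconstruction: XOR the corresponding pages of the
        # surviving stripe members to regenerate the lost page.
        data = None
        for peer_page in read_peers(lba):
            data = peer_page if data is None else bytes(
                x ^ y for x, y in zip(data, peer_page))
        # Remap the LBA to a page on a healthy die of the same SSD.
        mapping_table[lba] = write_healthy(lba, data)
    return len(failed_lbas)
```

Scanning the mapping table and reconstructing LBAs are independent per LBA, which is consistent with the paper's use of the controller's multi-core processor to parallelize the repair.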


Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 358 ◽  
Author(s):  
Sangmin Suh

This note presents an estimation-error-based disturbance observer (EEDOB) to reduce the effects of external disturbances. In the proposed control structure, the difference between the estimator output and the plant output is treated as an equivalent disturbance; the proposed disturbance observer (DOB) is therefore activated whenever a disturbance appears. Unlike a conventional DOB, this method requires neither the plant inverse model nor additional stabilizing filters. In addition, the proposed method always guarantees closed-loop stability, which clearly distinguishes it from conventional DOBs. To verify its effectiveness, the method was applied to commercial storage systems. The experimental results confirm that tracking performance is improved by 23.5%.
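A minimal discrete-time sketch of the idea: the gap between a nominal model's predicted output and the measured plant output is taken as the equivalent disturbance and subtracted from the control input. The first-order model and the gains below are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of an estimation-error-based disturbance rejection step.
# Nominal model x[k+1] = a*x[k] + b*u[k], y = x; a, b, k are made-up values.

def eedob_step(state, y_meas, a=0.9, b=0.1, k=0.5):
    """One control step: nominal feedback minus the estimated disturbance."""
    y_hat = state["x_hat"]            # estimator (nominal model) output
    d_hat = y_meas - y_hat            # estimation error ~ equivalent disturbance
    u = -k * y_meas - d_hat           # feedback control, disturbance canceled
    state["x_hat"] = a * state["x_hat"] + b * u   # propagate nominal model
    return u
```

Starting from `state = {"x_hat": 0.0}` and calling `eedob_step` once per sample, `d_hat` stays near zero in the disturbance-free case, so the observer only intervenes when a disturbance actually appears, matching the activation behavior described above.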


2021 ◽  
Vol 2 (2) ◽  
pp. 1-16
Author(s):  
Ru Yang ◽  
Yuhui Deng ◽  
Yi Zhou ◽  
Ping Huang

Restoring data is the main purpose of data backup in storage systems. The fragmentation issue, caused by physically scattering logically continuous data across a variety of disk locations, negatively impacts the restore performance of a deduplication system. Rewriting algorithms alleviate the fragmentation problem and improve the restore speed of a deduplication system; however, they sacrifice a significant amount of deduplication ratio, leading to a huge waste of storage space. Furthermore, traditional backup approaches treat file metadata and chunk metadata identically, which causes frequent on-disk metadata accesses. In this article, we start by analyzing the storage characteristics of backup metadata. An intriguing finding is that with 10 million files, the file metadata takes up only approximately 340 MB. Motivated by this finding, we propose a Classified-Metadata-based Restoring method (CMR) that classifies backup metadata into file metadata and chunk metadata. Because the file metadata occupies only a meager amount of space, CMR maintains all file metadata in memory, whereas chunk metadata are aggressively prefetched to memory in a greedy manner. A deduplication system with CMR in place exhibits three salient features: (i) it avoids the additional overhead of rewriting algorithms by reducing the number of disk reads during a restore, (ii) it increases the restore throughput without sacrificing the deduplication ratio, and (iii) it thoroughly leverages the hardware resources to boost restore performance. To quantitatively evaluate CMR, we compare it against two state-of-the-art approaches, a history-aware rewriting method (HAR) and a context-based rewriting scheme (CAP). The experimental results show that, compared to HAR and CAP, CMR reduces the restore time by 27.2% and 29.3%, respectively, and improves the deduplication ratio by 1.91% and 4.36%, respectively.
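The restore path suggested by this design can be sketched as follows: file metadata sits entirely in an in-memory structure, while a single container read greedily prefetches all the chunks it holds, so later references hit in memory. All names (`read_container`, the metadata layout) are hypothetical, not CMR's actual code.

```python
# Hedged sketch of a classified-metadata restore: file metadata is small
# enough to pin in memory (~340 MB for 10M files per the finding above),
# while chunks are prefetched a whole container at a time, so each disk
# read serves many subsequent chunk references.

def restore(file_metadata, read_container, write_file_chunk):
    """file_metadata: in-memory list of (path, [chunk_fingerprint, ...])."""
    chunk_cache = {}                          # fingerprint -> chunk bytes
    for path, fingerprints in file_metadata:  # no on-disk file-metadata I/O
        for fp in fingerprints:
            if fp not in chunk_cache:
                # One disk read pulls in the entire container holding fp;
                # subsequent chunks from that container hit in memory.
                chunk_cache.update(read_container(fp))
            write_file_chunk(path, chunk_cache[fp])
```

A real system would bound `chunk_cache` and evict containers; the point of the sketch is that restore speed comes from fewer disk reads rather than from rewriting chunks at backup time, which is why the deduplication ratio is untouched.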


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2041
Author(s):  
Liyan Zhu ◽  
Chuqiao Xiao ◽  
Xueqing Gong

The emerging decentralized storage systems (DSSs), such as InterPlanetary File System (IPFS), Storj, and Sia, provide a new storage model. Instead of being centrally managed, the data are sliced up and distributed across the nodes of the network. Furthermore, each data object is uniquely identified by a cryptographic hash (ObjectId) and can only be retrieved by that ObjectId. Compared with the search functions provided by existing centralized storage systems, the application scenarios of DSSs are therefore subject to certain restrictions. In this paper, we first apply a decentralized B+Tree and HashMap to DSSs to provide keyword search. Both indexes are kept in blocks. Since these blocks may be scattered over multiple nodes, we ensure that every operation involves as few blocks as possible to reduce network cost and response time. In addition, version control and version merging algorithms are designed to organize the indexes effectively and to facilitate data integration. The experimental results show that our indexes offer excellent availability and scalability.
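The core mechanic, index nodes serialized into content-addressed blocks, can be sketched as below. Here `put_block`/`get_block` stand in for the DSS object API and are assumptions; the point is that each pointer chase during a lookup costs one block fetch, which is why minimizing the number of blocks touched matters.

```python
import hashlib
import json

# Hedged sketch: every index node is serialized into a block whose ObjectId
# is the hash of its content, matching the DSS addressing model described
# above. The store dict stands in for the network of nodes.

def put_block(store, node):
    raw = json.dumps(node, sort_keys=True).encode()
    oid = hashlib.sha256(raw).hexdigest()     # ObjectId = cryptographic hash
    store[oid] = raw
    return oid

def get_block(store, oid):
    return json.loads(store[oid])

def lookup(store, root_oid, key):
    """Walk a B+Tree stored as blocks; every hop costs one block fetch."""
    node = get_block(store, root_oid)
    while not node["leaf"]:
        # Child i covers keys in [keys[i-1], keys[i]).
        i = sum(1 for k in node["keys"] if key >= k)
        node = get_block(store, node["children"][i])
    return dict(zip(node["keys"], node["values"])).get(key)
```

Note that a content-addressed node gets a new ObjectId whenever it changes, so an update rewrites the path from leaf to root; organizing those successive roots is exactly the kind of problem the version control and merging algorithms above address.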


2003 ◽  
Author(s):  
Edwin P. Walker ◽  
Yi Zhang ◽  
Alexandr Dvornik ◽  
Peter Rentzepis ◽  
Sadik Esener

Author(s):  
HooYoung Ahn ◽  
Junsu Kim ◽  
YoonJoon Lee

Devices in the IoE (Internet of Everything) environment generate massive data from various sensors. To store and process the rapidly arriving large-scale data, SSDs are used to improve the performance and reliability of storage systems. However, SSDs suffer from a problem called write amplification, which is caused by their out-of-place update characteristic. As write amplification increases, it degrades I/O performance and shortens the SSDs' lifetime. This paper presents a new approach to reducing the write amplification of SSD arrays: a new parity update scheme, called LPUS. LPUS transforms random parity updates into sequential writes to additional log blocks in the SSD array by using parity logs and lazy parity updates. The experimental results show that LPUS reduces write amplification by up to 37% and the number of erases by up to 50% with a reasonable amount of log space.
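A hedged sketch of parity logging with lazy updates, the mechanism LPUS is built on: each small write appends a parity delta to a sequential log instead of updating the parity page in place, and the deltas are folded into parity lazily in one batched pass. Names are illustrative, not LPUS's code.

```python
# Hedged sketch of parity logging with lazy parity updates. Appends replace
# random in-place parity writes, which is what reduces write amplification
# on out-of-place-update media such as SSDs.

def log_parity_delta(parity_log, stripe, old_data, new_data):
    """Append a parity delta sequentially instead of updating parity in place."""
    delta = bytes(o ^ n for o, n in zip(old_data, new_data))
    parity_log.append((stripe, delta))        # sequential write to a log block

def lazy_parity_merge(parity_log, read_parity, write_parity):
    """Fold all logged deltas into their parity pages in one batched pass."""
    for stripe, delta in parity_log:
        parity = read_parity(stripe)
        write_parity(stripe, bytes(p ^ d for p, d in zip(parity, delta)))
    parity_log.clear()
```

Because XOR is associative, deltas for the same stripe can also be combined inside the log before the merge, further reducing the number of parity-page rewrites and erases.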


1988 ◽  
Vol 102 ◽  
pp. 357-360
Author(s):  
J.C. Gauthier ◽  
J.P. Geindre ◽  
P. Monier ◽  
C. Chenais-Popovics ◽  
N. Tragin ◽  
...  

In order to achieve a nickel-like X-ray laser scheme, we need a tool to determine the parameters that characterise the high-Z plasma. The aim of this work is to study gold laser plasmas and to compare experimental results with a collisional-radiative model that describes nickel-like ions. The electron temperature and density are measured from the emission of an aluminium tracer and compared with the predictions of the nickel-like model for pure gold. The results show that the density and temperature can be estimated in a pure gold plasma.

