CRIM: Conditional Remapping to Improve the Reliability of Solid-State Drives with Minimizing Lifetime Loss

2018, Vol 2018, pp. 1-10
Author(s): Youngpil Kim, Hyunchan Park, Cheol-Ho Hong, Chuck Yoo

Solid-state drives (SSDs) have become popular as main storage devices. However, over time the reliability of an SSD degrades due to bit errors, which poses a serious issue. Periodic remapping (PR) has been suggested to overcome this issue, but it has a critical weakness: PR increases lifetime loss. Therefore, we propose the conditional remapping invocation method (CRIM) to sustain reliability without lifetime loss. CRIM uses a probability-based threshold to decide when to invoke the remapping operation. We evaluate the effectiveness of CRIM using real workload trace data. In our experiments, we show that CRIM extends SSD lifetime beyond PR by 12.6% to 17.9% of the 5-year warranty time. In addition, we show that CRIM reduces the bit error probability of the SSD by up to 73 times, in terms of typical bit error rate, compared with PR.
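The abstract does not give the exact threshold formulation. The following is a minimal sketch of a probability-gated remapping decision, with an assumed toy error model and threshold, only to illustrate the contrast with unconditional periodic remapping:

```python
# Illustrative sketch only: a probability-gated remapping decision.
# The error model and threshold below are assumptions for illustration,
# not the actual CRIM formulation.

def raw_bit_error_prob(pe_cycles: int, retention_days: float) -> float:
    """Toy retention-error model: RBER grows with wear and retention time."""
    base = 1e-6
    return base * (1 + pe_cycles / 1000.0) * (1 + retention_days / 30.0)

def should_remap(pe_cycles: int, retention_days: float,
                 rber_threshold: float = 1e-5) -> bool:
    """Invoke remapping only when the estimated RBER crosses the threshold,
    instead of remapping every block on a fixed period (as PR would)."""
    return raw_bit_error_prob(pe_cycles, retention_days) >= rber_threshold

# Example: a lightly worn, recently written block is left alone, saving the
# erase/program cycle that unconditional periodic remapping would spend.
print(should_remap(pe_cycles=200, retention_days=10))    # False
print(should_remap(pe_cycles=2500, retention_days=300))  # True
```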

2011, Vol 58 (1), pp. 2-10
Author(s): Shuhei Tanakamaru, Mayumi Fukuda, Kazuhide Higuchi, Atsushi Esumi, Mitsuyoshi Ito, ...

Sensors, 2019, Vol 19 (20), pp. 4412
Author(s): Claudio Ferreira Dias, Eduardo Rodrigues de Lima, Gustavo Fraidenraich

We derive exact closed-form expressions for the Long Range (LoRa) bit error probability and diversity order for channels subject to Nakagami-m, Rayleigh, and Rician fading. The analytical expressions are compared with numerical results, confirming the accuracy of the proposed exact expressions. In the limiting case of the Nakagami-m and Rician parameters, the bit error probability expressions reduce to the non-fading case.
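The closed-form expressions themselves are not reproduced in the abstract. As a hedged illustration of how such results are typically checked numerically, the sketch below estimates the bit error rate of chirp-spread-spectrum (LoRa-style) symbols with non-coherent FFT detection over a flat Rayleigh-fading channel by Monte Carlo simulation; the spreading factor, SNR, chirp form, and bit mapping are all assumptions, not the paper's setup.

```python
# Hedged Monte-Carlo sketch (not the paper's derivation): estimate LoRa-style
# BER over a flat Rayleigh-fading channel with non-coherent FFT detection.
import numpy as np

rng = np.random.default_rng(0)
SF = 7                       # spreading factor -> M = 2^SF chips per symbol (assumed)
M = 2 ** SF
snr_db = 0.0                 # per-chip SNR in dB (assumed value)
n_symbols = 20_000

k = np.arange(M)
base_chirp = np.exp(1j * np.pi * k * k / M)   # up-chirp (illustrative form)

tx_syms = rng.integers(0, M, n_symbols)
noise_std = np.sqrt(0.5 * 10 ** (-snr_db / 10.0))   # per-dimension noise std
bit_errors = 0

for s in tx_syms:
    tx = base_chirp * np.exp(2j * np.pi * s * k / M)        # symbol s as a shifted chirp
    h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)     # Rayleigh fading tap
    noise = noise_std * (rng.normal(size=M) + 1j * rng.normal(size=M))
    rx = h * tx + noise
    dechirped = rx * np.conj(base_chirp)                    # remove the base chirp
    s_hat = np.argmax(np.abs(np.fft.fft(dechirped)))        # non-coherent detection
    bit_errors += bin(int(s) ^ int(s_hat)).count("1")       # naive (non-Gray) bit mapping

print("estimated BER:", bit_errors / (n_symbols * SF))
```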


2012, pp. 13-19
Author(s): Riaz Ahmad Qamar, Mohd Aizaini Maarof, Subariah Ibrahim

A quantum key distribution (QKD) protocol known as BB84 was developed in 1984 by Charles Bennett and Gilles Brassard. The protocol works in two phases: quantum state transmission and conventional post-processing. In the first phase of BB84, raw key elements are distributed between two legitimate users by sending encoded photons through a quantum channel, while in the second phase a common secret key is obtained from the correlated raw key elements by exchanging messages through a public channel, e.g., a network or the Internet. The secret key so obtained is used for cryptographic purposes. Reconciliation is a compulsory part of post-processing and hence of any quantum key distribution protocol. The performance of a reconciliation protocol depends on the generation rate of the common secret key, the number of bits disclosed, and the error probability in the common secret key. These characteristics can be achieved by using a less interactive reconciliation protocol that can handle a higher initial quantum bit error rate (QBER). In this paper, we use a simple Bose-Chaudhuri-Hocquenghem (BCH) error correction algorithm with a simplified syndrome table to obtain an efficient reconciliation protocol that can handle a higher quantum bit error rate and outputs a common key with zero error probability. The proposed protocol is efficient at removing errors: it can remove all errors even if the QBER is 60%, assuming the post-processing channel is an authenticated binary symmetric channel (BSC).
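The abstract does not give the BCH code parameters or the simplified syndrome table. As a minimal sketch of syndrome-table reconciliation, the following uses the Hamming(7,4) code, the simplest single-error-correcting BCH code; the protocol framing (Alice publishing her block's syndrome over the authenticated channel) and all names are illustrative assumptions, not the paper's construction.

```python
# Toy syndrome-table reconciliation with the Hamming(7,4) code
# (single-error-correcting; not the BCH construction used in the paper).
import numpy as np

# Parity-check matrix H: column i is the binary representation of i+1, so a
# single flipped bit at position i produces exactly that syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# Simplified syndrome table: syndrome -> position of the single flipped bit
syndrome_table = {tuple(H[:, i]): i for i in range(7)}

def reconcile(bob_block: np.ndarray, alice_syndrome: np.ndarray) -> np.ndarray:
    """Syndrome-based reconciliation of one 7-bit raw-key block.

    Alice publishes H @ a (mod 2); Bob XORs it with his own syndrome. The
    result is the syndrome of the error pattern between the two blocks, so a
    single disagreement can be located in the table and flipped.
    """
    diff = tuple((H @ bob_block + alice_syndrome) % 2)
    corrected = bob_block.copy()
    if any(diff):                                    # non-zero -> one bit differs
        corrected[syndrome_table[diff]] ^= 1
    return corrected

# Example: Bob's raw-key block differs from Alice's in one position.
alice = np.array([1, 0, 1, 1, 0, 0, 1])
bob = alice.copy(); bob[4] ^= 1                      # quantum-channel bit flip
assert np.array_equal(reconcile(bob, H @ alice % 2), alice)
```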


2020, Vol 10 (12), pp. 4341
Author(s): Kyusik Kim, Seongmin Kim, Taeseok Kim

Differentiated I/O services for applications with distinct requirements are important for user satisfaction. The nonvolatile memory express (NVMe) solid-state drive (SSD) architecture can improve I/O bandwidth with its numerous submission queues, but the quality of service (QoS) of individual I/O requests is not guaranteed. In particular, if many I/O requests are pending in the submission queues due to a bursty I/O workload, urgent I/O requests can be delayed, and consequently the QoS requirements of applications that need fast service cannot be met. This paper presents a scheme that handles urgent I/O requests without delay even when many I/O requests are pending. Since the host cannot control the I/O requests already pending in the submission queues, the host memory buffer (HMB), a portion of host DRAM that the SSD controller can access, is used to process urgent I/O requests. Instead of sending urgent I/O requests to the SSD through the legacy I/O path, the scheme removes the queuing delay by inserting them directly into the HMB. Emulator experiments demonstrated that the proposed scheme can reduce the average and tail latencies by up to 99% and 86%, respectively.
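The abstract does not detail the HMB layout or the driver-level mechanism. The following is only a conceptual sketch, under assumed names, of the dispatching idea of routing urgent requests around long submission queues:

```python
# Conceptual sketch of urgent-vs-normal dispatch (all names are illustrative
# assumptions; the real scheme operates at the NVMe driver/controller level).
from collections import deque
from dataclasses import dataclass

@dataclass
class IORequest:
    lba: int
    length: int
    urgent: bool = False

class Dispatcher:
    def __init__(self):
        self.submission_queue = deque()   # legacy NVMe submission queue (may grow long)
        self.hmb_slots = deque()          # host-memory-buffer region read by the SSD

    def submit(self, req: IORequest) -> None:
        if req.urgent:
            # Urgent requests bypass the possibly long submission queue and are
            # placed in the HMB, so the controller can pick them up immediately.
            self.hmb_slots.append(req)
        else:
            self.submission_queue.append(req)

d = Dispatcher()
d.submit(IORequest(lba=1000, length=8))                # bursty background I/O
d.submit(IORequest(lba=42, length=1, urgent=True))     # latency-sensitive read
print(len(d.submission_queue), len(d.hmb_slots))       # 1 1
```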


2018, Vol 7 (4.7), pp. 204
Author(s): Iskandar N. Nasyrov, Ildar I. Nasyrov, Rustam I. Nasyrov, Bulat A. Khairullin

The problem of data ambiguity in heterogeneous sets of equipment reliability indicators is considered. In practice, manufacturers do not always fill the SMART parameters unambiguously with the corresponding values across their different hard disk drive models. In addition, some parameters are sometimes empty, while others contain only zero values. The scientific task of the research is to define a set of parameters that allows a comparative reliability assessment of each individual storage device, of any model from any manufacturer, for its timely replacement. The following conditions were used to select parameters suitable for evaluation by their relative values:
1) the parameter values for normally operating drives should always be greater or lower than for the failed ones;
2) the values should change monotonically across the series: normally operating, withdrawn prematurely, failed;
3) the first two conditions must hold both in general and in particular, for example, for the drives of each brand separately.
The values were averaged separately for normally operating, prematurely decommissioned, and failed storage media; the maximum of these three averages was taken as 100%, and the relative distribution of values for each parameter was studied. As a result of this relative-value study of their suitability for evaluating the reliability of data storage devices, five parameters were selected (5 – "Reallocated sectors count", 7 – "Seek error rate", 184 – "End-to-end error", 196 – "Reallocation event count", 197 – "Current pending sector count"), plus another four that require more careful analysis (1 – "Raw read error rate", 10 – "Spin-up retry counts", 187 – "Reported uncorrectable errors", 198 – "Uncorrectable sector counts"), and one (194 – "Hard disk assembly temperature") for prospective use in solid-state drives.
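As a rough sketch of the screening procedure described above (the column names and sample data are invented for illustration, not taken from the study), the per-group averaging, rescaling to 100%, and monotonicity check might look as follows:

```python
# Hedged sketch of the relative-value screening described above.
# Column names and the sample data are invented for illustration.
import pandas as pd

drives = pd.DataFrame({
    "status": ["normal", "normal", "decommissioned", "failed", "failed"],
    "smart_5_reallocated": [0, 2, 8, 35, 50],
    "smart_197_pending":   [0, 0, 3, 12, 20],
})

def relative_profile(df: pd.DataFrame, column: str) -> pd.Series:
    """Average a SMART attribute per drive group and rescale so that the
    largest group average equals 100%, as in the screening procedure above."""
    means = df.groupby("status")[column].mean()
    return 100.0 * means / means.max()

def is_monotonic(profile: pd.Series) -> bool:
    """Check the monotonicity condition over the ordered groups
    normal -> decommissioned -> failed."""
    ordered = profile.reindex(["normal", "decommissioned", "failed"])
    return bool(ordered.is_monotonic_increasing or ordered.is_monotonic_decreasing)

for col in ["smart_5_reallocated", "smart_197_pending"]:
    prof = relative_profile(drives, col)
    print(col, dict(prof.round(1)), "monotonic:", is_monotonic(prof))
```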


Author(s):  
Dwight A. Haworth

This paper discusses the history of the sort-merge routine and the impacts of hardware limitations on the performance of sort-merge processing. The results of comparing a single-step sort-merge with a two-step sort-merge in a hard-disk drive (HDD) environment are presented to show that a two-step sort-merge can reduce total processing time. An evaluation is made of the total transfer time of three sort-merge variations without reference to seek time or rotational delay. This evaluation prepares the statistics for application to the solid-state drive (SSD) environment, and the conclusion is that sort-merge routines optimized for the HDD environment are sub-optimal when applied to the SSD environment. In addition, the sizes of the work files used by the three sort-merge routines are analyzed, and it is demonstrated that sort-merge routines optimized for the HDD environment generate unnecessary wear when applied to the SSD environment. Further, it is demonstrated that the key sorting routine should be preferred over the other sort-merge routines in an SSD environment.
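As a hedged sketch of the key (tag) sorting idea the paper favors for SSDs (the record layout, field sizes, and function names below are assumptions), only the keys and record offsets are sorted, so work-file traffic, and hence flash wear, stays small:

```python
# Hedged sketch of a key (tag) sort: sort only (key, offset) pairs instead of
# rewriting full records into work files. Record layout is an assumption.
import struct

RECORD_SIZE = 64            # fixed-length records (assumed)
KEY_OFFSET, KEY_SIZE = 0, 8  # 8-byte big-endian key at the start of each record

def key_sort(path: str) -> list[int]:
    """Return record offsets ordered by key, without rewriting the records."""
    tags = []
    with open(path, "rb") as f:
        offset = 0
        while (record := f.read(RECORD_SIZE)):
            key = struct.unpack_from(">Q", record, KEY_OFFSET)[0]
            tags.append((key, offset))
            offset += RECORD_SIZE
    tags.sort()                               # sort only the small tag array
    return [off for _, off in tags]

# Records can then be fetched in key order by seeking to each offset, which is
# cheap on an SSD because random reads carry no seek or rotational penalty.
```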

