DSFTL: An Efficient FTL for Flash Memory Based Storage Systems

Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 145 ◽  
Author(s):  
Suk-Joo Chae ◽  
Ronnie Mativenga ◽  
Joon-Young Paik ◽  
Muhammad Attique ◽  
Tae-Sun Chung

Flash memory is widely used in solid-state drives (SSDs), smartphones, and similar devices because of its non-volatility, low power consumption, fast access speed, and shock resistance. Because the hardware characteristics of flash memory differ from those of hard disk drives (HDDs), a software layer called the FTL (Flash Translation Layer) was introduced; its role is to make a flash memory device appear as a block device to its host. However, because of the erase-before-write nature of flash memory, flash blocks must be continually reclaimed through garbage collection (GC) of invalid pages, which incurs costly overhead. Previous hybrid mapping schemes suffer from three problems that cause GC overhead. First, a partial merge causes more page copies than a switch merge, yet many authors concentrate only on reducing full merges. Second, the association between a data block and a log block lowers the space utilization of the log block and also triggers very costly full merges. Third, the space utilization of the data block is low because a data block that still has many free pages may be merged. We therefore propose a new FTL named DSFTL (Dynamic Setting for FTL). DSFTL uses many SW (sequential write) log blocks to increase the number of switch merges and decrease the number of partial merges. In addition, DSFTL handles data blocks and log blocks dynamically to reduce erase operations and costly full merges, and it prevents data blocks with many free pages from being merged, thereby increasing their space utilization. Our extensive experimental results show that DSFTL reduces the number of erase operations and increases the number of switch merges, and consequently decreases garbage collection overhead.
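The merge operations contrasted in the abstract can be illustrated with a small sketch. The following is a minimal, hypothetical model of how a hybrid-mapping FTL might decide between a switch merge (a fully, sequentially written log block simply replaces its data block, with no page copies) and a partial merge (the remaining valid pages of the data block are first copied into the log block). `Block`, `choose_merge`, and the 64-page block size are illustrative assumptions, not the DSFTL implementation.

```python
PAGES_PER_BLOCK = 64

class Block:
    def __init__(self):
        # pages[i] holds the logical page number (LPN) written at offset i, or None if free
        self.pages = [None] * PAGES_PER_BLOCK

def choose_merge(log_block, data_block):
    """Return ('switch', 0) when the log block is completely written in order,
    otherwise ('partial', n) where n is the number of valid pages that must be
    copied from the data block before it can be erased.
    For brevity, every non-sequential case is treated as a partial merge here;
    a real hybrid FTL would also need full merges for out-of-order log blocks."""
    in_order = all(
        lpn is not None and lpn % PAGES_PER_BLOCK == off
        for off, lpn in enumerate(log_block.pages)
    )
    if in_order:
        return "switch", 0
    copies = sum(
        1
        for off in range(PAGES_PER_BLOCK)
        if log_block.pages[off] is None and data_block.pages[off] is not None
    )
    return "partial", copies

if __name__ == "__main__":
    log, data = Block(), Block()
    for i in range(PAGES_PER_BLOCK):
        log.pages[i] = i              # purely sequential overwrite of logical pages 0..63
    print(choose_merge(log, data))    # -> ('switch', 0): no page copies needed
```

The sketch makes the cost difference concrete: a switch merge costs one erase and zero copies, while a partial merge additionally copies every page that is still valid only in the data block.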


Solid state drives (SSDs) have emerged as faster and more reliable storage devices over the last few years. Their intrinsic characteristics make them more efficient than traditional storage media such as hard disk drives (HDDs). Issues such as write amplification, however, degrade the performance and lifespan of an SSD. This issue is handled by garbage collection (GC) algorithms, which supply free blocks for serving the writes made to flash-based SSDs and thus reduce unnecessary extra writes. The LRU/FIFO, Greedy, Windowed Greedy, and d-choices algorithms have been described for lowering write amplification under incoming writes of differing nature. The performance of these GC algorithms varies with factors such as predefined hot/cold data separation, the hotness of the data, the uniform or non-uniform nature of incoming writes, the GC window size, and the number of pages per block of the flash memory package. Finally, the number of write frontiers used can dictate the separation of hot and cold data and improve the performance of a GC algorithm.
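As a rough illustration of two of the victim-selection policies named above (not the paper's code): the greedy policy evicts the block with the fewest valid pages drive-wide, while d-choices samples only d random blocks and evicts the best of them, trading a little extra write amplification for much cheaper bookkeeping. The names `valid_counts` and `d` are illustrative.

```python
import random

def greedy_victim(valid_counts):
    """Greedy GC: evict the block holding the fewest valid pages."""
    return min(range(len(valid_counts)), key=lambda b: valid_counts[b])

def d_choices_victim(valid_counts, d=4, rng=random):
    """d-choices GC: sample d random blocks and evict the best of them.
    d = 1 degenerates to random eviction; large d approaches greedy."""
    candidates = rng.sample(range(len(valid_counts)), d)
    return min(candidates, key=lambda b: valid_counts[b])

# Example: 8 blocks with the given numbers of valid pages.
valid = [60, 12, 45, 3, 64, 30, 22, 50]
print(greedy_victim(valid))          # -> 3 (only 3 valid pages to copy)
print(d_choices_victim(valid, d=3))  # the best of 3 randomly sampled blocks
```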



Computing ◽  
2011 ◽  
Vol 94 (1) ◽  
pp. 21-68 ◽  
Author(s):  
Nils Fisher ◽  
Zhen He ◽  
Mitzi McCarthy


2021 ◽  
Vol 11 (14) ◽  
pp. 6623
Author(s):  
Chi-Hsiu Su ◽  
Chin-Hsien Wu

Compared with traditional hard-disk drives (HDDs), solid-state drives (SSDs), which adopt NAND flash memory, have become the most popular storage devices. However, when the free space in NAND flash memory runs low, garbage collection is triggered to reclaim free space. Garbage collection involves a large amount of data being rewritten and time-consuming erase operations, which can reduce the performance of NAND flash memory. Therefore, DRAM is usually added to NAND flash memory as a cache to store frequently used data. Typical cache methods mainly exploit temporal and spatial locality to keep frequently used data in the cache as long as possible. We find, however, that beyond temporal and spatial locality there are also certain associations between the accessed data. We therefore suggest that a cache policy should consider not only temporal/spatial locality but also the association relationships between accessed data in order to improve the cache hit ratio. In this paper, we propose a cache policy based on request association analysis for reliable NAND-based storage systems. According to the experimental results, the cache hit ratio of the proposed method increases significantly when compared with typical cache methods.
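A minimal sketch of the idea, not the authors' algorithm: an ordinary LRU page cache extended with a simple request-association table, so that accessing page A also prefetches pages that have historically followed A. The class name, the association counter, and the threshold are illustrative assumptions.

```python
from collections import OrderedDict, defaultdict

class AssociationCache:
    def __init__(self, capacity=1024, assoc_threshold=3):
        self.capacity = capacity
        self.cache = OrderedDict()                             # page -> data, in LRU order
        self.follows = defaultdict(lambda: defaultdict(int))   # A -> {B: times B followed A}
        self.prev = None
        self.assoc_threshold = assoc_threshold

    def _insert(self, page, data):
        self.cache[page] = data
        self.cache.move_to_end(page)                # most recently used
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)          # evict the LRU page

    def access(self, page, read_from_flash):
        # Record the association: `page` followed the previous request.
        if self.prev is not None:
            self.follows[self.prev][page] += 1
        self.prev = page

        hit = page in self.cache
        data = self.cache[page] if hit else read_from_flash(page)
        self._insert(page, data)

        # Prefetch strongly associated pages that are not yet cached.
        for nxt, cnt in self.follows[page].items():
            if cnt >= self.assoc_threshold and nxt not in self.cache:
                self._insert(nxt, read_from_flash(nxt))
        return hit
```

In a real system the association table would itself have to be bounded and aged, but the sketch shows how association information can raise the hit ratio beyond what pure temporal locality (the LRU order alone) provides.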



Author(s):  
Shiying Zhou ◽  
Minghui Zheng ◽  
Xu Chen ◽  
Masayoshi Tomizuka

Nowadays, despite the growing adoption of solid state drives, hard disk drives (HDDs) are still used extensively as cost-effective and reliable data storage. In addition to their usual application in desktops and laptops, HDDs have become the primary storage medium for data centers. The track-following control task in an HDD requires the read/write head to be positioned over the data track center with nano-scale accuracy, with an error tolerance of about 7 nm. The dual-stage HDD is therefore introduced to enhance control performance, with extended bandwidth and improved disturbance rejection.
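For intuition only, here is a toy discrete-time sketch of the dual-stage idea: the head position is the sum of the VCM and micro-actuator displacements, the VCM handles the large, slow motion, and the limited-stroke micro-actuator cancels the fast residual error. The gains, the stroke limit, and the function name are placeholders, not the authors' controller design.

```python
def simulate_dual_stage(reference=1.0, steps=500):
    """Toy dual-stage track-following loop: head = VCM + micro-actuator."""
    vcm_pos, ma_pos = 0.0, 0.0
    history = []
    for _ in range(steps):
        head = vcm_pos + ma_pos          # the two actuator displacements add
        pes = reference - head           # position error signal from the servo
        # Coarse stage: slow correction with a large stroke (VCM).
        vcm_pos += 0.02 * (reference - vcm_pos)
        # Fine stage: fast correction of the residual error, with a
        # limited stroke (saturates at +/- 0.05 units here).
        ma_pos = max(-0.05, min(0.05, ma_pos + 0.5 * pes))
        history.append(head)
    return history

print(simulate_dual_stage()[-1])         # settles near the reference track
```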



Author(s):  
Jun Hirota ◽  
Ken Hoshino ◽  
Tsukasa Nakai ◽  
Kohei Yamasue ◽  
Yasuo Cho

Abstract In this paper, the authors report their successful attempt to acquire scanning nonlinear dielectric microscopy (SNDM) signals around the floating-gate and channel structures of a 3D flash memory device, using a custom-built SNDM tool with a super-sharp diamond tip. The report includes details of the SNDM measurement and the sample preparation process. Using super-sharp diamond tips with a radius of less than 5 nm to achieve very high spatial resolution, the authors successfully obtained SNDM signals of the floating gate in high contrast to the background in the selected areas. They deduced the minimum spatial resolution and obtained clear evidence that the differences in diffusion length of the n-type impurity among the channels are less than 21 nm. They therefore concluded that SNDM is one of the most powerful analytical techniques for evaluating the carrier distribution in superfine, three-dimensionally structured memory devices.



Coatings ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 729
Author(s):  
Chanida Puttichaem ◽  
Guilherme P. Souza ◽  
Kurt C. Ruthe ◽  
Kittipong Chainok

A novel, high-throughput method to characterize the chemistry of ultra-thin diamond-like carbon (DLC) films is discussed. The method uses surface-sensitive SEM/EDX to provide substrate-specific, semi-quantitative composition of the silicon nitride/DLC stack, at the Angstrom level, for protective films used extensively in the hard disk drive industry. SEM/EDX output is correlated to TEM to provide direct, gauge-capable film thickness information using multiple regression models that make predictions based on film constituents. The best model uses the N/Si ratio in the films instead of separate Si and N contributions. The topography of the substrate/film after undergoing wear is described correlatively and compositionally based on chemical changes detected via the SEM/EDX method, without the need for tedious cross-sectional workflows. Wear-track regions of the substrate have a film depleted of carbon, and of Si and N in the most severe cases, also revealing iron oxide formation. Analysis of film composition variations around industry-level thicknesses reveals a complex interplay between oxygen, silicon, and nitrogen, which is reflected mathematically in the regression models and also provides valuable insights into the as-deposited physics of the film.
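A hedged sketch of the kind of regression described above: an ordinary least-squares fit that predicts TEM-measured film thickness from SEM/EDX composition, with the N/Si ratio entering as a single predictor rather than separate Si and N terms. The specific feature set (carbon, oxygen, N/Si) and the function names are assumptions for illustration; the paper's exact model terms and data are not reproduced here.

```python
import numpy as np

def fit_thickness_model(carbon, oxygen, n_si_ratio, tem_thickness):
    """Ordinary least squares: thickness ~ 1 + C + O + (N/Si)."""
    X = np.column_stack([np.ones(len(carbon)), carbon, oxygen, n_si_ratio])
    coef, *_ = np.linalg.lstsq(X, np.asarray(tem_thickness, dtype=float), rcond=None)
    return coef

def predict_thickness(coef, carbon, oxygen, n_si_ratio):
    """Predict thickness for one new SEM/EDX measurement."""
    return coef @ np.array([1.0, carbon, oxygen, n_si_ratio])
```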



Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 847
Author(s):  
Sopanhapich Chum ◽  
Heekwon Park ◽  
Jongmoo Choi

This paper proposes a new resource management scheme that supports SLAs (Service-Level Agreements) in a big-data distributed storage system. It makes use of two mapping modes, an isolated mode and a shared mode, in an adaptive manner. Specifically, to ensure the different QoS (Quality of Service) requirements of clients, it isolates storage devices so that urgent clients are not interfered with by normal clients. When there is no urgent client, it switches to the shared mode so that normal clients can access all storage devices, thus achieving full performance. To provide this adaptability effectively, it devises two techniques, called logical cluster and normal inclusion. In addition, this paper explores how to exploit heterogeneous storage devices, HDDs (Hard Disk Drives) and SSDs (Solid State Drives), to support the SLA. It examines two use cases and observes that separating data and metadata onto different devices has a positive impact on the performance-per-cost ratio. Evaluation results from a real implementation show that the proposal can satisfy the requirements of diverse clients and can provide better performance than a fixed-mapping-based scheme.
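A minimal sketch, under assumptions, of how the adaptive mapping described above could behave: while any urgent client is active the scheme stays in the isolated mode (urgent clients see only their reserved devices, normal clients see the rest), and once no urgent client remains it falls back to the shared mode, where normal clients may use every device. The class and method names are illustrative, not the paper's logical-cluster/normal-inclusion implementation.

```python
class AdaptiveMapper:
    def __init__(self, devices, reserved_for_urgent):
        self.devices = list(devices)              # all storage devices in the cluster
        self.reserved = set(reserved_for_urgent)  # devices set aside for urgent (SLA) clients
        self.urgent_active = 0

    def client_arrived(self, urgent):
        if urgent:
            self.urgent_active += 1

    def client_left(self, urgent):
        if urgent:
            self.urgent_active -= 1

    def devices_for(self, urgent):
        """Isolated mode while any urgent client is present; shared mode otherwise."""
        if self.urgent_active > 0:
            if urgent:
                return [d for d in self.devices if d in self.reserved]
            return [d for d in self.devices if d not in self.reserved]
        return list(self.devices)

# Example: two SSDs reserved for urgent clients, two HDDs for normal traffic.
mapper = AdaptiveMapper(["ssd0", "ssd1", "hdd0", "hdd1"], ["ssd0", "ssd1"])
mapper.client_arrived(urgent=True)
print(mapper.devices_for(urgent=False))   # ['hdd0', 'hdd1']  (isolated mode)
mapper.client_left(urgent=True)
print(mapper.devices_for(urgent=False))   # all four devices  (shared mode)
```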



2020 ◽  
Vol 248 ◽  
pp. 119216
Author(s):  
Laura Talens Peiró ◽  
Alejandra Castro Girón ◽  
Xavier Gabarrell i Durany

