An Efficient Cache Management Scheme of Flash Translation Layer for Large Size Flash Memory Drives

2015, Vol 20 (11), pp. 31-38. Author(s): Hwan-Pil Choi, Yong-Seok Kim
2014, Vol 651-653, pp. 1000-1003. Author(s): Yin Yang, Wen Yi Li, Kai Wang

In this paper, we propose a novel and efficient flash translation layer scheme called BLTF (Block Link-Table FTL). In BLTF, every block can service update requests, so updates can be performed on any physical block. By unifying log blocks and data blocks, the scheme avoids uneven erasing and low block utilization. Invalid blocks are reclaimed promptly and in batches, which avoids merging log blocks with data blocks. Finally, BLTF is evaluated by simulation, which demonstrates that it effectively solves the targeted data storage problems. Comparison with other algorithms shows that BLTF greatly prolongs the service life of flash devices and improves the efficiency of block erase operations.
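As a rough illustration of the idea, the following is a minimal sketch of a link-table FTL in the spirit of BLTF. All names and data structures here are assumptions for illustration, not the paper's actual layout: any free physical block may absorb updates (allocated least-erased-first to keep wear even), a per-logical-block link table records the chain of physical blocks holding successive versions, and reclamation frees all superseded blocks in a chain at once instead of merging log blocks into data blocks.

```python
PAGES_PER_BLOCK = 4  # illustrative; real blocks hold 64+ pages

class LinkTableFTL:
    """Hypothetical sketch of a block link-table FTL (not the paper's design)."""

    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))  # every block may serve updates
        self.link_table = {}                        # logical block -> chain of physical blocks
        self.fill = {}                              # physical block -> pages written so far
        self.erase_count = [0] * num_blocks

    def _allocate(self):
        # Pick the least-erased free block so erases stay evenly spread.
        blk = min(self.free_blocks, key=lambda b: self.erase_count[b])
        self.free_blocks.remove(blk)
        self.fill[blk] = 0
        return blk

    def write(self, logical_block):
        # Updates append to the newest block in the chain; when it is full,
        # any free physical block can be linked in to take further updates.
        chain = self.link_table.setdefault(logical_block, [])
        if not chain or self.fill[chain[-1]] == PAGES_PER_BLOCK:
            chain.append(self._allocate())
        self.fill[chain[-1]] += 1

    def reclaim(self, logical_block):
        # Reclaim all superseded blocks in the chain at once, with no
        # log-block/data-block merge: only the newest block stays live.
        chain = self.link_table.get(logical_block, [])
        for blk in chain[:-1]:
            self.erase_count[blk] += 1
            del self.fill[blk]
            self.free_blocks.append(blk)
        self.link_table[logical_block] = chain[-1:]
```

Five writes to one logical block span two physical blocks; reclaiming then erases and frees the older block while the newest stays mapped.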


2014, Vol 22 (12), pp. 2779-2792. Author(s): Liang Shi, Jianhua Li, Qingan Li, Chun Jason Xue, Chengmo Yang, ...

2021, Vol 17 (2), pp. 1-45. Author(s): Cheng Pan, Xiaolin Wang, Yingwei Luo, Zhenlin Wang

Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain an even lower average latency (a further 1.1%∼5.5% reduction) when dynamically switching policies between pRedis and HC.
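To make the penalty-and-locality idea concrete, here is a hedged sketch of an eviction policy that weights a hyperbolic-style locality estimate (access frequency over age) by a per-key miss penalty. The class name, the scoring formula, and the logical clock are all illustrative assumptions; this is not pRedis's actual algorithm, which is driven by the Average Eviction Time model.

```python
class PenaltyAwareCache:
    """Illustrative sketch: evict the key whose locality x penalty score is lowest.
    Not the pRedis algorithm; formula and fields are assumptions."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0        # logical time, ticks once per operation
        self.store = {}       # key -> value
        self.meta = {}        # key -> [hits, insert_tick, miss_penalty]

    def _score(self, key):
        hits, inserted, penalty = self.meta[key]
        age = max(self.clock - inserted, 1)
        # Hyperbolic-style locality estimate, weighted by the cost of a miss
        # (e.g., time to recompute or refetch the value from the backend).
        return (hits / age) * penalty

    def get(self, key):
        self.clock += 1
        if key in self.store:
            self.meta[key][0] += 1      # record the reuse
            return self.store[key]
        return None                     # miss: caller pays the penalty

    def put(self, key, value, miss_penalty):
        self.clock += 1
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self._score)
            del self.store[victim]
            del self.meta[victim]
        self.store[key] = value
        self.meta[key] = [1, self.clock, miss_penalty]
```

With capacity 2, a frequently reused key with a high reload cost survives eviction while a cold, cheap-to-refetch key is dropped first.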


2013, Vol 464, pp. 365-368. Author(s): Ji Jun Hung, Kai Bu, Zhao Lin Sun, Jie Tao Diao, Jian Bin Liu

This paper presents a new SSD architecture based on the NVMe (Non-Volatile Memory Express) protocol. The NVMe SSD promises to overcome the bottleneck of the conventional SATA and SAS interfaces. The aim is to present a PCIe NAND flash memory card that uses NAND flash chips as the storage media. The paper analyzes the PCIe protocol and the characteristics of the SSD controller, and then gives a detailed design of the PCIe SSD, which mainly comprises the PCIe port and the Flash Translation Layer.
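The Flash Translation Layer mentioned above hides NAND's no-overwrite constraint from the host. Below is a minimal sketch of the address-translation core of a page-mapping FTL of the kind an SSD controller embeds; the structures are illustrative assumptions, not the paper's design. Every host write goes to a fresh physical page out of place, and the superseded page is marked invalid for later garbage collection.

```python
class PageMappingFTL:
    """Hypothetical sketch of page-level logical-to-physical translation
    in an SSD controller (not the paper's actual design)."""

    def __init__(self, num_pages):
        self.l2p = {}                        # logical page -> physical page
        self.free = list(range(num_pages))   # erased, writable physical pages
        self.invalid = set()                 # superseded pages awaiting GC
        self.flash = {}                      # physical page -> data (simulated NAND)

    def write(self, lpn, data):
        # NAND pages cannot be overwritten in place: program a fresh page
        # and invalidate the old copy instead.
        ppn = self.free.pop(0)
        if lpn in self.l2p:
            self.invalid.add(self.l2p[lpn])
        self.l2p[lpn] = ppn
        self.flash[ppn] = data

    def read(self, lpn):
        # Translate the host's logical page number, then fetch the data.
        ppn = self.l2p.get(lpn)
        return None if ppn is None else self.flash[ppn]
```

Rewriting a logical page redirects the mapping to a new physical page, so a read always returns the latest version while the stale page waits for garbage collection.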


2010, Vol 10 (1), pp. 68-77. Author(s): Hak-Chul Kim, Yong-Hun Park, Jong-Hyeong Yun, Dong-Min Seo, Suk-Il Song, ...
