Efficient Cache Management Protocol Based on Data Locality in Mobile DBMSs

Author(s):  
IlYoung Chung ◽  
JeHyok Ryu ◽  
Chong-Sun Hwang

2021 ◽ 
Vol 17 (2) ◽  
pp. 1-45
Author(s):  
Cheng Pan ◽  
Xiaolin Wang ◽  
Yingwei Luo ◽  
Zhenlin Wang

Due to the large data volumes and low latency requirements of modern web services, an in-memory key-value (KV) cache (e.g., Redis or Memcached) is often an inevitable choice. The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used (LRU) or its approximations. However, the diversity of miss penalties distinguishes a KV cache from a hardware cache: inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also exhibit locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance an existing cache model, the Average Eviction Time (AET) model, so that it can model a KV cache. We then apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty in a quantitative manner to guide memory allocation and replacement in Redis. We also explore the diurnal behavior of a KV store and exploit long-term reuse, replacing the original passive eviction mechanism with an automatic dump/load mechanism to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and offers more quantitatively predictable performance. Moreover, dynamically switching policies between pRedis and HC yields a further 1.1%∼5.5% reduction in average latency.
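To make the penalty- and locality-aware idea concrete, the following minimal Python sketch scores each cached key by its hit rate weighted by miss penalty and evicts the lowest-scoring key. The class name `PenaltyAwareCache`, the `miss_penalty` parameter, and the hits-per-age scoring rule are illustrative assumptions, not the paper's exact formulation; pRedis itself derives its locality estimate from the enhanced AET model rather than raw hit counts.

```python
import time

class PenaltyAwareCache:
    """Evicts the key with the lowest penalty-weighted hit density (a sketch)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        # key -> [value, size, hits, insert_time, miss_penalty]
        self.entries = {}

    def _score(self, key):
        # Higher score = more valuable to keep. hits/age approximates
        # locality (reuse frequency); miss_penalty is the cost of
        # refetching the value from the backing database on a miss;
        # dividing by size favors entries that are cheap to keep.
        _, size, hits, ts, penalty = self.entries[key]
        age = max(time.time() - ts, 1e-6)
        return (hits / age) * penalty / size

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None  # caller fetches from the database, then calls put()
        entry[2] += 1    # record the hit for the locality estimate
        return entry[0]

    def put(self, key, value, size, miss_penalty):
        # Evict lowest-scoring keys until the new entry fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=self._score)
            self.used -= self.entries[victim][1]
            del self.entries[victim]
        self.entries[key] = [value, size, 0, time.time(), miss_penalty]
        self.used += size

# Illustrative usage: a value produced by an expensive database join
# carries a higher miss penalty, so it is kept longer than a cheap one.
cache = PenaltyAwareCache(capacity_bytes=1 << 20)
cache.put("user:42", b"...", size=128, miss_penalty=5.0)
```

A production deployment would also avoid the full scan in `put`; Redis, for instance, samples a few candidate keys per eviction rather than scoring every entry.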


Micromachines ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 1262
Author(s):  
Juan Fang ◽  
Zelin Wei ◽  
Huijing Yang

GPGPUs have gradually become mainstream acceleration components in high-performance computing, but the long latency of memory operations remains a bottleneck for GPU performance. In a GPU, threads are grouped into warps for scheduling and execution. The L1 data cache is small, and many warps share it, so it suffers heavy contention and frequent pipeline stalls. We propose Locality-Based Cache Management (LCM), combined with Locality-Based Warp Scheduling (LWS), to reduce cache contention and improve GPU performance. Each load instruction can be classified into one of three locality types: streaming locality (data used only once), intra-warp locality (data accessed multiple times within the same warp), and inter-warp locality (data accessed by different warps). According to a load instruction's locality, LCM bypasses the cache for streaming requests to improve cache utilization, extends inter-warp memory request coalescing to make full use of inter-warp locality, and combines with LWS to alleviate cache contention. Together, LCM and LWS effectively improve cache performance and thereby overall GPU performance: in our experimental evaluation, they achieve an average performance improvement of 26% over the baseline GPU.
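The per-load classification that drives these decisions can be sketched in a few lines of Python. This is an illustrative software model under stated assumptions, not the paper's hardware design: the name `LocalityClassifier`, the per-PC counters, and the majority-vote thresholds are invented for illustration; in a GPU, LCM would keep such state per load PC in hardware and feed the result to the bypass and coalescing logic.

```python
from collections import defaultdict

STREAMING, INTRA_WARP, INTER_WARP = "streaming", "intra-warp", "inter-warp"

class LocalityClassifier:
    """Classifies load instructions (by PC) into the three locality types."""

    def __init__(self):
        self.last_warp = {}  # cache line address -> warp id of last access
        # load PC -> [first-touch count, same-warp reuses, cross-warp reuses]
        self.counts = defaultdict(lambda: [0, 0, 0])

    def observe(self, load_pc, warp_id, line_addr):
        prev = self.last_warp.get(line_addr)
        if prev is None:
            self.counts[load_pc][0] += 1  # no reuse seen yet: looks streaming
        elif prev == warp_id:
            self.counts[load_pc][1] += 1  # re-referenced by the same warp
        else:
            self.counts[load_pc][2] += 1  # re-referenced by a different warp
        self.last_warp[line_addr] = warp_id

    def classify(self, load_pc):
        first, intra, inter = self.counts[load_pc]
        if inter >= max(first, intra):
            return INTER_WARP  # ties default to the cache-friendly choice
        if intra >= first:
            return INTRA_WARP
        return STREAMING

    def should_bypass_l1(self, load_pc):
        # Streaming data is used only once; caching it merely evicts lines
        # that intra-/inter-warp loads could still reuse, so route it
        # straight to L2/DRAM instead.
        return self.classify(load_pc) == STREAMING
```

Bypassing streaming requests in this way leaves the small L1 to the intra- and inter-warp loads that actually benefit from it.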


2018 ◽  
Vol 24 ◽  
pp. 33
Author(s):  
Chinelo Okigbo ◽  
Fatima Mohiuddin ◽  
Jesus Vargas ◽  
Edward Hamaty

VASA ◽  
2015 ◽  
Vol 44 (5) ◽  
pp. 381-386 ◽  
Author(s):  
Christian Uhl ◽  
Thomas Betz ◽  
Andrea Rupp ◽  
Markus Steinbauer ◽  
Ingolf Töpel

Abstract. Background: This pilot study was set up to examine the effects of a continuous postoperative wound infusion system with a local anaesthetic on perioperative pain and the consumption of analgesics. Patients and methods: We included 42 patients in this prospective observational pilot study. Patients were divided into two groups: Group 1 was treated in accordance with the WHO standard pain management protocol and, in addition, received continuous local wound infusion treatment; Group 2 was treated with analgesics in accordance with the WHO standard pain management protocol exclusively. Results: The study demonstrated a significantly reduced postoperative VAS score for stump pain in Group 1 for the first 5 days. Furthermore, opiate intake was significantly reduced in Group 1 (day 1, Group 1: 42.1 vs. Group 2: 73.5, p = 0.010; day 2, Group 1: 27.7 vs. Group 2: 52.5, p = 0.012; day 3, Group 1: 23.9 vs. Group 2: 53.5, p = 0.002; day 4, Group 1: 15.7 vs. Group 2: 48.3, p = 0.003; day 5, Group 1: 13.3 vs. Group 2: 49.9, p = 0.001). There were no significant differences between the two groups in phantom pain intensity at discharge, postoperative complications, or death. Conclusions: Continuous postoperative wound infusion with a local anaesthetic in combination with a standard pain management protocol can reduce both stump pain and opiate intake in patients who have undergone transfemoral amputation. Phantom pain was not significantly affected.

