Handling memory cache policy with integer points counting

Author(s):  
Philippe Clauss
4OR, 2020
Author(s):  
Michele Conforti ◽  
Marianna De Santis ◽  
Marco Di Summa ◽  
Francesco Rinaldi

Abstract: We consider the integer points in a unimodular cone $K$ ordered by a lexicographic rule defined by a lattice basis. To each integer point $x$ in $K$ we associate a family of inequalities (lex-inequalities) that define the convex hull of the integer points in $K$ that are not lexicographically smaller than $x$. The family of lex-inequalities contains the Chvátal–Gomory cuts, but neither contains nor is contained in the family of split cuts. This provides a finite cutting plane method to solve the integer program $\min\{cx : x \in S \cap \mathbb{Z}^n\}$, where $S \subset \mathbb{R}^n$ is a compact set and $c \in \mathbb{Z}^n$. We analyze the number of iterations of our algorithm.
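The abstract describes a finite cutting plane method. As a rough illustration of the surrounding loop only, here is a generic cutting-plane skeleton in Python; the `separate` callback is a hypothetical placeholder for a cut family, and this sketch does not reproduce the paper's lex-inequality separation.

```python
# Generic cutting-plane skeleton: solve the LP relaxation, and while a
# violated cut exists, add it and resolve. The cut family (here the
# `separate` callback) is what distinguishes concrete methods such as
# Chvatal-Gomory cuts or the paper's lex-inequalities.
import numpy as np
from scipy.optimize import linprog

def cutting_plane(c, A_ub, b_ub, bounds, separate, max_iter=100):
    """Minimize c @ x over {x : A_ub @ x <= b_ub}, tightened by cuts.

    separate(x) must return a violated inequality (a, b) with a @ x > b
    for the current LP optimum x, or None if no cut is needed.
    """
    A, b = np.asarray(A_ub, float), np.asarray(b_ub, float)
    for _ in range(max_iter):
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success:
            return None                # relaxation infeasible or unbounded
        cut = separate(res.x)
        if cut is None:                # no violated cut: optimum reached
            return res.x
        a_new, b_new = cut             # append the cut and resolve
        A = np.vstack([A, a_new])
        b = np.append(b, b_new)
    raise RuntimeError("iteration limit reached")
```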


2021, Vol. 17 (2), pp. 1-45
Author(s):  
Cheng Pan ◽  
Xiaolin Wang ◽  
Yingwei Luo ◽  
Zhenlin Wang

Due to the large data volume and low latency requirements of modern web services, an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used (LRU) or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache: inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance an existing cache model, the Average Eviction Time (AET) model, so that it can model a KV cache. We then apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. We also explore the diurnal behavior of a KV store and exploit long-term reuse, replacing the original passive eviction mechanism with an automatic dump/load mechanism to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%–52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (a further 1.1%–5.5%) when dynamically switching policies between pRedis and HC.
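To make the penalty-and-locality idea concrete, here is a hypothetical Python sketch of an eviction policy that weighs an estimated reuse likelihood against a measured miss penalty. The class name, the ad hoc 1/(1+age) locality proxy, and the cost formula are illustrative assumptions, not the AET-based model that pRedis actually uses.

```python
# Illustrative penalty- and locality-aware eviction (NOT pRedis's model):
# each key tracks its last access time and an observed miss penalty
# (e.g., backend query latency); eviction removes the key whose expected
# cost of losing it -- reuse likelihood times penalty -- is lowest.
import time

class PenaltyAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}          # key -> value
        self.last_access = {}   # key -> timestamp of last access
        self.penalty = {}       # key -> measured miss penalty (seconds)

    def _eviction_cost(self, key, now):
        # Crude locality proxy: recently used keys are assumed more
        # likely to be reused soon (an assumption for illustration).
        age = now - self.last_access[key]
        reuse_likelihood = 1.0 / (1.0 + age)
        return reuse_likelihood * self.penalty[key]

    def put(self, key, value, miss_penalty):
        now = time.monotonic()
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=lambda k: self._eviction_cost(k, now))
            for d in (self.data, self.last_access, self.penalty):
                d.pop(victim)
        self.data[key] = value
        self.last_access[key] = now
        self.penalty[key] = miss_penalty

    def get(self, key):
        if key in self.data:
            self.last_access[key] = time.monotonic()
            return self.data[key]
        return None  # miss: caller fetches from backend, measures penalty
```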


2021
Author(s):  
Otabek Gulomov ◽  
Sadulla Shodiev

1993, Vol. 71 (1), pp. 143-179
Author(s):  
W. Duke ◽  
Z. Rudnick ◽  
P. Sarnak

2021, Vol. 17 (3), pp. 1-35
Author(s):  
Juncheng Yang ◽  
Yao Yue ◽  
K. V. Rashmi

Modern web services use in-memory caching extensively to increase throughput and reduce latency. Several workload analyses of production systems have fueled research on improving the effectiveness of in-memory caching systems, but coverage is still sparse given the wide spectrum of industrial cache use cases. In this work, we significantly further the understanding of real-world cache workloads by collecting production traces from 153 in-memory cache clusters at Twitter, sifting through over 80 TB of data, and sometimes interpreting the workloads in the context of the business logic behind them. We perform a comprehensive analysis to characterize cache workloads based on traffic pattern, time-to-live (TTL), popularity distribution, and size distribution. A fine-grained view of different workloads uncovers the diversity of use cases: many are far more write-heavy or more skewed than previously shown, and some display unique temporal patterns. We also observe that TTL is an important and sometimes defining parameter of cache working sets. Our simulations show that the ideal replacement strategy in production caches can be surprising; for example, FIFO works best for a large number of workloads.
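The FIFO finding lends itself to a small trace-driven experiment. Below is a minimal Python sketch that replays a (timestamp, key, ttl) trace against a FIFO cache with TTL expiry and reports the miss ratio; the trace format here is an assumption for illustration, not the schema of the released Twitter traces.

```python
# Trace-driven FIFO-with-TTL simulation sketch: objects expire by TTL and
# are otherwise evicted in insertion order, never reordered on access.
from collections import OrderedDict

def simulate_fifo_ttl(trace, capacity):
    """trace: iterable of (timestamp, key, ttl); returns the miss ratio."""
    cache = OrderedDict()              # key -> expiry time; order = FIFO
    hits = misses = 0
    for ts, key, ttl in trace:
        expiry = cache.get(key)
        if expiry is not None and expiry > ts:
            hits += 1                  # hit: FIFO does not reorder on access
            continue
        misses += 1
        cache.pop(key, None)           # drop an expired copy, if present
        if len(cache) >= capacity:
            cache.popitem(last=False)  # evict the oldest inserted key
        cache[key] = ts + ttl          # (re)insert at the back of the queue
    return misses / (hits + misses)

# Tiny synthetic example: three keys cycling through a 2-slot cache.
trace = [(t, f"k{t % 3}", 10) for t in range(30)]
print(simulate_fifo_ttl(trace, capacity=2))
```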


1968, Vol. 14 (2), pp. 141-152
Author(s):  
A. Yudin

2009, Vol. 138 (1), pp. 1-23
Author(s):  
M. C. Lettington
