Near optimality of the discrete persistent access caching algorithm

2005 ◽  
Vol DMTCS Proceedings vol. AD,... (Proceedings) ◽  
Author(s):  
Predrag R. Jelenković ◽  
Xiaozhu Kang ◽  
Ana Radovanović

Renewed interest in caching techniques stems from their application to improving the performance of the World Wide Web, where storing popular documents in proxy caches closer to end-users can significantly reduce document download latency and overall network congestion. Rules used to update the collection of frequently accessed documents inside a cache are referred to as cache replacement algorithms. Because many different factors influence Web performance, the most desirable attributes of a cache replacement scheme are low complexity and high adaptability to variability in Web access patterns. These properties are the primary reason why most practical Web caching algorithms are based on the easily implemented Least-Recently-Used (LRU) cache replacement heuristic. In our recent paper, Jelenković and Radovanović (2004c), we introduced a new algorithm, termed Persistent Access Caching (PAC), that, in addition to desirable low complexity and adaptability, somewhat surprisingly achieves nearly optimal performance for the independent reference model and generalized Zipf's law request probabilities. Two drawbacks of the PAC algorithm are its dependence on the request arrival times and its variable storage requirements. In this paper, we resolve these problems by introducing a discrete version of the PAC policy (DPAC) that, after a cache miss, places the requested document in the cache only if it is requested at least $k$ times among the last $m$, $m \geq k$, requests. However, from a mathematical perspective, due to the inherent coupling of the replacement decisions for different documents, the DPAC algorithm is considerably harder to analyze than the original PAC policy. In this regard, we develop a new analytical technique for estimating the performance of the DPAC rule.
Using our analysis, we show that this algorithm is close to optimal even for small values of $k$ and $m$, and, therefore, adds negligible additional storage and processing complexity in comparison to the ordinary LRU policy.
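The $k$-out-of-$m$ admission rule described in the abstract can be sketched in a few lines. The following toy implementation (class and parameter names are ours, not the paper's) combines the DPAC-style admission filter with ordinary LRU eviction:

```python
from collections import OrderedDict, deque

class DPACCache:
    """Sketch of the DPAC admission rule: on a miss, a document enters
    the cache only if it appeared at least k times among the last m
    requests. Eviction here is plain LRU; names are illustrative."""

    def __init__(self, capacity, k, m):
        assert m >= k >= 1
        self.capacity = capacity
        self.k = k
        self.cache = OrderedDict()     # doc -> payload, in LRU order
        self.window = deque(maxlen=m)  # ids of the last m requests

    def request(self, doc):
        """Process one request; return True on a cache hit."""
        self.window.append(doc)
        if doc in self.cache:
            self.cache.move_to_end(doc)  # refresh LRU position
            return True
        # miss: admit only if doc occurs >= k times in the window
        if sum(1 for d in self.window if d == doc) >= self.k:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict LRU victim
            self.cache[doc] = True
        return False
```

With $k = m = 1$ the rule degenerates to ordinary LRU; larger $k$ filters out one-off requests at the cost of the small per-document request window.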

2021 ◽  
Vol 17 (2) ◽  
pp. 1-45
Author(s):  
Cheng Pan ◽  
Xiaolin Wang ◽  
Yingwei Luo ◽  
Zhenlin Wang

Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.


Author(s):  
A. V. Vishnekov ◽  
E. M. Ivanova

The paper investigates increasing the performance of computing systems by improving the efficiency of cache memory, and analyzes the efficiency indicators of replacement algorithms. We show the necessity of automated or automatic means for cache-memory tuning under current conditions of program-code execution, namely dynamic control of cache replacement algorithms: swapping the current replacement algorithm for one more effective under the current computation conditions. We develop methods for caching-policy control based on classifying the program type: cyclic, sequential, locally-point, or mixed. We suggest a procedure for selecting an effective replacement algorithm using decision-support methods based on current caching statistics. The paper analyzes existing cache replacement algorithms and proposes a decision-making procedure for selecting an effective one based on methods of ranking alternatives, preferences, and hierarchy analysis. The critical number of cache hits, the average data-query execution time, and the average cache latency are selected as the indicators that trigger the swapping procedure for the current replacement algorithm. The main advantage of the proposed approach is its universality: it assumes an adaptive decision-making procedure for selecting the effective replacement algorithm, and allows variability in the criteria used to evaluate the replacement algorithms, their efficiency, and their suitability for different types of program code. Dynamically swapping the replacement algorithm for a more efficient one during program execution improves the performance of the computer system.
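The trigger-and-swap loop described above can be sketched as follows. This is a minimal illustration using a single hit-ratio indicator over a sliding window; the paper combines several indicators and multi-criteria decision methods, and all names here are ours:

```python
class AdaptiveCacheController:
    """Toy sketch of dynamic replacement-policy switching: track the
    hit ratio over a window of requests and swap the active policy
    when the ratio drops below a threshold."""

    def __init__(self, policies, window=100, threshold=0.5):
        self.policies = policies  # candidate policy names, e.g. ['LRU', 'LFU']
        self.active = 0           # index of the currently active policy
        self.window = window
        self.threshold = threshold
        self.hits = 0
        self.requests = 0

    def record(self, hit):
        """Record one cache access; swap policies at each window boundary
        if the observed hit ratio fell below the threshold."""
        self.requests += 1
        self.hits += int(hit)
        if self.requests >= self.window:
            if self.hits / self.requests < self.threshold:
                self.active = (self.active + 1) % len(self.policies)
            self.hits = self.requests = 0

    def current_policy(self):
        return self.policies[self.active]
```

A real controller would, as the paper suggests, also weigh average query time and cache latency, and rank candidate policies rather than cycling through them round-robin.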


Geophysics ◽  
2012 ◽  
Vol 77 (1) ◽  
pp. H9-H18 ◽  
Author(s):  
Giulio Vignoli ◽  
Rita Deiana ◽  
Giorgio Cassiani

The reconstruction of the GPR velocity vertical profile from vertical radar profile (VRP) traveltime data is a problem with a finite number of measurements and imprecise data, analogous to similar seismic techniques, such as the shallow down-hole test used for S-wave velocity profiling or the vertical seismic profiling (VSP) commonly used in deeper exploration. The uncertainty in data accuracy and the error amplification inherent in deriving velocity estimates from gradients of arrival times make this an example of an ill-posed inverse problem. In the framework of Tikhonov regularization theory, ill-posedness can be tackled by introducing a regularizing functional (stabilizer). The role of this functional is to stabilize the numerical solution by incorporating the appropriate a priori assumptions about the geometrical and/or physical properties of the solution. One of these assumptions could be the existence of sharp boundaries separating rocks with different physical properties. We apply a method based on the minimum support stabilizer to the VRP traveltime inverse problem. This stabilizer makes it possible to produce more accurate profiles of geological targets with compact structure. We compare more traditional inversion results with our proposed compact reconstructions. Using synthetic examples, we demonstrate that the minimum support stabilizer allows an improved recovery of the profile shape and velocity values of blocky targets. We also study the stabilizer behavior with respect to different noise levels and different choices of the reference model. The proposed approach is then applied to real cases where VRPs have been used to derive moisture content profiles as a function of depth. In these real cases, the derived sharper profiles are consistent with other evidence, such as GPR zero-offset profiles, GPR reflections, and known locations of the water table.
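For concreteness, the Tikhonov framework mentioned above minimizes a parametric functional that adds a weighted stabilizer to the data misfit; assuming the standard minimum support form (following Portniaguine and Zhdanov; the abstract does not spell out the exact expression used), this reads:

```latex
% Tikhonov parametric functional: data misfit plus weighted stabilizer
P^{\alpha}(m) = \varphi(m) + \alpha\, s(m),
\qquad \varphi(m) = \lVert A(m) - d \rVert^{2},

% Minimum support stabilizer: penalizes the volume where the model m
% departs from the reference model m_{\mathrm{ref}}; the small focusing
% parameter e > 0 keeps the functional differentiable
s_{\mathrm{MS}}(m) = \int_{V} \frac{(m - m_{\mathrm{ref}})^{2}}
{(m - m_{\mathrm{ref}})^{2} + e^{2}}\, dv .
```

Because each point contributes close to either 0 or 1, $s_{\mathrm{MS}}$ approximates the support volume of the anomalous model, which is what favors compact, sharply bounded reconstructions over smooth ones.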


2021 ◽  
Author(s):  
Muhammad Jaseemuddin

In this thesis, we propose a Cluster-Based Cache Replacement (CBR) scheme for 5G networks to reduce backhaul traffic. We developed the scheme based on an understanding of how the performance of a cache placement algorithm degrades: whenever the file request pattern diverges from the file popularity distribution, for example when unpopular files become popular or vice versa, the caching system can be expected to suffer. We address this problem with a cache replacement scheme based on the idea of the Least Frequently Used (LFU) replacement policy, but we consider only recent requests in order to avoid cache pollution. We evaluated the performance of CBR through simulation and compared it with LRU, which is widely used as a cache replacement technique in practice. We simulated three different configurations of the LRU scheme in a cluster-based mobile network model. Our simulation results show that CBR outperforms LRU: it reduces the miss ratio from 86% to 76% and the backhaul traffic from 3.67×10⁵ to 3.47×10⁵ MB with a cache size of 10%. CBR achieves this superior performance by making fewer replacement decisions and storing more files in the cache.
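The idea of LFU restricted to recent requests can be sketched as follows; frequencies are counted only over a sliding window, so stale popularity does not pollute the cache. This is our illustrative toy, not the thesis's CBR scheme, and the names are ours:

```python
from collections import deque, Counter

class WindowedLFUCache:
    """Sketch of LFU over recent requests only: on a miss with a full
    cache, evict the cached file that is least frequent among the last
    `window` requests, then admit the new file."""

    def __init__(self, capacity, window):
        self.capacity = capacity
        self.recent = deque(maxlen=window)  # ids of recent requests
        self.cache = set()

    def request(self, doc):
        """Process one request; return True on a cache hit."""
        self.recent.append(doc)
        if doc in self.cache:
            return True
        if len(self.cache) >= self.capacity:
            counts = Counter(self.recent)
            # evict the cached file least requested in the recent window
            victim = min(self.cache, key=lambda d: counts[d])
            self.cache.remove(victim)
        self.cache.add(doc)
        return False
```

Classic LFU counts over the whole history, so a file that was popular yesterday can occupy the cache long after interest has moved on; the window bounds how long such stale counts matter.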

