An Intelligent Caching and Replacement Strategy Based on Cache Profit Model for Space-Ground Integrated Network

2021, Vol 2021, pp. 1-13
Author(s): Li Yang, Cheng Chi, Chengsheng Pan, Yaowen Qi

Compared with the stable state of ground networks, space-ground integrated networks (SGIN) have limited resources, high transmission delay, and vulnerable topology, which make traditional caching strategies unable to adapt to the complex space network environment. An intelligent and efficient caching strategy is therefore required to improve the edge service capabilities of satellites. We investigate these problems in this paper and make the following contributions. First, we propose a content value evaluation model based on a classification and regression tree, which considers multidimensional content characteristics, to answer "what to cache" by quantifying the cache value of content. Second, we propose a cache decision strategy based on a node caching cost model to answer "where to cache." This strategy modifies a genetic algorithm to solve the resulting 0-1 knapsack problem under an SDN architecture, which greatly improves the cache hit rate and the network service quality. Finally, we propose a cache replacement strategy that answers "when to replace" by establishing an effective service time model for the satellite-ground transmission link. Numerical results demonstrate that the proposed strategy improves the nodes' cache hit rate in SGIN and reduces the network transmission delay and transmission hops.
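The "where to cache" step above casts cache placement as a 0-1 knapsack problem solved with a modified genetic algorithm. The abstract does not specify the authors' modifications, so the following is only a generic sketch of a genetic algorithm for that knapsack formulation; the function name, operators, and parameters are illustrative assumptions, not the paper's method:

```python
import random

def ga_knapsack(values, sizes, capacity, pop_size=40, generations=200, seed=0):
    """Evolve a 0-1 selection vector that maximizes cached content value
    subject to a node capacity constraint (illustrative sketch only)."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(chrom):
        # Infeasible selections (over capacity) score zero.
        total_size = sum(s for s, bit in zip(sizes, chrom) if bit)
        if total_size > capacity:
            return 0
        return sum(v for v, bit in zip(values, chrom) if bit)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)               # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, fitness(best)
```

Here `values` would come from the paper's content value evaluation model and `sizes`/`capacity` from the node caching cost model; the GA then selects the subset of contents each satellite node should cache.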

Author(s): R. Li, X. Wang, X. Shi

The cache replacement strategy is the core of a distributed high-speed caching system and directly affects the cache hit rate and the utilization of limited cache space. Many reports show that access patterns for geospatial data exhibit temporal and spatial local changes, with popular hot spots that shift over time. The key issue for a geospatial cache replacement strategy is therefore to combine temporal and spatial local changes in access patterns and to balance the relationship between them, so that the strategy fits the distribution and evolution of the hot spots. This paper proposes a cache replacement strategy for access patterns that exhibit spatiotemporal locality. First, the strategy builds a method to express the access frequency and the interval between geospatial data accesses, based on a least-recently-used (LRU) replacement algorithm and its data structure. Second, considering both the spatial correlation between geospatial data accesses and the caching location of the data, it builds access sequences on an LRU stack that reflect spatiotemporal locality changes in the access pattern. Finally, to balance temporal and spatial locality changes in access patterns, the strategy chooses replacement victims based on the length of the access sequences and the cost of caching resource consumption. Experimental results show that the proposed strategy improves the cache hit rate while achieving good response performance and higher system throughput, so it can handle the intensity of networked GIS data access requests in a cloud-based environment.
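The abstract describes an LRU stack augmented with access frequency and a caching-cost term for victim selection, but gives no concrete data structure. The sketch below is our own minimal illustration of that idea, not the paper's algorithm: an LRU-ordered map that tracks per-tile access counts and cost, and evicts the lowest frequency-to-cost entry among the least-recently-used tail. The class name, `tail_window` parameter, and scoring formula are all assumptions:

```python
from collections import OrderedDict

class SpatioTemporalCache:
    """LRU stack augmented with access counts and a per-item caching cost.

    Eviction scans the least-recently-used tail of the stack and removes
    the entry with the lowest frequency-to-cost score, roughly weighing
    temporal locality (recency, frequency) against resource consumption.
    """

    def __init__(self, capacity, tail_window=3):
        self.capacity = capacity
        self.tail_window = tail_window
        self.stack = OrderedDict()   # key -> (value, access_count, cost)

    def get(self, key):
        if key not in self.stack:
            return None
        value, count, cost = self.stack.pop(key)
        self.stack[key] = (value, count + 1, cost)  # move to MRU end
        return value

    def put(self, key, value, cost=1.0):
        if key in self.stack:
            _, count, _ = self.stack.pop(key)
            self.stack[key] = (value, count + 1, cost)
            return
        if len(self.stack) >= self.capacity:
            self._evict()
        self.stack[key] = (value, 1, cost)

    def _evict(self):
        # Candidates: the least-recently-used tail of the stack.
        tail = list(self.stack.items())[: self.tail_window]
        victim = min(tail, key=lambda kv: kv[1][1] / kv[1][2])
        del self.stack[victim[0]]
```

In a geospatial setting the keys would be tile identifiers and `cost` could reflect the expense of re-fetching or re-rendering a tile; the paper's actual strategy additionally derives spatial correlation from access sequences, which this sketch omits.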


2021, Vol 17 (2), pp. 1-45
Author(s): Cheng Pan, Xiaolin Wang, Yingwei Luo, Zhenlin Wang

Due to the large data volume and low latency requirements of modern web services, an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalties distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance an existing cache model, the Average Eviction Time model, so that it can model a KV cache. We then apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. We also explore the diurnal behavior of a KV store and exploit long-term reuse, replacing the original passive eviction mechanism with an automatic dump/load mechanism to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
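The core idea above is that a KV cache should weigh *how expensive* a miss is, not just how recently a key was used. The toy cache below is only our illustration of that principle, not pRedis's actual algorithm (which is built on the Average Eviction Time model): each entry carries a miss penalty, and the eviction score combines reuse frequency, penalty, and age, so low-penalty, rarely reused keys are evicted first. All names and the scoring formula are assumptions:

```python
class PenaltyAwareCache:
    """Toy KV cache whose eviction score combines reuse and miss penalty.

    score = access_count * miss_penalty / age -- entries that are cheap
    to re-fetch and see little reuse are evicted first, so the misses the
    cache does take tend to be inexpensive ones.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}   # key -> [value, access_count, penalty, insert_time]
        self.clock = 0   # logical clock; avoids wall-time flakiness

    def get(self, key):
        self.clock += 1
        entry = self.data.get(key)
        if entry is None:
            return None
        entry[1] += 1            # bump access count
        return entry[0]

    def put(self, key, value, miss_penalty=1.0):
        self.clock += 1
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict()
        # Re-inserting a key resets its statistics (acceptable for a sketch).
        self.data[key] = [value, 1, miss_penalty, self.clock]

    def _evict(self):
        def score(item):
            _, (_, count, penalty, t0) = item
            age = max(self.clock - t0, 1)
            return count * penalty / age
        victim = min(self.data.items(), key=score)[0]
        del self.data[victim]
```

With equal reuse, the entry with the smaller `miss_penalty` (e.g., one backed by a fast local store rather than a remote database) is the eviction victim, which is the behavior the article argues recency-only policies miss.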


2020, Vol 23 (4), pp. 3309-3333
Author(s): Fatma M. Talaat, Shereen H. Ali, Ahmed I. Saleh, Hesham A. Ali
