A Cache Replacement Method for Crowded Streaming Cache Servers Responding to Rapidly Changing Access Patterns

Author(s):  
T. Osuga ◽  
T. Asakura ◽  
K. Taniguchi

Author(s):  
R. Li ◽  
X. Wang ◽  
X. Shi

The cache replacement strategy is the core of a distributed high-speed caching system, and it directly affects the cache hit rate and the utilization of limited cache space. Many reports show that access patterns for geospatial data exhibit temporal and spatial locality, and that popular hot spots change over time. The key issue for a geospatial cache replacement strategy is therefore to find a combined method that considers both the temporal and spatial locality of access patterns, balances the relationship between the two, and fits the distribution and evolution of hot spots. This paper proposes a cache replacement strategy based on access patterns with spatiotemporal locality. First, the strategy builds a method to express the access frequency and the access time interval for geospatial data based on the least-recently-used (LRU) replacement algorithm and its data structure. Second, considering both the spatial correlation among geospatial data accesses and the caching location of the data, it builds access sequences on top of an LRU stack that reflect spatiotemporal locality changes in the access pattern. Finally, to balance temporal and spatial locality changes in access patterns, the strategy chooses replacement victims based on the length of the access sequences and the cost of the caching resources they consume. Experimental results show that the proposed strategy improves the cache hit rate while achieving good response performance and higher system throughput, so it can handle the intensive data access requests of networked GISs in a cloud-based environment.
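As a rough illustration of how such a strategy might be organised, the sketch below keeps an LRU stack and scores each cached tile by its recency, access frequency, the length of the spatial access sequence it belongs to, and its caching cost. The class name, the scoring formula, and the `cost`/`seq_len` parameters are illustrative assumptions; the abstract does not give the paper's exact formula.

```python
from collections import OrderedDict


class SpatioTemporalLRUCache:
    """Minimal sketch of an LRU-stack based replacement policy whose eviction
    decision also weighs the length of the spatial access sequence a tile
    belongs to and its caching cost (illustrative, not the paper's method)."""

    def __init__(self, capacity):
        self.capacity = capacity
        # tile_id -> dict(count, cost, seq_len); insertion order acts as the LRU stack
        self.stack = OrderedDict()

    def access(self, tile_id, cost=1.0, seq_len=1):
        """Record an access; returns True on a cache hit."""
        hit = tile_id in self.stack
        if hit:
            entry = self.stack.pop(tile_id)
            entry["count"] += 1
        else:
            if len(self.stack) >= self.capacity:
                self._evict()
            entry = {"count": 1}
        entry["cost"] = cost
        entry["seq_len"] = seq_len
        self.stack[tile_id] = entry          # (re)insert at the top of the LRU stack
        return hit

    def _evict(self):
        # Score each cached tile: recently used tiles, frequently accessed tiles,
        # and tiles in long spatial access sequences score higher; tiles that
        # consume more caching resources score lower. Evict the lowest score.
        def score(indexed_item):
            depth, (_, e) = indexed_item
            recency = depth + 1               # 1 = least recently used
            return recency * e["count"] * e["seq_len"] / e["cost"]

        victim = min(enumerate(self.stack.items()), key=score)[1][0]
        del self.stack[victim]
```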


2015 ◽  
Vol 18 (4) ◽  
pp. 171-182 ◽  
Author(s):  
Rui Li ◽  
Jiapei Fan ◽  
Xinxing Wang ◽  
Zhen Zhou ◽  
Huayi Wu

2014 ◽  
Vol 1049-1050 ◽  
pp. 1824-1827
Author(s):  
Zhuo Tian ◽  
Bai Cheng Li

At present, most streaming media files are large and consume a great deal of network bandwidth and disk bandwidth. We propose an adaptive segment-based caching method, a cache replacement method, and an optimized transmission policy that combines multiple techniques. Simulation results indicate that these methods make efficient use of caching proxy server resources, reduce startup latency, and save backbone network bandwidth.
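The abstract does not spell out the segment-selection rule, but a minimal sketch of one plausible adaptive segment-based policy is shown below: always cache a short prefix to cut startup latency, then cache additional segments in proportion to the file's observed popularity. The function name, the `prefix_segments` and `max_fraction` parameters, and the assumption that popularity is normalised to [0, 1] are all illustrative.

```python
def segments_to_cache(total_segments, popularity, prefix_segments=2, max_fraction=0.5):
    """Illustrative adaptive segment-based caching rule (not the paper's exact
    policy): keep a fixed prefix for fast startup, plus extra segments scaled
    by popularity, capped at a fraction of the whole file."""
    extra = int(popularity * max_fraction * total_segments)
    return min(total_segments, prefix_segments + extra)


# Example: a 100-segment video with popularity 0.3 keeps 2 + 15 = 17 segments.
print(segments_to_cache(100, 0.3))
```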


2005 ◽  
Vol DMTCS Proceedings vol. AD,... (Proceedings) ◽  
Author(s):  
Predrag R. Jelenković ◽  
Xiaozhu Kang ◽  
Ana Radovanović

Renewed interest in caching techniques stems from their application to improving the performance of the World Wide Web, where storing popular documents in proxy caches closer to end-users can significantly reduce document download latency and overall network congestion. Rules used to update the collection of frequently accessed documents inside a cache are referred to as cache replacement algorithms. Due to the many different factors that influence Web performance, the most desirable attributes of a cache replacement scheme are low complexity and high adaptability to variability in Web access patterns. These properties are primarily the reason why most practical Web caching algorithms are based on the easily implemented Least-Recently-Used (LRU) cache replacement heuristic. In our recent paper, Jelenković and Radovanović (2004c), we introduce a new algorithm, termed Persistent Access Caching (PAC), that, in addition to desirable low complexity and adaptability, somewhat surprisingly achieves nearly optimal performance for the independent reference model and generalized Zipf's law request probabilities. Two drawbacks of the PAC algorithm are its dependence on the request arrival times and variable storage requirements. In this paper, we resolve these problems by introducing a discrete version of the PAC policy (DPAC) that, after a cache miss, places the requested document in the cache only if it is requested at least $k$ times among the last $m$, $m \geq k$, requests. However, from a mathematical perspective, due to the inherent coupling of the replacement decisions for different documents, the DPAC algorithm is considerably harder to analyze than the original PAC policy. In this regard, we develop a new analytical technique for estimating the performance of the DPAC rule. Using our analysis, we show that this algorithm is close to optimal even for small values of $k$ and $m$, and, therefore, adds negligible additional storage and processing complexity in comparison to the ordinary LRU policy.
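A minimal sketch of the DPAC admission rule described above: on a miss, the document is admitted (with ordinary LRU eviction) only if it appears at least k times among the last m requests. The sliding-window bookkeeping and data-structure choices here are illustrative implementation assumptions, not taken from the paper.

```python
from collections import Counter, OrderedDict, deque


class DPACCache:
    """Sketch of the DPAC policy: LRU eviction plus a k-out-of-m admission test
    on cache misses (illustrative implementation)."""

    def __init__(self, capacity, k, m):
        assert m >= k
        self.capacity, self.k, self.m = capacity, k, m
        self.cache = OrderedDict()          # doc_id -> None; order tracks recency
        self.history = deque(maxlen=m)      # sliding window of the last m requests
        self.counts = Counter()             # per-document counts within the window

    def request(self, doc_id):
        """Process one request; returns True on a cache hit."""
        # Maintain the sliding window of the last m requests.
        if len(self.history) == self.m:
            self.counts[self.history[0]] -= 1   # oldest request falls out of the window
        self.history.append(doc_id)
        self.counts[doc_id] += 1

        if doc_id in self.cache:
            self.cache.move_to_end(doc_id)      # standard LRU update on a hit
            return True

        # DPAC admission test: insert only if requested >= k times in the window.
        if self.counts[doc_id] >= self.k:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used
            self.cache[doc_id] = None
        return False
```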


Author(s):  
Pratheeksha P ◽  
Revathi S A

Despite extensive work on improving cache hit rates, designing an optimal cache replacement policy that mimics Belady's algorithm remains a challenging task. Existing static replacement policies do not adapt to the dynamic nature of memory access patterns, and the diversity of computer programs only exacerbates the problem. Several factors affect the design of a replacement policy, such as hardware upgrades, memory overheads, memory access patterns, and model latency. Combining a fundamental mechanism like cache replacement with advanced machine learning algorithms yields surprising results and drives development toward cost-effective solutions. In this paper, we review some of the machine-learning-based cache replacement policies that outperform the static heuristics.
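For reference, the sketch below implements Belady's offline MIN policy, the oracle that learned replacement policies try to approximate: on a miss with a full cache, evict the block whose next use lies furthest in the future (or that is never used again). The function name and the toy trace are illustrative.

```python
def belady_hits(trace, capacity):
    """Count cache hits under Belady's offline MIN policy for a given access
    trace and cache capacity (illustrative sketch)."""
    cache, hits = set(), 0
    for i, block in enumerate(trace):
        if block in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            # Evict the cached block whose next use is furthest in the future.
            def next_use(b):
                try:
                    return trace.index(b, i + 1)
                except ValueError:
                    return float("inf")      # never used again: ideal victim
            cache.remove(max(cache, key=next_use))
        cache.add(block)
    return hits


# Example: a small trace with a 2-entry cache yields 2 hits.
print(belady_hits(["a", "b", "a", "c", "b", "a"], 2))
```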


2015 ◽  
Vol 10 (6) ◽  
pp. 620 ◽  
Author(s):  
Prapai Sridama ◽  
Somchai Prakancharoen ◽  
Nalinpat Porrawatpreyakorn