Intelligent data cache based on content popularity and user location for Content Centric Networks

Author(s):  
Hsin-Te Wu ◽  
Hsin-Hung Cho ◽  
Sheng-Jie Wang ◽  
Fan-Hsun Tseng

Abstract: Content caching, as well as data caching, is vital to Content Centric Networking (CCN). A sophisticated cache scheme is necessary but remains lacking. Existing content cache schemes waste router cache capacity because of redundant replica data in CCN routers. This paper presents an intelligent data cache scheme, namely the content popularity and user location (CPUL) scheme, which tackles the caching problem of CCN routers in pursuit of a higher hit rate and better storage utilization. The proposed CPUL scheme not only considers the location from which a user sends a request but also classifies data into popular and normal content, each governed by a different cache policy. Simulation results show that the CPUL scheme yields the highest cache hit rate and the lowest total size of cached data compared with the original CCN cache scheme and the Most Popular Content (MPC) scheme. The CPUL scheme outperforms both compared schemes, with roughly 8% to 13% higher hit rate and roughly 4% to 16% smaller cache size. In addition, the CPUL scheme achieves more than 20% and 10% higher cache utilization as the released cache size increases and as the number of categories of requested data increases, respectively.
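
To make the scheme's core idea concrete, below is a minimal Python sketch of a CPUL-style caching decision. It is an illustration only, assuming a request-count popularity threshold and a rule that normal content is cached only at the router closest to the requesting user; the names (CPULRouter, POPULARITY_THRESHOLD, should_cache) are hypothetical and not taken from the paper.

```python
# Minimal sketch of a CPUL-style caching decision (assumed structure; the
# paper's exact thresholds, data structures, and policies are not given here).

from collections import defaultdict

POPULARITY_THRESHOLD = 10   # assumed: requests needed to mark content "popular"

class CPULRouter:
    def __init__(self, router_id, capacity):
        self.router_id = router_id
        self.capacity = capacity
        self.store = {}                      # content_name -> data
        self.request_count = defaultdict(int)

    def is_popular(self, name):
        return self.request_count[name] >= POPULARITY_THRESHOLD

    def on_request(self, name):
        self.request_count[name] += 1

    def should_cache(self, name, is_edge_for_requester):
        """Popular content is cached along the path; normal content is only
        cached at the router closest to the requesting user's location,
        which limits redundant replicas (assumed interpretation)."""
        if len(self.store) >= self.capacity:
            return False
        if self.is_popular(name):
            return True
        return is_edge_for_requester
```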

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Jiequ Ji ◽  
Kun Zhu ◽  
Ran Wang ◽  
Bing Chen ◽  
Chen Dai

Caching popular contents at base stations (BSs) has been regarded as an effective approach to alleviate the backhaul load and improve the quality of service. To meet the explosive data traffic demand and to save energy, energy efficiency (EE) has become an extremely important performance index for fifth-generation (5G) cellular networks. In general, there are two ways to improve the EE of caching: improving the cache-hit rate and optimizing the cache size. In this work, we investigate the energy-efficient caching problem in backhaul-aware cellular networks by jointly considering these two approaches. Note that most existing works assume that the content catalog and popularity are static; in practice, however, content popularity is dynamic. To estimate the dynamic content popularity in a timely manner, we propose a method based on the shot noise model (SNM). We then propose a distributed caching policy to improve the cache-hit rate in such a dynamic environment. Furthermore, we analyze the tradeoff between energy efficiency and cache capacity, for which an optimization problem is formulated. We prove its convexity and derive a closed-form optimal cache capacity that maximizes the EE. Simulation results validate the proposed scheme and show that the EE can be improved with an appropriate choice of cache capacity.
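
As an illustration of shot-noise-model-based popularity estimation, the following Python sketch treats each past request as a decaying "shot" and ranks contents by their estimated current request intensity. The exponential decay shape, the lifetime parameter, and the function name snm_popularity are assumptions for demonstration, not the paper's exact formulation.

```python
# Illustrative sketch of estimating dynamic content popularity with a
# shot-noise-like model: each past request contributes an exponentially
# decaying "shot" to the current popularity estimate.

import math

def snm_popularity(request_times, now, lifetime=3600.0):
    """Estimated request intensity at time `now` (seconds), treating each
    request at time t as a shot that decays as exp(-(now - t)/lifetime)."""
    return sum(math.exp(-(now - t) / lifetime)
               for t in request_times if t <= now)

# Example: two contents with different request histories.
history = {"videoA": [10, 50, 90, 95], "videoB": [5]}
now = 100.0
ranking = sorted(history, key=lambda c: snm_popularity(history[c], now),
                 reverse=True)
print(ranking)   # contents ordered by estimated current popularity
```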


2021 ◽  
Vol 44 ◽  
pp. 101238
Author(s):  
Yancheng Ji ◽  
Xiao Zhang ◽  
Wenfei Liu ◽  
Guoan Zhang

Author(s):  
Yul Chu ◽  
Marven Calagos

This paper proposes a buffered dual-access-mode cache to reduce power consumption for highly associative caches in modern embedded systems. The proposed scheme consists of an MRU (most recently used) buffer table and a single cache structure that implements two access modes: phased mode and way-prediction mode. The proposed scheme achieves better access time and lower power consumption than two popular low-power caches, the phased cache and the way-prediction cache. The authors used the Cacti and SimpleScalar simulators to evaluate the proposed cache scheme with the SPEC benchmark programs. The experimental results show that the proposed cache scheme improves the EDP (energy-delay product) by up to 40% for the instruction cache and up to 42% for the data cache compared with the way-prediction cache, which itself performs better than the phased cache.
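
One plausible reading of how an MRU buffer table could steer accesses between the two modes is sketched below in Python: probe only the MRU-predicted way first (way-prediction mode) and fall back to a tags-first phased lookup on a misprediction. This is a conceptual sketch, not the authors' hardware design; the DualModeCache class and its fields are hypothetical.

```python
# Conceptual sketch (not the authors' implementation) of a dual-access-mode
# lookup guided by an MRU buffer table.

class DualModeCache:
    def __init__(self, num_sets, ways):
        self.ways = ways
        self.tags = [[None] * ways for _ in range(num_sets)]
        self.mru = [None] * num_sets          # MRU buffer table: set -> last-hit way

    def access(self, set_idx, tag):
        predicted = self.mru[set_idx]
        if predicted is not None and self.tags[set_idx][predicted] == tag:
            return "hit", "way-prediction"    # one way probed: lowest energy/latency
        # Phased fallback: compare all tags first, then read only the hit way.
        for way, t in enumerate(self.tags[set_idx]):
            if t == tag:
                self.mru[set_idx] = way
                return "hit", "phased"
        return "miss", "phased"
```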


2013 ◽  
Vol 433-435 ◽  
pp. 1702-1708
Author(s):  
Guo Yin Zhang ◽  
Bin Tang ◽  
Xiang Hui Wang ◽  
Yan Xia Wu

In-network caching is one of the key aspects of content-centric networking (CCN), yet the LRU cache replacement algorithm does not consider the relation between a node's cached contents and those of its neighbor nodes during replacement, which reduces cache efficiency. In this paper, a Neighbor-Referencing Cooperative Cache Policy (NRCCP) for CCN is proposed that checks whether neighbor nodes have already cached a content. A node caches the content only when none of its neighbors has cached it, thereby reducing the redundancy of cached content and increasing content diversity. Simulation results show that NRCCP performs better, as the network path provides more caching capacity and popular content is distributed more densely.
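
The caching rule stated in the abstract can be sketched directly; the short Python example below caches a content at a node only when no neighbor already holds it. The CCNNode class and its members are illustrative names, not the paper's implementation.

```python
# Minimal sketch of the NRCCP caching decision as described: a node caches a
# content only if none of its neighbors already holds it, reducing redundant
# replicas along the path.

class CCNNode:
    def __init__(self, name):
        self.name = name
        self.neighbors = []      # list of CCNNode
        self.content_store = set()

    def neighbor_has(self, content):
        return any(content in n.content_store for n in self.neighbors)

    def on_data_arrival(self, content):
        # NRCCP rule: cache only when no neighbor has cached the content.
        if not self.neighbor_has(content):
            self.content_store.add(content)

# Example: two adjacent routers forwarding the same data packet.
a, b = CCNNode("A"), CCNNode("B")
a.neighbors.append(b); b.neighbors.append(a)
a.on_data_arrival("/video/1")    # A caches it
b.on_data_arrival("/video/1")    # B skips it because neighbor A already has it
```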


Author(s):  
Shiwei Lai ◽  
Rui Zhao ◽  
Yulin Wang ◽  
Fusheng Zhu ◽  
Junjuan Xia

Abstract: In this paper, we study the cache prediction problem for mobile edge networks in which there are one base station (BS) and multiple relays. For the considered mobile edge computing (MEC) network, we propose a cache prediction framework that solves the content prediction and caching problem using neural networks and relay selection, by exploiting users' historical request data and the channels between the relays and the users. The framework is trained to learn users' preferences from their historical request data, and several caching policies are proposed based on the channel conditions. The cache hit rate and the latency are used to measure the performance of the proposed framework. Simulation results demonstrate its effectiveness, showing that it maximizes the cache hit rate while minimizing the latency for the considered MEC networks.
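
The prediction idea can be illustrated with a toy model: learn users' request patterns from history and cache the top-ranked contents. The Python sketch below uses a single-layer softmax model trained on consecutive request pairs as a stand-in for the paper's neural network; it omits relay selection, and all names and parameters are assumptions.

```python
# Toy sketch of neural cache prediction: learn which content tends to follow
# which from the request history, then cache the most likely next items.

import numpy as np

def train_next_item_model(history, num_items, epochs=200, lr=0.5):
    """history: list of content indices requested in order."""
    W = np.zeros((num_items, num_items))           # logits[prev, next]
    pairs = list(zip(history[:-1], history[1:]))
    for _ in range(epochs):
        for prev, nxt in pairs:
            logits = W[prev]
            p = np.exp(logits - logits.max())
            p /= p.sum()
            grad = p.copy()
            grad[nxt] -= 1.0                       # softmax cross-entropy gradient
            W[prev] -= lr * grad
    return W

history = [0, 1, 0, 1, 2, 0, 1, 0, 1]              # synthetic request trace
W = train_next_item_model(history, num_items=3)
last = history[-1]
cache_size = 2
to_cache = np.argsort(-W[last])[:cache_size]       # cache the most likely next items
print(to_cache)
```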


1994 ◽  
Vol 38 (5) ◽  
pp. 503-524
Author(s):  
D. J. Shippy ◽  
T. W. Griffith

2021 ◽  
Vol 8 (1) ◽  
pp. 69
Author(s):  
Tanwir Tanwir ◽  
Parma Hadi Rantelinggi ◽  
Sri Widiastuti

Abstract: A replacement algorithm is a mechanism that replaces old objects in a cache with new ones, deleting objects so as to reduce bandwidth usage and server load. Deletion occurs when the cache is full and space for new entries is needed. The FIFO, LRU, and LFU algorithms are commonly used for object replacement, but a frequently used object may still be evicted during cache replacement even though it is still needed; as a result, the client's subsequent requests take a long time to retrieve the object. To overcome this problem, a combined cache replacement algorithm, the Multi-Rule Algorithm, is applied in the form of a double FIFO-LRU combination and a triple FIFO-LRU-LFU combination. With a cache size of 200 MB, the Mural (Multi-Rule Algorithm) yields average response times of 56.33 ms and 42 ms, respectively, whereas a single algorithm requires an average response time of 77 ms. The Multi-Rule Algorithm thus improves delay, throughput, and hit rate, and the Mural cache replacement algorithm is highly recommended for improving client access.
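
One way to combine the three rules into a single replacement decision is sketched below in Python: the victim is chosen by least frequency (LFU), with ties broken by least recent use (LRU) and then earliest insertion (FIFO). This ordering is an assumption for illustration; the Mural algorithm's exact combination is not specified in the abstract.

```python
# Illustrative sketch of combining FIFO, LRU, and LFU signals into one
# replacement decision, in the spirit of a multi-rule policy.

import itertools

class MultiRuleCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                       # key -> (freq, last_used, inserted)
        self.clock = itertools.count()

    def get(self, key):
        if key not in self.entries:
            return None
        freq, _, inserted = self.entries[key]
        self.entries[key] = (freq + 1, next(self.clock), inserted)
        return key

    def put(self, key):
        if key in self.entries:
            self.get(key)
            return
        if len(self.entries) >= self.capacity:
            # Evict by LFU first, LRU second, FIFO third (tuple comparison).
            victim = min(self.entries, key=lambda k: self.entries[k])
            del self.entries[victim]
        now = next(self.clock)
        self.entries[key] = (1, now, now)
```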

