cooperative cache
Recently Published Documents


TOTAL DOCUMENTS: 59 (last five years: 11)
H-INDEX: 10 (last five years: 1)

2021 ◽ Vol 2021 ◽ pp. 1-13
Author(s): Dapeng Man, Yao Wang, Hanbo Wang, Jiafei Guo, Jiguang Lv, ...

Information-Centric Networking (ICN) with in-network caching is a promising future network architecture. Research on its cache deployment strategies falls into three categories: noncooperative caching, explicit cooperative caching, and implicit cooperative caching. Noncooperative caching can cause problems such as a high content repetition rate across the network's cache space. Explicit cooperative caching generally achieves the best caching performance, but it requires substantial communication to exchange information between cache nodes and depends on a controller to perform the computation. Implicit cooperative caching, by contrast, reduces the information exchange and computation between cache nodes while maintaining good caching performance. This paper therefore proposes an on-path implicit cooperative cache deployment method based on a dynamic LRU-K cache replacement strategy. The method evaluates cache nodes according to their network location and state, and selects the node with the best state value on the transmission path for caching. Each request selects only one or two nodes on the request path for caching, which reduces data redundancy. Simulation experiments show that this cache deployment method, based on the state and location of cache nodes, improves the hit rate and reduces the average request path length.
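The abstract names LRU-K as the underlying replacement policy but does not give the "dynamic" details or the node state-value formula. A minimal sketch of plain LRU-K, assuming a logical clock in place of real timestamps (class and attribute names are illustrative):

```python
class LRUKCache:
    """Minimal LRU-K replacement sketch: evict the entry whose K-th most
    recent access is oldest; entries with fewer than K accesses go first."""

    def __init__(self, capacity, k=2):
        self.capacity = capacity
        self.k = k
        self.store = {}    # key -> cached content
        self.history = {}  # key -> access times, most recent first
        self.clock = 0     # logical clock standing in for real timestamps

    def _touch(self, key):
        self.clock += 1
        hist = self.history.setdefault(key, [])
        hist.insert(0, self.clock)
        del hist[self.k:]  # keep only the K most recent accesses

    def get(self, key):
        if key not in self.store:
            return None    # cache miss
        self._touch(key)
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Victim = smallest K-th-most-recent access time; entries with
            # fewer than K recorded accesses count as -inf (evicted first).
            victim = min(
                self.store,
                key=lambda x: (self.history[x][-1]
                               if len(self.history[x]) >= self.k
                               else float("-inf")),
            )
            del self.store[victim], self.history[victim]
        self.store[key] = value
        self._touch(key)
```

Using the K-th most recent access (rather than the most recent, as in plain LRU) makes the cache resistant to one-off scans, which is why LRU-K is a common base for ICN cache replacement.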


2021 ◽ Vol 2021 ◽ pp. 1-18
Author(s): Lincan Li, Chiew Foong Kwong, Qianyu Liu, Pushpendu Kar, Saeid Pourroostaei Ardakani

Mobile edge caching is an emerging approach to managing high mobile data traffic in fifth-generation wireless networks: it reduces content access latency and offloads data traffic from backhaul links. This paper proposes a novel cooperative caching policy based on long short-term memory (LSTM) neural networks that considers both the characteristics of the heterogeneous network layers and user moving speed. Specifically, LSTM is applied to predict content popularity, and size-weighted content popularity is used to balance the predicted popularity of a content item against its size. We also account for the moving speeds of mobile users and introduce a two-level caching architecture consisting of several small base stations (SBSs) and macro base stations (MBSs). Because fast-moving users hand over frequently among SBSs, their requests would distort the content popularity distribution at each SBS; such users are therefore always served by MBSs, regardless of which SBS coverage area they are in. SBSs serve low-speed users, and SBSs in the same cluster can communicate with one another. Simulation results show that, compared to common cache methods such as least frequently used and least recently used, the proposed policy achieves at least 8.9% lower average content access latency and at least a 6.8% higher offloading ratio.
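The abstract does not state the size-weighting formula. A plausible sketch, assuming popularity is simply discounted by content size before a greedy capacity-constrained fill (the function names and the popularity/size^alpha form are assumptions, not the paper's method):

```python
def size_weighted_popularity(predicted_popularity, sizes, alpha=1.0):
    """Score each content by predicted popularity discounted by its size.
    The form popularity / size**alpha is an illustrative choice; alpha
    tunes how strongly large files are penalised."""
    return {c: p / (sizes[c] ** alpha)
            for c, p in predicted_popularity.items()}


def greedy_cache_fill(scores, sizes, capacity):
    """Greedily place the highest-scoring contents that still fit,
    mimicking how an SBS might fill its cache each update period."""
    chosen, used = [], 0
    for c in sorted(scores, key=scores.get, reverse=True):
        if used + sizes[c] <= capacity:
            chosen.append(c)
            used += sizes[c]
    return chosen
```

For example, a very popular but very large file can lose its cache slot to two smaller files whose combined size-weighted scores are higher, which is exactly the trade-off the size weighting is meant to capture.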


2021 ◽ Vol 2021 ◽ pp. 1-19
Author(s): Haizhou Bao, Yiming Huo, Chuanhe Huang, Xiaodai Dong, Wanyu Qiu

Cellular vehicle-to-everything- (C-V2X-) based communications can support various content-oriented applications and have made significant progress in recent years. However, limited backhaul bandwidth and dynamic topology make it difficult to provide multimedia services with high-reliability, low-latency communication in C-V2X networks, which may degrade the quality of experience (QoE). In this paper, we propose a novel cluster-based cooperative cache deployment and coded delivery strategy for C-V2X networks to improve the cache hit ratio and response time, reduce the request-response delay, and improve bandwidth efficiency. We first design an effective vehicle clustering method. Based on the constructed clusters, we propose a two-level cooperative cache deployment approach that caches frequently requested files on the edge nodes, the LTE evolved NodeB (eNodeB) and the cluster head (CH), to maximize the overall cache hit ratio. Furthermore, we propose an effective coded delivery strategy to minimize the network load and the ratio of redundant files. Simulation results demonstrate that the proposed method effectively reduces the average response delay, the network load, and the ratio of redundant files while improving the hit ratio.
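The coded-delivery idea can be illustrated with the classic two-user XOR example (an assumption for illustration; the paper's strategy is more general): when each vehicle already caches the file the other requests, a single coded broadcast replaces two unicasts.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


# Hypothetical scenario: vehicle 1 caches file B and requests file A;
# vehicle 2 caches file A and requests file B (equal file sizes assumed).
file_a = b"segment-A"
file_b = b"segment-B"

# Instead of two unicasts, the eNodeB broadcasts one coded packet.
coded = xor_bytes(file_a, file_b)

# Each vehicle XORs out its cached file to recover its request.
recovered_a = xor_bytes(coded, file_b)  # vehicle 1 obtains file A
recovered_b = xor_bytes(coded, file_a)  # vehicle 2 obtains file B
```

Halving the number of transmissions in this toy case is the same mechanism by which the coded delivery phase reduces network load and redundant file transfers at scale.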


2020 ◽ Vol 9 (12) ◽ pp. 2112-2115
Author(s): Dongsheng Zheng, Yingyang Chen, Mingxi Yin, Bingli Jiao

2019 ◽ Vol 13 (17) ◽ pp. 2786-2796
Author(s): Yue Li, Ye Wang, Peng Yuan, Qinyu Zhang, Zhihua Yang

2019 ◽ Vol 8 (3) ◽ pp. 7432-7439

The availability of content on the web is increasing exponentially, and user demand for content is growing just as rapidly. Making the right content available to the right user at the right time will remain a crucial problem. Given the variety of content and of users involved, there is no single way to match availability with need and deliver content instantly, especially in a resource-limited mobile environment. This paper therefore proposes a hybrid method that combines caching, pre-fetching, and cache sharing with noise reduction to improve overall mobile performance: optimal cache memory utilisation, efficient bandwidth utilisation, reduced network traffic, and reduced latency. The efficiency of mobile caching and pre-fetching is improved using an Enhanced Bloom Filter technique, and data is shared among cooperative users by establishing a voluntary hub. Unwanted content in a web page is treated as noise and is removed before the page is stored in the cache or pre-fetch area. The success of the proposed method depends largely on the hit ratio of content rendered locally rather than fetched from the server. To reduce server hits, the contents of the cache and pre-fetch areas are shared among mobile users. Whenever a user requires new content that is not in that user's browser cache or local cache, it can be rendered from the cache or pre-fetch area of collaborating mobile users instead of from the server. This hybrid cooperative cache sharing and pre-fetching improves the overall performance and hit ratio compared with existing methods.
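The "Enhanced" Bloom filter variant is not specified in the abstract, but the role a Bloom filter plays here, letting cooperating users exchange a compact summary of what they cache so a peer can be queried before hitting the server, can be sketched with the standard structure (sizes and hashing scheme are illustrative assumptions):

```python
import hashlib


class BloomFilter:
    """Standard Bloom filter: a compact, shareable summary of a peer's
    cache contents. Membership tests may give false positives but
    never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = bytearray(m // 8)  # bit array packed into bytes

    def _positions(self, item):
        # Derive k bit positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # True means "possibly cached by this peer, worth asking";
        # False means "definitely not cached, skip this peer".
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

A user would broadcast its filter to the voluntary hub; peers then test a URL against each filter and contact only those users whose filter says the content might be present, avoiding pointless requests and, on a hit, a server round trip.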

