local cache
Recently Published Documents

Total documents: 20 (last five years: 6)
H-index: 4 (last five years: 0)
2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Jiajie Ren ◽  
Demin Li ◽  
Lei Zhang ◽  
Guanglin Zhang

Content-centric networks (CCNs) have become a promising technology for relieving increasing wireless traffic demands. In this paper, we explore the scaling performance of mobile content-centric networks under a nonuniform spatial distribution of nodes, where each node moves around its own home point and requests content according to a Zipf distribution. We assume each mobile node is equipped with a finite local cache, which stores contents according to a static cache allocation scheme. Based on the nonuniform spatial distribution of cache-enabled nodes, we introduce two clustered models, the clustered grid model and the clustered random model. In each clustered model, we analyze throughput and delay as the number of nodes goes to infinity, using the proposed cell-partition scheduling scheme and a distributed multihop routing scheme. We show that the degree of node mobility and the clustering behavior play fundamental roles in this asymptotic performance. Finally, we study the optimal cache allocation problem in the two clustered models. Our findings provide guidance for developing optimal caching schemes. We further perform numerical simulations to validate the theoretical scaling laws.
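As a minimal illustration of the request model above, the sketch below draws content popularity from a Zipf distribution and applies one simple static cache allocation in which each node stores the most popular contents. The parameter names (`alpha`, `cache_size`) and the top-popularity rule are assumptions for illustration, not the paper's optimal allocation.

```python
def zipf_popularity(num_contents, alpha):
    """Zipf request probabilities: p_k proportional to k^(-alpha)."""
    weights = [k ** (-alpha) for k in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def static_cache_allocation(popularity, cache_size):
    """One simple static scheme: cache the cache_size most popular contents."""
    ranked = sorted(range(len(popularity)), key=lambda k: popularity[k], reverse=True)
    return set(ranked[:cache_size])

def local_hit_probability(popularity, cached):
    """Probability that a request is served from the local cache."""
    return sum(popularity[k] for k in cached)

popularity = zipf_popularity(100, alpha=0.8)
cache = static_cache_allocation(popularity, cache_size=10)
hit = local_hit_probability(popularity, cache)
```

Under Zipf popularity, a small cache holding the head of the distribution already serves a disproportionate share of requests, which is why the allocation scheme matters for the throughput and delay scaling.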


2020 ◽  
Vol 10 (24) ◽  
pp. 8846
Author(s):  
Jaehwan Lee ◽  
Hyeonseong Choi ◽  
Hyeonwoo Jeong ◽  
Baekhyeon Noh ◽  
Ji Sun Shin

In a distributed deep learning system, a parameter server and workers must communicate to exchange gradients and parameters, and the communication cost grows with the number of workers. This paper presents a communication data optimization scheme to mitigate the throughput loss caused by communication bottlenecks in distributed deep learning. We propose two methods. The first is a layer dropping scheme that reduces communication data by comparing a representative value of each hidden layer with a threshold. To preserve training accuracy, gradients that are not transmitted to the parameter server are accumulated in the worker's local cache; once the accumulated value exceeds the threshold, the cached gradients are transmitted to the parameter server. The second is an efficient threshold selection method that computes the threshold from the L1 norm of each hidden layer's gradients. Our data optimization scheme reduces communication time by about 81% and total training time by about 70% in a 56 Gbit/s network environment.
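A minimal sketch of the gradient-caching idea described above, assuming a per-layer L1-norm threshold and plain Python lists for gradients; the helper names and the "accumulate then flush" rule are illustrative, not the authors' exact implementation.

```python
def l1_norm(vec):
    """L1 norm of a flat gradient vector."""
    return sum(abs(x) for x in vec)

def l1_threshold(layer_grads, scale=0.5):
    """Illustrative threshold: a scaled mean of the per-layer L1 norms."""
    norms = [l1_norm(g) for g in layer_grads]
    return scale * sum(norms) / len(norms)

def select_layers(layer_grads, local_cache, threshold):
    """Accumulate each layer's gradient into the worker's local cache and
    transmit a layer only once its accumulated L1 norm exceeds the threshold."""
    to_send = {}
    for i, grad in enumerate(layer_grads):
        cached = local_cache.get(i, [0.0] * len(grad))
        cached = [c + g for c, g in zip(cached, grad)]
        if l1_norm(cached) > threshold:
            to_send[i] = cached                  # send accumulated gradient
            local_cache[i] = [0.0] * len(grad)   # reset after sending
        else:
            local_cache[i] = cached              # keep for a later round
    return to_send
```

Small-gradient layers stay in the local cache across iterations, so their updates are delayed rather than lost, which is how the scheme trades communication volume against accuracy.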


Author(s):  
Jixiang Zhang

In this paper, using a discrete-time model, we consider the average age over all files in a cached-files-updating system where a server generates N files and transmits them to a local cache. To keep the cached files fresh, in each time slot the server updates files with certain probabilities. The age of a file, or its age of information (AoI), is defined as the time elapsed since the file was last sent to the cache. We assume each file in the cache has a corresponding request popularity. In this paper, we obtain the distribution function of the popularity-weighted average age over all files, which gives a complete description of this average age. For the random age of a single file, both the mean and the distribution have been derived before by establishing a simple Markov chain. Using the same idea, we show that an N-dimensional stochastic process can be constructed to characterize the changes of all N file ages simultaneously. By solving the steady state of the resulting process, we obtain an explicit expression for the stationary probability of an arbitrary state vector. The distribution function of the popularity-weighted average age can then be derived by merging a proper set of stationary probabilities. As a possible application, the distribution function can be used to calculate the probability that the average age violates a statistical guarantee.
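The discrete-time dynamics described above can be simulated directly: the sketch below resets a file's age to zero when it is updated (with its per-file probability) and otherwise increments it, recording the popularity-weighted average age in each slot. Parameter names are illustrative; this is a Monte Carlo check, not the paper's analytical derivation.

```python
import random

def simulate_weighted_age(update_probs, popularity, slots, seed=0):
    """Simulate N file ages in discrete time. In each slot, file i is
    refreshed with probability update_probs[i], resetting its age to 0;
    otherwise its age grows by 1. Returns per-slot samples of the
    popularity-weighted average age."""
    rng = random.Random(seed)
    n = len(update_probs)
    ages = [0] * n
    samples = []
    for _ in range(slots):
        for i in range(n):
            if rng.random() < update_probs[i]:
                ages[i] = 0
            else:
                ages[i] += 1
        samples.append(sum(q * a for q, a in zip(popularity, ages)))
    return samples
```

Each age is geometrically distributed in steady state with mean (1 - p_i)/p_i, so the sampled weighted mean should approach the popularity-weighted combination of these values, and the empirical distribution of the samples approximates the distribution function the paper derives exactly.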


2020 ◽  
Vol 10 (18) ◽  
pp. 6145
Author(s):  
Fawad Ahmad ◽  
Ayaz Ahmad ◽  
Irshad Hussain ◽  
Peerapong Uthansakul ◽  
Suleman Khan

The limited caching capacity of a local-cache-enabled base station (BS) decreases the cache hit ratio (CHR) and user satisfaction ratio (USR). Cache-enabled multi-tier cellular networks have therefore been proposed as a promising candidate for fifth-generation networks, achieving higher CHR and USR through network densification. In addition, cooperation among the BSs of various tiers for cached data transfer further increases their significance. In this paper, we therefore consider maximization of CHR and USR in a multi-tier cellular network. We formulate a CHR and USR problem for multi-tier cellular networks with constraints on the caching space of the BSs of each tier. Unsupervised learning algorithms, namely K-means clustering and collaborative filtering, are used to cluster similar BSs in each tier and to estimate content popularity, respectively. A novel scheme, cluster-average-popularity-based collaborative filtering (CAP-CF), is employed to cache popular data and hence maximize the CHR in each tier. Similarly, two novel methods, intra-tier and cross-tier cooperation (ITCTC) and modified ITCTC, are employed to optimize the USR. Simulation results show that the proposed schemes yield significant performance gains in terms of average cache hit ratio and user satisfaction ratio compared with conventional approaches.
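The cluster-average-popularity idea can be sketched as follows: given per-BS content request counts and cluster labels (e.g. from K-means), average the counts within each cluster and let every BS cache the contents most popular in its cluster. This is a simplified reading of CAP-CF with illustrative names; the paper's collaborative-filtering estimation step is not reproduced here.

```python
def cluster_average_popularity(request_counts, labels, num_clusters):
    """Average per-BS request counts within each cluster.
    request_counts[b][f] = number of requests for content f seen at BS b."""
    num_contents = len(request_counts[0])
    sums = [[0.0] * num_contents for _ in range(num_clusters)]
    sizes = [0] * num_clusters
    for counts, c in zip(request_counts, labels):
        sizes[c] += 1
        for f, v in enumerate(counts):
            sums[c][f] += v
    return [[v / max(sizes[c], 1) for v in row] for c, row in enumerate(sums)]

def fill_caches(cluster_pop, labels, cache_size):
    """Each BS caches the cache_size contents most popular in its own cluster."""
    caches = []
    for c in labels:
        ranked = sorted(range(len(cluster_pop[c])),
                        key=lambda f: cluster_pop[c][f], reverse=True)
        caches.append(set(ranked[:cache_size]))
    return caches
```

Averaging within a cluster smooths out per-BS noise in the observed demand, which is the usual motivation for clustering BSs before estimating popularity.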


2019 ◽  
Vol 214 ◽  
pp. 04056
Author(s):  
Alessandra Doria ◽  
Gianpaolo Carlino ◽  
Alessandro De Salvo ◽  
Bernardino Spisso ◽  
Elisabetta Vilucchi

The main goal of this work, in the context of WLCG, is to test a storage setup in which the storage areas are geographically distributed and the system provides some pools that behave as data caches. Users can find the data needed for their analysis in a local cache and process them locally. We first demonstrate that the distributed setup for a DPM storage is almost transparent to users in terms of performance and functionality. We then implement a mechanism to fill the storage cache with data registered in the Rucio data management system, and test it by running a physics analysis that reads its input data from the cache. We thus demonstrate that such a system can be useful for diskless sites that have only a local cache, making it possible to optimize the distribution and analysis of experimental data.


2018 ◽  
Vol 115 (32) ◽  
pp. 8099-8103 ◽  
Author(s):  
Yossi Azar ◽  
Eric Horvitz ◽  
Eyal Lubetzky ◽  
Yuval Peres ◽  
Dafna Shahaf

The problem of maintaining a local cache of n constantly changing pages arises in multiple mechanisms such as web crawlers and proxy servers. In these, the resources for polling pages for possible updates are typically limited. The goal is to devise a polling and fetching policy that maximizes the utility of served pages that are up to date. Cho and Garcia-Molina [(2003) ACM Trans Database Syst 28:390–426] formulated this as an optimization problem, which can be solved numerically for small values of n, but appears intractable in general. Here, we show that the optimal randomized policy can be found exactly in O(n log n) operations. Moreover, using the optimal probabilities to define in linear time a deterministic schedule yields a tractable policy that in experiments attains 99% of the optimum.
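The final step, turning polling probabilities into a deterministic schedule, can be illustrated with a generic stride-scheduling smoother (an assumption for illustration, not the paper's construction): at each step every page earns credit equal to its probability, and the page with the most credit is polled.

```python
def deterministic_schedule(probs, length):
    """Turn polling probabilities into a deterministic schedule by stride
    scheduling: each step, every page accrues credit equal to its
    probability, the page with the most credit is polled, and that page's
    credit drops by 1. Ties go to the lowest-index page."""
    credit = [0.0] * len(probs)
    schedule = []
    for _ in range(length):
        credit = [c + p for c, p in zip(credit, probs)]
        page = max(range(len(probs)), key=lambda i: credit[i])
        credit[page] -= 1.0
        schedule.append(page)
    return schedule
```

Over a long horizon, each page i is polled in a fraction of steps close to its probability p_i, which is the property a derandomized schedule needs to preserve the randomized policy's utility.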


Fog Computing ◽  
2018 ◽  
pp. 284-304
Author(s):  
Marat Zhanikeev

Many years of research on Content Delivery Networks (CDNs) has produced a number of effective methods for caching content replicas and forwarding requests. Recently, however, CDNs have aggressively started migrating to clouds. Clouds present a new kind of distribution environment, as each location can support multiple caching options varying in the level of persistence of stored content. A subclass of clouds located at the network edge is referred to as fog clouds. Fog clouds allow CDNs to offload popular content to the network edge, closer to end users. However, because fog clouds are extremely heterogeneous and vary widely in network and caching performance, traditional caching technology is no longer applicable. This paper proposes a multi-level caching technology specific to fog clouds. To deal with the heterogeneity problem while avoiding centralized control, this paper proposes a function that allows CDN services to discover local caching facilities dynamically, at runtime. Using a combination of synthetic models and a real measurement dataset, this paper analyzes the efficiency of offloading both at the local level of individual fog locations and at the global level of the entire CDN infrastructure. Local analysis shows that the new method can reduce inter-cloud traffic by a factor of 16 to 18 while retaining less than 30% of total content in a local cache. Global analysis further shows that, based on existing measurement datasets, centralized optimization is preferable to distributed coordination among services.
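A toy model of the local-offload effect: a fog location that keeps a small LRU cache of requested content avoids an inter-cloud fetch on every hit. The LRU policy and names are assumptions for illustration; the paper's multi-level, persistence-aware caching is more involved.

```python
from collections import OrderedDict

def serve_locally(requests, capacity):
    """Count inter-cloud fetches when a fog location keeps an LRU cache
    of requested content items. Each miss triggers one remote fetch and
    caches the item, evicting the least recently used item if full."""
    cache = OrderedDict()
    remote_fetches = 0
    for item in requests:
        if item in cache:
            cache.move_to_end(item)          # refresh recency on a hit
        else:
            remote_fetches += 1              # inter-cloud fetch on a miss
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)    # evict least recently used
    return remote_fetches
```

The ratio of total requests to remote fetches is the traffic-reduction factor for that trace; with skewed request popularity, a cache holding a small fraction of the catalog can cut inter-cloud traffic substantially, as the paper's local analysis quantifies.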



