cache invalidation
Recently Published Documents


TOTAL DOCUMENTS: 105 (FIVE YEARS: 8)

H-INDEX: 13 (FIVE YEARS: 1)

2021 ◽  
Vol 9 (4) ◽  
pp. 0-0

In location-aware services, past mobile-device cache invalidation-replacement policies are ineffective when the client's travel route varies rapidly, and they also incur high storage overhead. These limitations of past policies motivate this research. The paper describes models that address these challenges using two separate approaches for predicting the user's future movement path. In the first approach, the widely used Sequential Pattern Mining and Clustering (SPMC) technique pre-processes the user's movement trajectory to find frequently occurring patterns. In the second approach, the frequent patterns are fed into the Mobility Markov Chain and Matrix (MMCM) algorithm, which reduces the size of the candidate sets and thereby makes mining sequential patterns more efficient. Analytical results show a significant improvement in caching performance over previous caching policies.
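The second approach rests on a Markov model of user movement. As a minimal sketch (not the paper's actual algorithm; the cell labels, class name, and API are illustrative assumptions), a first-order mobility Markov chain can be built from transition counts over visited cells and queried for the most likely next location:

```python
from collections import defaultdict

class MobilityMarkovChain:
    """First-order Markov model over visited cells (illustrative sketch;
    the cell labels and this API are assumptions, not the paper's notation)."""

    def __init__(self):
        # transitions[src][dst] = number of observed moves src -> dst
        self.transitions = defaultdict(lambda: defaultdict(int))

    def train(self, trajectory):
        # Count cell-to-cell transitions from an observed movement trajectory.
        for src, dst in zip(trajectory, trajectory[1:]):
            self.transitions[src][dst] += 1

    def predict_next(self, current_cell):
        # Return the most frequently observed successor cell, if any.
        successors = self.transitions.get(current_cell)
        if not successors:
            return None
        return max(successors, key=successors.get)

mmc = MobilityMarkovChain()
mmc.train(["A", "B", "C", "B", "C", "D"])
print(mmc.predict_next("B"))  # "C": the most frequent successor of "B"
```

A cache manager could prefetch data for the predicted cell and drop entries tied to cells the model rates as unlikely, which is the kind of prediction-driven invalidation the abstract describes.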


2021 ◽  
Author(s):  
Srimanchari P ◽  
Anandharaj G

Abstract Caching is a well-established technique for improving the efficiency of data access. This paper introduces a Hybrid and Adaptive Caching (HAC) approach that caches data items according to their varying sizes, with Time-to-Live (TTL)-based invalidation, in a mobile computing environment. Mobile nodes establish single-hop communication with the base station and ad-hoc peer-to-peer communication with neighboring nodes to access data items. The proposed work adjusts the caching functionality according to the size of the data item and stores cached items in two separate storage areas: the cache of each node is split into a Temporary Buffer (TB) and a Permanent Buffer (PB) to improve data-access efficiency. The approach rests on the observation that small data items (e.g. stock quotes) are updated on short TTLs, whereas large data items (e.g. video) are updated only on long TTLs. The work also proposes adaptive cache replacement and cache invalidation techniques to address bandwidth utilization and data availability. In the replacement technique, a cached item is replaced based on its size and its TTL value. A timestamp-based cache invalidation strategy, in which cached data is validated against the update history of the data items, is also introduced. Because the threshold values have a strong impact on system performance, they are fine-tuned so that performance is not degraded. The proposed approaches significantly improve query latency and cache hit ratio and use the broadcast bandwidth efficiently. Simulation results show that the proposed work outperforms existing caching techniques.
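The core HAC idea, splitting the cache into a Temporary Buffer for small short-TTL items and a Permanent Buffer for large long-TTL items, with expiry-based invalidation on lookup, can be sketched as follows. This is a minimal illustration under assumed parameters (the size threshold, buffer layout, and method names are not from the paper):

```python
import time

class HybridCache:
    """Sketch of a size-aware two-buffer cache with TTL invalidation,
    loosely following the HAC idea; the size threshold and dict-based
    buffers are illustrative assumptions."""

    def __init__(self, size_threshold=1024):
        self.size_threshold = size_threshold
        self.temp_buffer = {}  # small, short-TTL items (e.g. stock quotes)
        self.perm_buffer = {}  # large, long-TTL items (e.g. video segments)

    def put(self, key, value, size, ttl):
        # Route the item to a buffer by size; remember its expiry time.
        buf = self.temp_buffer if size <= self.size_threshold else self.perm_buffer
        buf[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        # Search both buffers; invalidate stale entries on access.
        for buf in (self.temp_buffer, self.perm_buffer):
            if key in buf:
                value, expires = buf[key]
                if time.monotonic() < expires:
                    return value
                del buf[key]  # TTL expired: drop the stale entry
        return None
```

A replacement policy in this setting would evict from the buffer whose size class matches the incoming item, preferring entries closest to expiry, which mirrors the size-and-TTL-driven replacement the abstract describes.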


Author(s):  
Quan Zheng ◽  
Tao Yang ◽  
Yuanzhi Kan ◽  
Xiaobin Tan ◽  
Jian Yang ◽  
...  
Keyword(s):  

The inode subsystem is part of the WAFL (Write Anywhere File Layout) file system. The inode cache is a dynamic subsystem sized as a percentage of available memory; it grows and shrinks with the workload and dataset. A study of customer-reported issues found that deploying workloads and datasets at the scale customers typically run, and exercising the inode cache for the full duration of a test, is challenging, since quality-assurance testing usually covers multiple subsystems at once. Inode cache behavior in steady state differs from its behavior under performance-disruptive workflows such as volume offline, volume online, volume migration, and backup/vault use cases. Observations on internal test systems show that inode-cache-disruptive workflows are exercised only at certain stages rather than repeatedly throughout a test, and that it is hard to determine which volume is experiencing performance issues due to inode cache invalidation, shrinking, or rewarming. This paper exercises the performance behavior of the inode subsystem the way customers do and monitors and models the subsystem using automation. It considers the key attributes and typical operations that affect inode cache behavior, along with the counter statistics that should be monitored to analyze its performance. Exercising inode cache operations requires continuous attention to how the cache is performing: targeted workflows for inode cache population, invalidation, and shrink operations are repeated at constant intervals to model the behavior of the inode subsystem.
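The monitoring loop the abstract describes, sampling cache counters at constant intervals and looking at per-interval deltas to spot invalidation or shrink activity, can be sketched generically. The counter source and names here are hypothetical stand-ins, not WAFL's real statistics interface:

```python
import time

def sample_counters(read_counters, interval, samples):
    """Poll a counter source at fixed intervals and return per-interval
    deltas (illustrative sketch; `read_counters` is a hypothetical callable
    returning a dict of cumulative counters, not a real WAFL API)."""
    history = []
    prev = read_counters()
    for _ in range(samples):
        time.sleep(interval)
        cur = read_counters()
        # Delta per counter over this interval; a spike in an
        # invalidation/shrink counter flags a disruptive workflow.
        history.append({k: cur[k] - prev.get(k, 0) for k in cur})
        prev = cur
    return history
```

In an automated test harness, such a sampler would run alongside the repeated volume offline/online and migration workflows, so the modeled deltas line up with each disruptive operation.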


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 80074-80085 ◽  
Author(s):  
Yuanzhi Kan ◽  
Quan Zheng ◽  
Jian Yang ◽  
Xiaobin Tan

Internet-based vehicular ad-hoc networks (iVANETs) are a special case of conventional VANETs: they combine the wired Internet with a vehicular ad-hoc network to support a new generation of ubiquitous computing. Internet connectivity typically serves vehicle-to-infrastructure (V2I) communication, while the ad-hoc network handles vehicle-to-vehicle (V2V) communication. Latency is one of the main concerns in a VANET. By minimizing the distance between the data source and the remote vehicle through an improved caching technique and a redefined cache-lookup mechanism, latency in an iVANET environment can be reduced significantly. This paper studies and analyzes various cache invalidation schemes. Exploring caching schemes that can be hybridized or combined, it proposes an algorithm, together with a redefined service mechanism for cache lookup and invalidation, that aims for low latency with fewer negative acknowledgements (NACKs) in the network. The paper introduces a refined algorithm for guaranteed delivery of the queried data and for efficiently invalidating cache contents at different levels of the hierarchy. The proposed work is expected to improve network performance, minimizing cost and bandwidth utilization during cache invalidation, and hence to improve quality of service (QoS).
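The hierarchical lookup with guaranteed delivery that the abstract outlines, checking the local vehicle cache, then neighbors over V2V, then infrastructure over V2I, and finally the origin so a NACK is never needed, can be sketched as follows. The level ordering, dict-based caches, and function names are illustrative assumptions, not the paper's algorithm:

```python
def lookup(key, cache_levels, fetch_from_source):
    """Hierarchical cache lookup sketch for an iVANET-style chain:
    local vehicle cache -> neighbor (V2V) -> roadside unit (V2I).
    Falls back to the origin server so a query never ends in a NACK.
    (Illustrative sketch; `cache_levels` is an ordered list of dicts.)"""
    for level, cache in enumerate(cache_levels):
        if key in cache:
            return cache[key], level      # cache hit at this level
    value = fetch_from_source(key)        # guaranteed-delivery fallback
    if cache_levels:
        cache_levels[0][key] = value      # populate the local cache
    return value, len(cache_levels)
```

Timestamp- or TTL-based invalidation would slot into the hit path (validating each entry before returning it), so stale data is dropped at whichever level of the hierarchy it is found.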

