AUGMENTED FIFO CACHE REPLACEMENT POLICIES FOR LOW-POWER EMBEDDED PROCESSORS

2009 ◽  
Vol 18 (06) ◽  
pp. 1081-1092 ◽  
Author(s):  
SANGYEUN CHO ◽  
LORY AL MOAKAR

This paper explores a family of augmented FIFO replacement policies for highly set-associative caches that are common in low-power embedded processors. In such processors, the implementation cost and complexity of the replacement policy are as important as the cache hit rate. By exploiting cache hit way information between two replacements, the proposed replacement schemes reduce cache misses by 1% to 18% on average, depending on the cache configuration, compared with the conventional FIFO policy. The proposed schemes come at a small implementation cost of additional state bits and control logic. The reduction in cache misses directly translates into data access energy savings of 1% to 15% on average, depending on the cache configuration. Our work suggests that there is room for improving the popular FIFO policy at a small cost.
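
The abstract does not spell out the exact augmentation, but one plausible scheme along these lines keeps a per-way "hit" bit that is set on a cache hit and consulted at replacement time, so that ways hit since the last replacement are skipped. The sketch below is a minimal illustration of that idea, not the authors' exact policy; the class and method names are placeholders.

```python
class AugmentedFIFOSet:
    """One cache set using FIFO augmented with per-way hit bits.

    On a hit, the way's hit bit is set. On a replacement, the FIFO
    pointer skips ways whose hit bit is set (clearing the bit as it
    passes), so lines hit since the last replacement get a second
    chance. This is only an illustrative sketch of one way to exploit
    hit-way information, not the paper's exact scheme.
    """

    def __init__(self, num_ways):
        self.tags = [None] * num_ways       # tag stored in each way
        self.hit_bit = [False] * num_ways   # set when the way is hit
        self.fifo_ptr = 0                   # next candidate victim way

    def access(self, tag):
        """Return True on a hit; otherwise install the tag and return False."""
        for way, stored in enumerate(self.tags):
            if stored == tag:
                self.hit_bit[way] = True    # remember this way was hit
                return True
        self._replace(tag)
        return False

    def _replace(self, tag):
        # Skip ways that were hit since the last replacement,
        # clearing their hit bits so they are not protected forever.
        while self.hit_bit[self.fifo_ptr]:
            self.hit_bit[self.fifo_ptr] = False
            self.fifo_ptr = (self.fifo_ptr + 1) % len(self.tags)
        self.tags[self.fifo_ptr] = tag
        self.fifo_ptr = (self.fifo_ptr + 1) % len(self.tags)
```

The extra state is one bit per way plus unchanged FIFO pointers, which is consistent with the paper's point that the improvement costs only a few additional state bits and some control logic.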

Author(s):  
Mary Magdalene Jane.F ◽  
R. Nadarajan ◽  
Maytham Safar

Data caching in mobile clients is an important technique to enhance data availability and improve data access time. Due to cache size limitations, cache replacement policies are used to find a suitable subset of items for eviction from the cache. In this paper, the authors study the issues of cache replacement for location-dependent data under a geometric location model and propose a new cache replacement policy, RAAR (Re-entry probability, Area of valid scope, Age, Rate of access), that takes both spatial and temporal parameters into account. Mobile queries experience a popularity drift in which an item loses its popularity after the user exhausts the corresponding service, so once-popular documents quickly become cold (small active sets). Experimental evaluations using synthetic datasets for regular and small active sets show that this replacement policy is effective in improving system performance in terms of the cache hit ratio of mobile clients.
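
The abstract names the four RAAR factors but not how they are combined. The sketch below assumes, purely for illustration, a score in which items with a high re-entry probability, a large valid scope, a recent access, and a high access rate are kept longest; the field names and the combining formula are hypothetical.

```python
from dataclasses import dataclass
import time

@dataclass
class CachedItem:
    data: object
    reentry_prob: float   # probability the client re-enters the item's valid scope
    valid_area: float     # area of the item's valid scope (geometric location model)
    last_access: float    # timestamp of the most recent access
    access_rate: float    # observed accesses per unit time

def raar_score(item, now=None):
    """Illustrative RAAR-style eviction score (lower = evict first).

    The actual combination used by RAAR is not given in the abstract;
    this sketch simply rewards items that are likely to be re-entered,
    cover a large valid scope, were accessed recently, and are accessed
    often.
    """
    now = time.time() if now is None else now
    age = now - item.last_access
    return (item.reentry_prob * item.valid_area * item.access_rate) / (1.0 + age)

def choose_victim(cache):
    """Pick the key whose CachedItem has the lowest RAAR-style score."""
    return min(cache, key=lambda k: raar_score(cache[k]))
```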


Author(s):  
Prashant Kumar ◽  
Naveen Chauhan ◽  
LK Awasthi ◽  
Narottam Chand

Mobile Ad hoc Networks (MANETs) are autonomously structured, multi-hop wireless networks in which nodes communicate in a peer-to-peer fashion without the aid of any infrastructure. Because the nodes are mobile, the network topology is dynamic, and this dynamic, multi-hop environment makes data availability in MANETs low. Caching frequently accessed data in ad hoc networks is a promising technique to improve data access performance and availability. When caching new data items, the choice of which item to evict is critical, because in MANETs data is stored not only on behalf of the caching node but also in the interest of its vicinity. In this paper the authors present a new cache replacement policy for MANETs based on a multi-parameter value called SAT. The proposed scheme is simulated in OMNeT++, and the results show that the proposed replacement policy helps improve data availability in the network.
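
The abstract does not define the components of the SAT value, so the sketch below only illustrates the general idea it describes: scoring a cached item by both the caching node's own interest and the interest of its vicinity, then evicting the lowest-scoring item. Every parameter and weight here is a placeholder, not the authors' formula.

```python
def sat_value(local_access_freq, neighbor_requests, ttl_remaining,
              w_local=1.0, w_neigh=1.0, w_ttl=0.5):
    """Hypothetical multi-parameter value for a cached item (higher = keep).

    Components and weights are illustrative assumptions only.
    """
    return (w_local * local_access_freq     # how often this node uses the item
            + w_neigh * neighbor_requests   # how often nearby nodes request it
            + w_ttl * ttl_remaining)        # how long the item stays valid

def select_victim(cache):
    """Evict the item with the lowest value; `cache` maps an item id to a
    dict holding the three parameters above."""
    return min(cache, key=lambda item_id: sat_value(**cache[item_id]))
```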


Author(s):  
Jianliang Xu ◽  
Haibo Hu ◽  
Xueyan Tang ◽  
Baihua Zheng

This chapter introduces advanced client-side data-caching techniques to enhance the performance of mobile data access. The authors address three mobile caching issues. The first is the necessity of a cache replacement policy for realistic wireless data-broadcasting services. The authors present the Min-SAUD policy, which takes into account the cost of ensuring cache consistency before each cached item is used. Next, the authors discuss the caching issues for an emerging mobile data application, that is, location-dependent information services (LDISs). In particular, they consider data inconsistency caused by client movements and describe several location-dependent cache invalidation schemes. Then, as the spatial property of LDISs also brings new challenges for cache replacement policies, the authors present two novel cache replacement policies, called PA and PAID, for location-dependent data.
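
In the literature on location-dependent cache replacement, PA is usually described as scoring an item by its access probability times the area of its valid scope, and PAID as additionally dividing by the distance from the client's current position to that scope, so nearby, large-scope, frequently accessed items are retained. The sketch below renders those cost functions under that reading; the probability estimator and the distance measure are assumptions, not the chapter's exact definitions.

```python
import math

def pa_cost(access_prob, valid_area):
    """PA-style cost: access probability x area of valid scope (evict the lowest)."""
    return access_prob * valid_area

def paid_cost(access_prob, valid_area, client_pos, scope_center):
    """PAID-style cost: the PA cost further divided by the distance from the
    client to the item's valid scope, so far-away data is evicted first."""
    dx = client_pos[0] - scope_center[0]
    dy = client_pos[1] - scope_center[1]
    distance = math.hypot(dx, dy)
    return access_prob * valid_area / max(distance, 1e-9)

def pick_victim(cache, client_pos):
    """Evict the cached item with the smallest PAID-style cost; `cache` maps a
    key to a dict with 'prob', 'area', and 'center' fields (illustrative)."""
    return min(
        cache,
        key=lambda k: paid_cost(cache[k]["prob"], cache[k]["area"],
                                client_pos, cache[k]["center"]),
    )
```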


2018 ◽  
Vol 175 ◽  
pp. 04028
Author(s):  
Xinjiang Huang ◽  
Chenggang Liu ◽  
Yuyang Miao

Recommissioning was performed for a research and development (R&D) center located in Suzhou. Hardware problems in the Heating, Ventilation and Air Conditioning (HVAC) system and software problems in the Electrical Monitoring and Control System (EMCS) were found. Recommissioning measures were generated, and the control logic in the EMCS was optimized and improved. These measures achieved savings of about 121 kW in chilled water consumption, 181 kW in hot water consumption, and 1450 kWh per day in electricity consumption.


2021 ◽  
Vol 2 (3) ◽  
pp. 1-24
Author(s):  
Chih-Kai Huang ◽  
Shan-Hsiang Shen

The next-generation 5G cellular networks are designed to support internet of things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or containers. Moreover, edge clouds (which are closer to end users) are leveraged to reduce end-to-end latency, especially for IoT applications that require short response times. However, computational resources are limited in edge clouds. To minimize overall service latency, it is crucial to determine carefully which services should be provided in edge clouds so that more mobile or IoT devices can be served locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize cache hit rates. Our evaluations use real log files from Google to form two datasets to evaluate the performance. The proposed cache replacement policy is compared with other policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that cache hit rates are improved by 39% on average, and the average latency of our cache replacement policy decreases by 41% and 38% in the two datasets. This indicates that our approach is superior to existing cache policies and is more suitable for multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and direct the network traffic. We also evaluate the cost of cloning a service to an edge cloud; the cloning cost of various real applications is studied experimentally under the presented framework in different environments.
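
GDSF, one of the baselines named above, assigns each cached object a priority combining an aging value, access frequency, retrieval cost, and size, and evicts the lowest-priority object. The sketch below is a generic GDSF baseline, not S-Cache's own policy; the cost term is simply fixed at 1 for illustration.

```python
class GDSFCache:
    """Generic Greedy-Dual-Size-Frequency cache (illustrative baseline).

    priority(obj) = L + freq(obj) * cost(obj) / size(obj), where L is an
    inflation value set to the priority of the last evicted object, so
    long-idle objects gradually lose their advantage.
    """

    def __init__(self, capacity):
        self.capacity = capacity          # total size budget
        self.used = 0
        self.L = 0.0                      # inflation ("clock") value
        self.items = {}                   # key -> (size, freq, priority)

    def _priority(self, freq, size, cost=1.0):
        return self.L + freq * cost / size

    def access(self, key, size):
        """Return True on a hit; on a miss, evict until the object fits."""
        if key in self.items:
            s, freq, _ = self.items[key]
            freq += 1
            self.items[key] = (s, freq, self._priority(freq, s))
            return True
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=lambda k: self.items[k][2])
            self.L = self.items[victim][2]          # advance the clock
            self.used -= self.items[victim][0]
            del self.items[victim]
        if size <= self.capacity:
            self.items[key] = (size, 1, self._priority(1, size))
            self.used += size
        return False
```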

