Machine Learning-Based Cache Replacement Policies: A Survey

Author(s): Pratheeksha P, Revathi S A

Despite extensive developments in improving cache hit rates, designing an optimal cache replacement policy that mimics Belady’s algorithm remains a challenging task. Existing standard static replacement policies do not adapt to the dynamic nature of memory access patterns, and the diversity of computer programs only exacerbates the problem. Several factors affect the design of a replacement policy, such as hardware upgrades, memory overheads, memory access patterns, and model latency. Combining a fundamental mechanism like cache replacement with advanced machine learning algorithms yields surprising results and drives development towards cost-effective solutions. In this paper, we review some of the machine learning-based cache replacement policies that outperform static heuristics.
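
For reference, below is a minimal Python sketch of Belady's optimal (offline) replacement policy that these learning-based policies try to approximate; the trace and cache capacity are illustrative, and a real policy cannot look at the future trace the way this simulation does.

```python
def belady_victim(cache, future_accesses):
    """Pick the cached block whose next use lies furthest in the future."""
    def next_use(block):
        try:
            return future_accesses.index(block)
        except ValueError:
            return float("inf")  # never used again: the ideal eviction victim
    return max(cache, key=next_use)

def simulate(trace, capacity):
    """Replay an access trace under Belady's policy and return the hit rate."""
    cache, hits = set(), 0
    for i, block in enumerate(trace):
        if block in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(belady_victim(cache, trace[i + 1:]))
            cache.add(block)
    return hits / len(trace)

# Illustrative trace: the offline-optimal hit rate for a 2-entry cache.
print(simulate(["a", "b", "c", "a", "d", "b", "a"], capacity=2))
```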

2021, Vol 11 (3), pp. 250-255
Author(s): Yinyin Wang, Yuwang Yang, Qingguang Wang

An efficient intelligent cache replacement policy suitable for picture archiving and communication systems (PACS) was proposed in this work. By combining the support vector machine (SVM) with the classic least recently used (LRU) cache replacement policy, we created a new intelligent cache replacement policy called SVM-LRU. Unlike conventional cache replacement policies, which depend solely on the intrinsic properties of the cached items, our PACS-oriented SVM-LRU algorithm identifies the variables that affect file access probabilities by mining medical data. The SVM algorithm is then used to model the future access probabilities of the cached items, thus improving cache performance. Finally, a simulation experiment was performed using the trace-driven simulation method. It was shown that the SVM-LRU cache algorithm significantly improves PACS cache performance when compared to conventional cache replacement policies such as LRU, LFU, SIZE and GDS.
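
The abstract does not reproduce the implementation, so the following is a hypothetical Python sketch of the SVM-LRU idea under stated assumptions: a pre-trained scikit-learn SVC predicts whether a cached file will be re-accessed (the feature function and binary labels are assumptions), and files predicted unlikely to be reused are preferred for eviction, with plain LRU as the fallback.

```python
from collections import OrderedDict
from sklearn.svm import SVC

class SVMLRUCache:
    """Illustrative SVM-assisted LRU cache (not the authors' exact design)."""

    def __init__(self, capacity, model: SVC, feature_fn):
        self.capacity = capacity
        self.model = model            # pre-trained SVM: 1 = likely re-accessed
        self.feature_fn = feature_fn  # maps a cache key to its feature vector
        self.store = OrderedDict()    # maintained in LRU-to-MRU order

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)   # refresh recency on a hit
            return self.store[key]
        return None                       # cache miss

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = value

    def _evict(self):
        # Prefer the least-recently-used file that the SVM predicts will
        # not be accessed again; otherwise fall back to plain LRU.
        for key in self.store:            # iterates from LRU to MRU
            if self.model.predict([self.feature_fn(key)])[0] == 0:
                del self.store[key]
                return
        self.store.popitem(last=False)    # plain LRU eviction
```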


2021, Vol 2 (3), pp. 1-24
Author(s): Chih-Kai Huang, Shan-Hsiang Shen

Next-generation 5G cellular networks are designed to support internet of things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or containers. Moreover, edge clouds, which are closer to end users, are leveraged to reduce end-to-end latency, especially for IoT applications that require short response times. However, computational resources are limited in edge clouds. To minimize overall service latency, it is crucial to determine carefully which services should be provided in edge clouds so that more mobile or IoT devices are served locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize cache hit rates. Our evaluation uses real log files from Google to form two datasets for measuring performance. The proposed cache replacement policy is compared with other policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that cache hit rates improve by 39% on average, and the average latency of our cache replacement policy decreases by 41% and 38% in the two datasets, respectively. This indicates that our approach is superior to existing cache policies and more suitable for multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and direct the network traffic. We also evaluate the cost of cloning a service to an edge cloud; the cloning cost of various real applications is studied experimentally under the presented framework in different environments.
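
S-Cache's own replacement policy is not detailed in the abstract, so as background the sketch below illustrates one of the baselines it is compared against, greedy-dual-size-frequency (GDSF): each cached service carries the priority L + frequency * cost / size, and the global clock L is inflated to the priority of every evicted entry. Service sizes and fetch costs here are assumed inputs.

```python
class GDSFCache:
    """Minimal greedy-dual-size-frequency (GDSF) cache, one of the baseline
    policies S-Cache is evaluated against (sizes and costs are illustrative)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.clock = 0.0          # inflation value L
        self.entries = {}         # key -> (size, cost, freq, priority)

    def _priority(self, size, cost, freq):
        return self.clock + freq * cost / size

    def access(self, key, size, cost=1.0):
        """Record an access; returns True on a hit, False on a miss."""
        if key in self.entries:
            s, c, f, _ = self.entries[key]
            f += 1
            self.entries[key] = (s, c, f, self._priority(s, c, f))
            return True
        # Miss: evict the lowest-priority services until the new one fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda k: self.entries[k][3])
            v_size, _, _, v_prio = self.entries.pop(victim)
            self.clock = v_prio   # GDSF clock inflation
            self.used -= v_size
        if size <= self.capacity:
            self.entries[key] = (size, cost, 1, self._priority(size, cost, 1))
            self.used += size
        return False
```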


Author(s): Pratyush Kaware

In this paper, a cost-effective sensor has been implemented to read finger bend signals by attaching the sensor to a finger, so as to classify them based on the degree of bend as well as the joint about which the finger is bent. This was done by testing various machine learning algorithms to find the most accurate and consistent classifier. Finally, we found that the support vector machine was the algorithm best suited to classify our data, and using it we were able to predict the live state of a finger, i.e., the degree of bend and the joints involved. The live voltage values from the sensor were transmitted using a NodeMCU microcontroller, converted to digital form, and uploaded to a database for analysis.
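
No code accompanies the abstract; the following is a hypothetical sketch of the described classification step, training a scikit-learn SVM on labelled voltage readings. The file names, feature layout, and label scheme are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed data format: each sample is a short window of flex-sensor voltages,
# labelled with a (joint, bend-level) class such as "knuckle_30" or "middle_60".
X = np.load("finger_voltage_windows.npy")   # shape: (n_samples, n_readings)
y = np.load("finger_labels.npy")            # shape: (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Live prediction on a new voltage window streamed from the NodeMCU.
new_window = X_test[:1]
print("predicted finger state:", clf.predict(new_window)[0])
```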


2018, Vol 15 (2), pp. 20171099-20171099
Author(s): Duk-Jun Bang, Min-Kwan Kee, Hong-Yeol Lim, Gi-Ho Park

Energies, 2021, Vol 14 (22), pp. 7609
Author(s): Muhammad Asif Ali Rehmani, Saad Aslam, Shafiqur Rahman Tito, Snjezana Soltic, Pieter Nieuwoudt, ...

Next-generation power systems aim at optimizing the energy consumption of household appliances by utilising computationally intelligent techniques, referred to as load monitoring. Non-intrusive load monitoring (NILM) is considered to be one of the most cost-effective methods for load classification. The objective is to segregate the energy consumption of individual appliances from their aggregated energy consumption. The extracted energy consumption of individual devices can then be used to achieve demand-side management and energy saving through optimal load management strategies. Machine learning (ML) has been widely used to solve many complex problems, including NILM. With the availability of energy consumption datasets, various ML algorithms have been effectively trained and tested. However, most current methodologies for NILM employ neural networks only for a limited set of operational output levels of appliances and their combinations (i.e., only for a small number of classes). In contrast, this work depicts a more practical scenario in which over a hundred different combinations were considered and labelled for the training and testing of various machine learning algorithms. Moreover, two novel concepts, thresholding/occurrence per million (OPM) and power windowing, were utilised, which significantly improved the performance of the trained algorithms. All the trained algorithms were thoroughly evaluated using various performance parameters. The results demonstrate the effectiveness of the thresholding and OPM concepts in classifying concurrently operating appliances using ML.
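
The abstract does not spell out the thresholding/OPM and power-windowing procedures, so the sketch below is one plausible reading under stated assumptions: the aggregate power signal is sliced into fixed-width windows used as features, appliance-combination labels rarer than an occurrence-per-million cutoff are dropped, and a random forest (a stand-in for the paper's unspecified classifiers) is trained on what remains. File names and parameter values are hypothetical.

```python
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def window_power(signal, labels, width=30):
    """Slice the aggregate power signal into fixed-width windows (an assumed form
    of 'power windowing'); each window keeps the label of its last sample."""
    n = len(signal) // width
    X = signal[: n * width].reshape(n, width)
    y = labels[width - 1 :: width][:n]
    return X, y

def opm_filter(X, y, min_opm=50):
    """Drop appliance combinations rarer than min_opm occurrences per million
    samples (an assumed form of the thresholding/OPM step)."""
    counts = Counter(y)
    keep = np.array([counts[label] * 1e6 / len(y) >= min_opm for label in y])
    return X[keep], y[keep]

# Hypothetical inputs: aggregate power readings and per-sample combination labels.
signal = np.load("aggregate_power.npy")
labels = np.load("appliance_combinations.npy")

X, y = window_power(signal, labels)
X, y = opm_filter(X, y)
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print("training accuracy:", clf.score(X, y))
```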

