Critical Scrutiny of Page Replacement Algorithms: FIFO, Optimal and LRU

Virtual memory plays an important role in the memory management of an operating system. A process, or a set of processes, may require more memory space than the main memory can provide. Virtual memory addresses this situation by treating a certain region of secondary memory as primary memory, i.e., main memory is virtually extended into secondary memory. When a process requires a page, primary memory is searched first. If the page is found, the process continues to execute; otherwise a page fault occurs, which is handled by a page replacement algorithm. Such an algorithm swaps a page out of main memory to secondary memory and replaces it with another page from secondary memory, and it should incur as few page faults as possible so that the I/O operations required for swapping pages in and out are kept to a minimum. Several page replacement algorithms have been formulated to increase the efficiency of this technique. In this paper, three page replacement algorithms, FIFO, Optimal and LRU, are discussed, their behavioural patterns are analysed systematically, and a comparative analysis of the algorithms is presented with appropriate diagrams.
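As a rough illustration of how the three policies differ, the following Python sketch (not taken from the paper; the reference string and frame count are arbitrary assumptions) counts page faults for FIFO, LRU and the Optimal policy on the same reference string:

    from collections import OrderedDict

    def fifo_faults(refs, frames):
        # Evict the page that has been resident the longest.
        memory, queue, faults = set(), [], 0
        for p in refs:
            if p not in memory:
                faults += 1
                if len(memory) == frames:
                    memory.discard(queue.pop(0))
                memory.add(p)
                queue.append(p)
        return faults

    def lru_faults(refs, frames):
        # Evict the least recently used page (OrderedDict keeps recency order).
        memory, faults = OrderedDict(), 0
        for p in refs:
            if p in memory:
                memory.move_to_end(p)
            else:
                faults += 1
                if len(memory) == frames:
                    memory.popitem(last=False)
                memory[p] = True
        return faults

    def optimal_faults(refs, frames):
        # Evict the page whose next use lies farthest in the future.
        memory, faults = set(), 0
        for i, p in enumerate(refs):
            if p not in memory:
                faults += 1
                if len(memory) == frames:
                    victim = max(memory, key=lambda q: refs.index(q, i + 1)
                                 if q in refs[i + 1:] else len(refs))
                    memory.discard(victim)
                memory.add(p)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # assumed reference string
    for name, fn in [("FIFO", fifo_faults), ("LRU", lru_faults), ("Optimal", optimal_faults)]:
        print(name, fn(refs, 3))

On any given reference string, Optimal gives the lower bound on faults, while FIFO and LRU differ according to how well recency predicts future use.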

Author(s):  
Gajanan Digambar Gaikwad

Abstract: The operating system offers a service known as memory management, which manages and supervises primary memory. It moves processes back and forth between disk and main memory during execution. The technique of provisionally moving a process from primary memory to the hard disk, so that memory becomes available for other processes, is known as swapping. Page replacement techniques are the methods by which the operating system decides which memory pages to swap out and write to disk whenever a page of main memory needs to be allocated. There are different policies for selecting the page to be swapped out when a page fault occurs in order to create space for the new page; these policies are called page replacement algorithms. In this paper, a strategy for identifying the refresh rate of the ‘Aging’ page replacement algorithm is presented and evaluated.
Keywords: Aging algorithm, page replacement algorithm, refresh rate, virtual memory management.
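To make the role of the refresh rate concrete, here is a minimal sketch of the standard Aging mechanism (not the paper's implementation; the 8-bit counter width and the refresh interval are assumptions): on every refresh tick, each page's counter is shifted right and the referenced bit is added at the most significant position, and the page with the smallest counter is the eviction candidate.

    # Minimal sketch of the Aging page replacement mechanism.
    # The 8-bit counter width and how often refresh() is called (the refresh rate)
    # are assumptions for illustration.
    COUNTER_BITS = 8

    class AgingPage:
        def __init__(self, number):
            self.number = number
            self.referenced = False      # set by the "hardware" on each access
            self.counter = 0             # software aging counter

    def refresh(pages):
        """Run once per refresh interval: age every counter."""
        for p in pages:
            p.counter = (p.counter >> 1) | (int(p.referenced) << (COUNTER_BITS - 1))
            p.referenced = False

    def choose_victim(pages):
        """Evict the page with the smallest aging counter."""
        return min(pages, key=lambda p: p.counter)

A long refresh interval makes the counters coarse, so many pages look equally old, while a very short interval adds overhead; choosing this interval well is exactly the question the paper addresses.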


1995, Vol. 05 (02), pp. 239-259
Author(s):
Su Hwan Kim, Seon Wook Kim, Tae Won Rhee

For data analysis, it is very important to combine data with similar attribute values into a categorically homogeneous subset, called a cluster; this technique is called clustering. Crisp clustering algorithms are generally weak against noise, because each datum must be assigned to exactly one cluster. To address this problem, the fuzzy c-means, fuzzy maximum likelihood estimation, and optimal fuzzy clustering algorithms have been proposed within fuzzy set theory. They, however, require a great deal of processing time because of exhaustive iteration over a large amount of data and their memberships. A large memory footprint in particular degrades performance in real-time processing applications, because too much time is spent swapping between main memory and secondary memory. To overcome these limitations, an extended fuzzy clustering algorithm based on an unsupervised optimal fuzzy clustering algorithm is proposed in this paper. The algorithm assigns a weight factor to each distinct datum according to its occurrence rate. The proposed extended fuzzy clustering algorithm also considers the degree of importance of each attribute, which determines the characteristics of the data. The worst case is when the whole data set has a uniform normal distribution, which means that all attributes are equally important. The proposed extended fuzzy clustering algorithm performs better than the unsupervised optimal fuzzy clustering algorithm in terms of memory space and execution time in most cases. For simulation, the proposed algorithm is applied to color image segmentation. Automatic target detection and multipeak detection are also considered as applications. These schemes can be applied to any other fuzzy clustering algorithm.
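The core idea of weighting each distinct datum by its occurrence rate can be read as a small modification of the usual fuzzy c-means updates. The sketch below is an illustrative interpretation of that idea only (not the paper's algorithm; the attribute-importance weighting is omitted):

    import numpy as np

    def weighted_fcm(data, counts, n_clusters, m=2.0, iters=100, eps=1e-6):
        """Fuzzy c-means over distinct data points, each weighted by its occurrence count."""
        x = np.asarray(data, dtype=float)       # shape (n_distinct, n_attributes)
        w = np.asarray(counts, dtype=float)     # occurrence rate of each distinct datum
        rng = np.random.default_rng(0)
        u = rng.random((len(x), n_clusters))
        u /= u.sum(axis=1, keepdims=True)       # memberships of each datum sum to 1
        for _ in range(iters):
            um = (u ** m) * w[:, None]          # occurrence weight enters every update
            centers = um.T @ x / um.sum(axis=0)[:, None]
            dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            new_u = 1.0 / (dist ** (2 / (m - 1)))
            new_u /= new_u.sum(axis=1, keepdims=True)
            if np.abs(new_u - u).max() < eps:
                u = new_u
                break
            u = new_u
        return centers, u

Because identical data points collapse into one weighted entry, the iteration runs over the distinct values only, which is where the memory and execution-time savings claimed in the abstract would come from.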


Author(s):
Pallab Banerjee, Biresh Kumar, Amarnath Singh, Shipra Sinha, Medha Sawan

Program codes are of variable length. When the size of the code exceeds that of primary memory, the concept of virtual memory comes into play. As the name suggests, virtual memory allows the use of primary memory to be stretched by using storage devices such as disks. Virtual memory can be implemented using the paging approach. The operating system allocates memory frames to each program while loading it into memory, and each program is divided into pages according to the frame size; keeping pages and frames equal in size improves the usability of memory. Since the process or program being executed is provided with only a certain number of memory frames, a swap-out technique is necessary for the execution of every page. This swap-out technique is termed page replacement. Many algorithms have been proposed to decide which page should be replaced in the frames when new pages arrive. In this paper, we propose a new page replacement technique based on reading and counting pages from secondary storage. Whenever a page fault is detected, the needed page is fetched from secondary storage; accessing the disk in this way is slow compared with retrieving the required page from primary storage. In the proposed technique, the page with the lowest occurrence count is replaced by the new page, and pages with the same count are replaced on the basis of the LRU page replacement algorithm. Because pages are retrieved from secondary storage in this manner, the probability of a page hit increases, and as a result the execution time of the processes decreases since the probability of a page miss decreases.
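Read as a least-frequently-used policy with an LRU tie-break, the replacement decision described above might look roughly like the following sketch (an interpretation of the abstract, not the authors' published code; the frame count and reference string are assumptions):

    def lfu_lru_replace(refs, frames=3):
        """Evict the page with the lowest reference count; break ties by least recent use."""
        memory = {}                      # page -> [reference_count, last_used_time]
        faults = 0
        for time, page in enumerate(refs):
            if page in memory:
                memory[page][0] += 1
                memory[page][1] = time
            else:
                faults += 1
                if len(memory) == frames:
                    # Lowest count first; among equal counts, the least recently used page.
                    victim = min(memory, key=lambda p: (memory[p][0], memory[p][1]))
                    del memory[victim]
                memory[page] = [1, time]
        return faults

    print(lfu_lru_replace([1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 7]))   # assumed reference string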


2016, Vol. 4 (1), pp. 61-71
Author(s):
Hirotaka Kawata, Gaku Nakagawa, Shuichi Oikawa

The performance of mobile devices such as smartphones and tablets has been improving rapidly in recent years. However, these improvements have come at a serious cost in power consumption, and achieving efficient power management for battery-equipped mobile devices is one of the greatest challenges. To address this problem, the authors focus on emerging non-volatile memory (NVM), which has been receiving increasing attention in recent years. Since its performance is comparable with that of DRAM, it is possible to replace main memory with NVM and thereby reduce power consumption. However, the price and capacity of NVM are problematic, so the authors provide a large memory space without performance degradation by combining NVM with other memory devices. In this study, they propose a design for non-volatile main memory systems that use DRAM as a swap space. This enables both high-performance and energy-efficient memory management through dynamic power management in NVM and DRAM.


2018, Vol. 7 (4.5), pp. 32
Author(s):
Govind Prasad Arya, Devendra Prasad, Sandeep Singh Rana

Computer programmers write program code of any length without keeping the available primary memory in mind. This is possible because of the concept of virtual memory: a program of any size can be executed even when primary memory is smaller than the program itself. Virtual memory can be implemented using the concept of paging. The operating system allocates a number of memory frames to each program while loading it into memory, and the program code is divided into pages of the same size as the frames. The sizes of pages and memory frames are kept equal for better utilization of memory. During execution, every process is allocated a limited number of memory frames, hence the need for page replacement. A number of page replacement techniques have been suggested by researchers. In this paper, we propose a modified page replacement technique based on block reading of pages from secondary storage. Disk access is very slow compared with access from primary memory, and whenever there is a page fault the required page must be retrieved from secondary storage, so numerous page faults increase the execution time of a process. In the proposed methodology, a number of pages equal to the number of allotted memory frames is read each time a page fault occurs, instead of a single page at a time. Fetching a block of pages from secondary storage increases the possibility of page hits and, as a result, improves the hit ratio of the processes.
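A rough sketch of the block-reading idea described above follows; this is our own illustrative reading of the abstract, not the authors' code, and read_page_from_disk, the FIFO-style eviction and the block layout are hypothetical:

    def handle_page_fault(faulting_page, frames, memory, read_page_from_disk):
        """On a fault, prefetch a block of consecutive pages equal to the frame allocation."""
        block = [faulting_page + i for i in range(frames)]   # block starting at the faulting page
        for page in block:
            if page in memory:
                continue
            if len(memory) == frames:
                memory.pop(next(iter(memory)))               # simple FIFO-style eviction (assumption)
            memory[page] = read_page_from_disk(page)
        return memory[faulting_page]

The intended benefit is that the cost of one disk seek is amortized over a whole block of pages, so subsequent references to nearby pages hit in memory instead of faulting again.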


2013, Vol. 10 (1), pp. 173-195
Author(s):
George Lagogiannis, Nikos Lorentzos, Alexander Sideridis

Indexing moving objects usually involves a great number of updates, caused by objects reporting their current positions. In order to keep the present and past positions of the objects in secondary memory, each update introduces an I/O, and this sometimes creates a bottleneck. In this paper we deal with the problem of minimizing the number of I/Os in such a way that queries concerning the present and past positions of the objects can be answered efficiently. In particular, we propose two new approaches that achieve an asymptotically optimal number of I/Os for performing the necessary updates. The approaches are based on the assumption that primary memory suffices for storing the current positions of the objects.


2021, Vol. 11 (18), pp. 8476
Author(s):
June Choi, Jaehyun Lee, Jik-Soo Kim, Jaehwan Lee

In this paper, we present several optimization strategies that can improve the overall performance of the distributed in-memory computing system, “Apache Spark”. Despite its distributed memory management capability for iterative jobs and intermediate data, Spark has a significant performance degradation problem when the available amount of main memory (DRAM, typically used for data caching) is limited. To address this problem, we leverage an SSD (solid-state drive) to supplement the lack of main memory bandwidth. Specifically, we present an effective optimization methodology for Apache Spark by collectively investigating the effects of changing the capacity fraction ratios of the shuffle and storage spaces in the “Spark JVM Heap Configuration” and applying different “RDD Caching Policies” (e.g., SSD-backed memory caching). Our extensive experimental results show that by utilizing the proposed optimization techniques, we can improve the overall performance by up to 42%.
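For context, the two knobs the abstract refers to correspond, in Spark's unified memory manager, to configuration properties along the following lines; the values, input path and SSD mount point below are illustrative assumptions, not the settings used in the paper:

    from pyspark import SparkConf, SparkContext, StorageLevel

    conf = (SparkConf()
            .setAppName("memory-tuning-sketch")
            # Fraction of the JVM heap used for execution (shuffle) plus storage (caching),
            # and the share of that region reserved for storage; values are assumptions.
            .set("spark.memory.fraction", "0.6")
            .set("spark.memory.storageFraction", "0.5")
            # Spill and shuffle files directed to an SSD-backed path (hypothetical mount point).
            .set("spark.local.dir", "/mnt/ssd/spark-tmp"))

    sc = SparkContext(conf=conf)

    rdd = sc.textFile("hdfs:///data/input")   # hypothetical input path
    # "SSD-backed memory caching": let cached partitions overflow to disk (the SSD above)
    # instead of being recomputed when DRAM runs short.
    rdd.persist(StorageLevel.MEMORY_AND_DISK)
    print(rdd.count())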


2016, pp. 607-623
Author(s):  
Hemant Kumar Mehta

This chapter presents a toolkit for the evaluation of resource management algorithms developed for Grid computing. The simulator, named EcoGrid, is devised to support a large number of resources or computing nodes and processes. Grid simulators generally represent each resource by a thread, which occupies a large amount of space on the thread stack in main memory. EcoGrid, however, models each node as an object instead of a thread. Since the memory space used by an object is much smaller than that used by a thread, EcoGrid is highly scalable compared with state-of-the-art simulators. EcoGrid is dynamically configurable and works with real as well as synthetic workloads. The simulator is bundled with a synthetic load generator that generates workloads using appropriate statistical distributions.
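The design point being made, modelling each node as a passive object driven by the simulator's event loop rather than by its own thread, can be sketched as follows (a generic illustration of the idea, not EcoGrid's actual classes):

    import heapq

    class NodeModel:
        """A simulated computing node as a plain object: no per-node thread or stack."""
        def __init__(self, node_id, speed):
            self.node_id = node_id
            self.speed = speed            # abstract processing rate

        def finish_time(self, now, job_size):
            return now + job_size / self.speed

    class Simulator:
        """Single-threaded event loop driving all node objects."""
        def __init__(self, nodes):
            self.nodes = nodes
            self.events = []              # (time, node_id, description)

        def submit(self, now, node, job_size):
            heapq.heappush(self.events, (node.finish_time(now, job_size), node.node_id, "job done"))

        def run(self):
            while self.events:
                time, node_id, what = heapq.heappop(self.events)
                print(f"t={time:.2f}: node {node_id} {what}")

    sim = Simulator([NodeModel(i, speed=1.0 + i) for i in range(3)])
    for node in sim.nodes:
        sim.submit(0.0, node, job_size=10.0)
    sim.run()

Because each node here is only a small object in the heap rather than a thread with its own stack, the number of simulated nodes can grow far beyond what a thread-per-resource design allows.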


Author(s):  
Hemant Kumar Mehta

This paper presents a toolkit for the evaluation of resource management algorithms developed for Grid computing. The simulator, named EcoGrid, is devised to support a large number of resources or computing nodes and processes. Grid simulators generally represent each resource by a thread, which occupies a large amount of space on the thread stack in main memory. EcoGrid, however, models each node as an object instead of a thread. Since the memory space used by an object is much smaller than that used by a thread, EcoGrid is highly scalable compared with state-of-the-art simulators. EcoGrid is dynamically configurable and works with real as well as synthetic workloads. The simulator is bundled with a synthetic load generator that generates workloads using appropriate statistical distributions.

