prefetching technique
Recently Published Documents


TOTAL DOCUMENTS: 22 (FIVE YEARS: 4)
H-INDEX: 5 (FIVE YEARS: 0)

2021 ◽ Vol 11 (2) ◽ pp. 35-39 ◽ Author(s): S. Selvam

This paper presents an initiative data prefetching scheme on the storage servers in distributed file systems for cloud computing. Information about client nodes is piggybacked onto the real client I/O requests and forwarded to the relevant storage server, where the server analyzes it before pushing data back to the client machine. Two prediction algorithms are then proposed to forecast future block access operations, directing what data should be fetched on storage servers in advance. Finally, the prefetched data is pushed from the storage server to the relevant client device. Through a series of evaluation experiments with a group of application benchmarks, we demonstrate that the proposed initiative prefetching technique can help distributed file systems in cloud environments achieve better I/O performance. In particular, resource-limited client machines in the cloud are not responsible for predicting I/O access operations, which contributes to better system performance on them.
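The abstract's server-side prediction step can be illustrated with a minimal sketch. The paper's actual prediction algorithms are not given here, so the sketch below assumes a simple stride detector as a stand-in: the server records each client's recent block accesses (the piggybacked information) and, when the accesses form a constant stride, prefetches the next expected blocks. The class and parameter names are illustrative, not from the paper.

```python
from collections import defaultdict, deque

class StridePrefetcher:
    """Server-side predictor sketch: infers a stride from each client's
    recent block accesses and suggests the next blocks to prefetch.
    A stand-in for the paper's (unspecified) prediction algorithms."""

    def __init__(self, history=4, depth=2):
        # per-client sliding window of recently accessed block numbers
        self.history = defaultdict(lambda: deque(maxlen=history))
        self.depth = depth  # how many blocks to prefetch ahead

    def record(self, client_id, block):
        """Called when a client's I/O request (with piggybacked info) arrives."""
        self.history[client_id].append(block)

    def predict(self, client_id):
        """Return block numbers to prefetch, or [] if no stable pattern."""
        h = list(self.history[client_id])
        if len(h) < 3:
            return []
        strides = [b - a for a, b in zip(h, h[1:])]
        if len(set(strides)) == 1 and strides[0] != 0:
            s = strides[0]
            return [h[-1] + s * i for i in range(1, self.depth + 1)]
        return []
```

For a client reading blocks 10, 12, 14, 16, the predictor suggests prefetching blocks 18 and 20; an irregular access breaks the pattern and suppresses prefetching, which keeps useless fetches off the network.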


2021 ◽ Vol 18 (3) ◽ pp. 1-22 ◽ Author(s): Ricardo Alves ◽ Stefanos Kaxiras ◽ David Black-Schaffer

Achieving low load-to-use latency with low energy and storage overheads is critical for performance. Existing techniques either prefetch into the pipeline (via address prediction and validation) or provide data reuse in the pipeline (via register sharing or L0 caches). These techniques provide a range of tradeoffs between latency, reuse, and overhead. In this work, we present a pipeline prefetching technique that achieves state-of-the-art performance and data reuse without additional data storage, data movement, or validation overheads by adding address tags to the register file. Our addition of register file tags allows us to forward (reuse) load data from the register file with no additional data movement, keep the data alive in the register file beyond the instruction’s lifetime to increase temporal reuse, and coalesce prefetch requests to achieve spatial reuse. Further, we show that we can use the existing memory order violation detection hardware to validate prefetches and data forwards without additional overhead. Our design achieves the performance of existing pipeline prefetching while also forwarding 32% of the loads from the register file (compared to 15% in state-of-the-art register sharing), delivering a 16% reduction in L1 dynamic energy (1.6% total processor energy), with an area overhead of less than 0.5%.
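The core idea of the abstract, forwarding load data from the register file by tagging registers with memory addresses, can be sketched in a toy model. This is not the authors' hardware design; it is a simplified software illustration (invented class and method names) of the tag-match-and-forward behavior: a load whose address matches a live register tag is served from the register file instead of the cache, and ALU writes clear the tag.

```python
class TaggedRegisterFile:
    """Toy model of a register file augmented with address tags.
    A load whose address matches a live tag is forwarded from the
    register file; otherwise it falls through to memory/cache."""

    def __init__(self, n_regs=32):
        self.values = [0] * n_regs
        self.tags = [None] * n_regs  # memory address each register holds, if any

    def write_load(self, reg, addr, value):
        self.values[reg] = value
        self.tags[reg] = addr  # tag stays live for later temporal reuse

    def write_alu(self, reg, value):
        self.values[reg] = value
        self.tags[reg] = None  # ALU result has no associated memory address

    def load(self, reg, addr, memory):
        """Perform a load; return where the data came from."""
        for r, tag in enumerate(self.tags):
            if tag == addr:
                # tag hit: reuse the value already in the register file,
                # no cache access or extra data movement needed
                self.write_load(reg, addr, self.values[r])
                return "forwarded"
        self.write_load(reg, addr, memory[addr])
        return "cache"
```

In the real design the tag check happens in hardware alongside rename, and the existing memory-order-violation machinery validates that a forwarded value was not stale; the toy model omits both.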


2020 ◽ Vol 171 ◽ pp. 1970-1978 ◽ Author(s): D.N. Shashidhara ◽ D.N. Chandrappa ◽ C. Puttamadappa

2017 ◽ Vol 77 (12) ◽ pp. 15913-15928 ◽ Author(s): Sivakumar Ganapathi ◽ Venkatachalam Varadharajan

2016 ◽ Vol 60 (3) ◽ pp. 444-456 ◽ Author(s): Prabavathy Balasundaram ◽ Chitra Babu ◽ Subha Devi M

Author(s): Kuttuva Rajendran Baskaran ◽ Chellan Kalaiarasan

Combining Web caching and Web pre-fetching improves bandwidth utilization, reduces the load on the origin server, and reduces the delay incurred in accessing information. Web pre-fetching fetches from the origin server those Web objects that are most likely to be used in the future; the fetched contents are stored in the cache. Web caching stores popular objects "closer" to the user so that they can be retrieved faster. In the literature, many interesting works have addressed Web caching and Web pre-fetching separately. In this work, a clustering technique is used for pre-fetching and an SVM-LRU technique for Web caching, and the performance is measured in terms of Hit Ratio (HR) and Byte Hit Ratio (BHR). With the help of real data, it is demonstrated that this approach is superior to combining clustering-based prefetching with the traditional LRU page replacement method for Web caching.
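The SVM-LRU idea described above can be sketched as an LRU cache whose eviction step consults a learned "will this object be reused?" predictor. The paper trains an SVM on access features; in the sketch below a pluggable scoring function stands in for the trained classifier, and all names (class, `predictor`, `fetch`) are illustrative assumptions, not the authors' API.

```python
from collections import OrderedDict

class SVMLRUCache:
    """Sketch of SVM-LRU: an LRU cache that, on eviction, picks among
    the least-recently-used candidates the one a reuse predictor
    scores lowest. The predictor stands in for a trained SVM."""

    def __init__(self, capacity, predictor):
        self.capacity = capacity
        self.predictor = predictor          # key -> reuse score in [0, 1]
        self.cache = OrderedDict()          # key -> object, LRU order first
        self.hits = self.requests = 0

    def get(self, key, fetch):
        """Return the object for key, fetching from origin on a miss."""
        self.requests += 1
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)     # refresh recency on a hit
            return self.cache[key]
        obj = fetch(key)                    # miss: go to the origin server
        if len(self.cache) >= self.capacity:
            # among the two least-recently-used entries, evict the one
            # the predictor considers least likely to be reused
            candidates = list(self.cache)[:2]
            victim = min(candidates, key=self.predictor)
            del self.cache[victim]
        self.cache[key] = obj
        return obj

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0
```

Plain LRU would evict the oldest entry unconditionally; here a popular-but-momentarily-idle object can survive eviction if the predictor scores it as likely to be reused, which is what lifts HR and BHR over LRU in the comparison described above.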


2014 ◽ Vol 18 (5) ◽ pp. 661-675 ◽ Author(s): V. Ginting ◽ F. Pereira ◽ A. Rahunanthan
