A Survey Of Architectural Approaches for Data Compression in Cache and Main Memory Systems

2016 ◽  
Vol 27 (5) ◽  
pp. 1524-1536 ◽  
Author(s):  
Sparsh Mittal ◽  
Jeffrey S. Vetter
Author(s):  
Alexander Thomasian

Data compression stores data in a form that occupies less space than its original representation. It has been used effectively for storing data in compressed form on magnetic tapes, disks, and even main memory. One complication is that updated data cannot always be stored in place, since it may not compress to the same or a smaller size. Compression also reduces the bandwidth required to transmit (program) code, data, text, images, speech, audio, and video. The transmission may be from main memory to the CPU and its caches, from tape and disk into main memory, or over local, metropolitan, and wide area networks. With data compression, transmission time improves or, equivalently, the required transmission bandwidth is reduced. Two excellent texts on this topic are Sayood (2002) and Witten, Bell, and Moffat (1999).
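The space and bandwidth savings described above can be illustrated with a minimal sketch using Python's standard `zlib` (DEFLATE) codec; the data and compression level here are arbitrary choices for demonstration, not anything from the survey itself.

```python
import zlib

# Illustration only: compress a highly redundant byte stream and compare
# sizes; the same reduction applies to storage footprint and to the time
# needed to transmit the data over a fixed-bandwidth link.
data = b"ABABABAB" * 1000              # 8000 bytes of repetitive data
compressed = zlib.compress(data, level=9)

assert zlib.decompress(compressed) == data   # lossless round trip
print(len(data), len(compressed))            # compressed form is far smaller
```

Note that the round-trip check matters: as the passage points out, data that is updated may no longer compress to the same size, which is why in-place updates of compressed records are problematic.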


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2158
Author(s):  
Jeong-Geun Kim ◽  
Shin-Dug Kim ◽  
Su-Kyung Yoon

This research designs a Q-selector-based prefetching method for a dynamic random-access memory (DRAM)/phase-change memory (PCM) hybrid main memory system targeting memory-intensive big data applications that generate irregular memory access streams. Specifically, the proposed method fully exploits the advantages of two-level hybrid memory systems constructed from DRAM devices and non-volatile memory (NVM) devices. The Q-selector-based prefetching method builds on Q-learning, a reinforcement learning algorithm, to determine a near-optimal prefetcher for an application's current running phase. For this, our model analyzes real-time performance status to set the criteria for the Q-learning method. We evaluate the Q-selector-based prefetching method with workloads from data mining and data-intensive benchmark suites, PARSEC-3.0 and graphBIG. Our evaluation results show that the system achieves approximately 31% performance improvement and increases the hit ratio of the DRAM-cache layer by 46% on average compared to a PCM-only main memory system. In addition, it outperforms the state-of-the-art access map pattern matching (AMPM) prefetcher, with a 14.3% reduction in execution time and a 12.89% improvement in CPI.
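The core mechanism, selecting a prefetcher per execution phase via Q-learning, can be sketched as follows. The phase identifiers, prefetcher names, hyperparameters, and reward signal below are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hedged sketch of a Q-learning prefetcher selector: a Q-table maps
# (phase, prefetcher) pairs to an expected reward (e.g., DRAM-cache hit
# gain), and the agent picks the prefetcher with the highest Q-value for
# the currently observed phase. All names here are hypothetical.
PREFETCHERS = ["stride", "stream", "ampm", "none"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

q_table = {}  # (phase, prefetcher) -> learned value

def select(phase):
    """Epsilon-greedy choice of prefetcher for the observed phase."""
    if random.random() < EPSILON:
        return random.choice(PREFETCHERS)
    return max(PREFETCHERS, key=lambda p: q_table.get((phase, p), 0.0))

def update(phase, prefetcher, reward, next_phase):
    """Standard Q-learning update from a measured performance reward."""
    best_next = max(q_table.get((next_phase, p), 0.0) for p in PREFETCHERS)
    old = q_table.get((phase, prefetcher), 0.0)
    q_table[(phase, prefetcher)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In a real system, `update` would be driven by runtime counters (hit ratios, stall cycles) sampled at phase boundaries, so the selector converges on whichever prefetcher best matches the application's current access pattern.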


Algorithms ◽  
2019 ◽  
Vol 12 (9) ◽  
pp. 197 ◽  
Author(s):  
Sebastian Götschel ◽  
Martin Weiser

Solvers for partial differential equations (PDEs) are one of the cornerstones of computational science. For large problems, they involve huge amounts of data that need to be stored and transmitted on all levels of the memory hierarchy. Often, bandwidth is the limiting factor due to the relatively small arithmetic intensity, and increasingly so due to the growing disparity between computing power and bandwidth. Consequently, data compression techniques have been investigated and tailored to the specific requirements of PDE solvers over recent decades. This paper surveys data compression challenges and discusses examples of corresponding solution approaches for PDE problems, covering all levels of the memory hierarchy from mass storage up to main memory. We illustrate the concepts with examples of particular methods and give references to alternatives.
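A common strategy in this setting is lossy compression with a guaranteed error bound, which suits PDE data because solution fields are typically smooth. The sketch below (an illustration under stated assumptions, not a method from the survey) quantizes a field to a fixed absolute tolerance and then applies lossless entropy coding to the resulting integers.

```python
import zlib
import numpy as np

# Illustrative sketch: error-bounded lossy compression of a smooth field.
# Uniform quantization with step 2*tol guarantees |reconstruction error| <= tol;
# zlib then losslessly compresses the small, slowly varying integer codes.
def compress(field, tol):
    q = np.round(field / (2 * tol)).astype(np.int32)
    return zlib.compress(q.tobytes()), field.shape

def decompress(blob, shape, tol):
    q = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return q * (2 * tol)

x = np.linspace(0, 1, 10000)
field = np.sin(2 * np.pi * x)            # smooth 1-D stand-in for a PDE solution
blob, shape = compress(field, tol=1e-3)
recon = decompress(blob, shape, tol=1e-3)
assert np.max(np.abs(recon - field)) <= 1e-3   # pointwise error bound holds
```

Real error-bounded compressors for scientific data add prediction and entropy-coding stages on top of quantization, but the quantize-then-encode structure shown here is the basic principle.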

