An energy- and performance-aware DRAM cache architecture for hybrid DRAM/PCM main memory systems

Author(s):  
Hyung Gyu Lee ◽  
Seungcheol Baek ◽  
Chrysostomos Nicopoulos ◽  
Jongman Kim
2020 ◽  
Vol 20 (3) ◽  
pp. 211-222

Author(s):  
Nikolaus Glombiewski ◽  
Philipp Götze ◽  
Michael Körber ◽  
Andreas Morgen ◽  
Bernhard Seeger

Event stores face the difficult challenge of continuously ingesting massive temporal data streams while satisfying demanding query and recovery requirements. Many of today's systems deal with multiple hardware-based trade-offs. For instance, long-term storage solutions balance keeping data in cheap secondary media (SSDs, HDDs) against performance-oriented main-memory caches. As an alternative, in-memory systems focus on performance at the expense of monetary cost and, to some degree, recovery guarantees. The advent of persistent memory (PMem) led to a multitude of novel research proposals aiming to alleviate those trade-offs in various fields. So far, however, there is no proposal for a PMem-powered specialized event store. Based on ChronicleDB, we will present several complementary approaches for a three-layer architecture featuring main memory, PMem, and secondary storage. We enhance some of ChronicleDB's components with PMem for better insertion and query performance as well as better recovery guarantees. At the same time, the three-layer architecture aims to keep the overall dollar cost of the system low. The limitations and opportunities of a PMem-enhanced event store serve as important groundwork for comprehensive system design exploiting a modern storage hierarchy.
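The three-layer ingestion path described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not ChronicleDB's actual design: class and method names are invented, and the PMem tier is simulated by an ordinary list standing in for a durable log.

```python
from collections import deque

class ThreeLayerStore:
    """Illustrative three-tier event store: a small DRAM buffer,
    a PMem-backed durable log (simulated by a list here), and
    cheap bulk secondary storage (SSD/HDD)."""

    def __init__(self, dram_capacity=4):
        self.dram_buffer = deque()   # fast, volatile tier
        self.pmem_log = []           # persistent, byte-addressable tier
        self.secondary = []          # bulk storage tier
        self.dram_capacity = dram_capacity

    def ingest(self, event):
        # Events land in DRAM first; PMem receives a durable copy
        # immediately, which is what enables fast recovery.
        self.dram_buffer.append(event)
        self.pmem_log.append(event)
        if len(self.dram_buffer) >= self.dram_capacity:
            self.flush()

    def flush(self):
        # Move a full DRAM batch down to secondary storage and
        # truncate the PMem log entries it covers.
        batch = list(self.dram_buffer)
        self.secondary.extend(batch)
        self.dram_buffer.clear()
        del self.pmem_log[:len(batch)]

    def recover(self):
        # After a crash, the durable state is secondary storage
        # plus whatever survived in the PMem log.
        return self.secondary + self.pmem_log
```

The point of the sketch is the division of labor: DRAM absorbs ingest bursts, PMem narrows the recovery window, and secondary storage keeps the dollar cost per byte low.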


2022 ◽  
Vol 21 (1) ◽  
pp. 1-22
Author(s):  
Dongsuk Shin ◽  
Hakbeom Jang ◽  
Kiseok Oh ◽  
Jae W. Lee

A long battery life is a first-class design objective for mobile devices, and main memory accounts for a major portion of total energy consumption. Moreover, the energy consumption of memory is expected to increase further with ever-growing demands for bandwidth and capacity. A hybrid memory system with both DRAM and PCM can be an attractive solution to provide additional capacity and reduce standby energy. Although providing much greater density than DRAM, PCM has longer access latency and limited write endurance, which make it challenging to architect it for main memory. To address this challenge, this article introduces CAMP, a novel DRAM Cache Architecture for Mobile platforms with PCM-based main memory. A DRAM cache in this environment is required to filter most of the writes to PCM to increase its lifetime, and to deliver the highest efficiency even for the relatively small DRAM cache that mobile platforms can afford. To this end, CAMP divides the DRAM space into two regions: a page cache for exploiting spatial locality in a bandwidth-efficient manner and a dirty block buffer for maximally filtering writes. CAMP improves performance and energy-delay product by 29.2% and 45.2%, respectively, over a baseline PCM-oblivious DRAM cache, while increasing PCM lifetime by 2.7×. CAMP also improves performance and energy-delay product by 29.3% and 41.5%, respectively, over a state-of-the-art design with a dirty block buffer, while increasing PCM lifetime by 2.5×.
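The split-DRAM idea in this abstract, a page cache for reads plus a dirty block buffer that coalesces writes before they reach PCM, can be sketched as follows. This is a hedged toy model, not CAMP's actual implementation: the class name, slot counts, and LRU policies are invented for illustration.

```python
from collections import OrderedDict

class SplitDramCache:
    """Toy model of a CAMP-style split DRAM cache: a page cache
    serves read locality, while a dirty block buffer absorbs
    writes so fewer of them reach the PCM main memory."""

    def __init__(self, page_slots=2, dirty_slots=4):
        self.page_cache = OrderedDict()    # page_addr -> data, LRU order
        self.dirty_buffer = OrderedDict()  # block_addr -> data, LRU order
        self.page_slots = page_slots
        self.dirty_slots = dirty_slots
        self.pcm_writes = 0                # write-backs that reached PCM

    def read(self, page_addr):
        # Page-granularity reads exploit spatial locality.
        if page_addr in self.page_cache:
            self.page_cache.move_to_end(page_addr)   # LRU hit
            return True
        if len(self.page_cache) >= self.page_slots:
            self.page_cache.popitem(last=False)      # evict clean page
        self.page_cache[page_addr] = None            # fetch from PCM
        return False

    def write(self, block_addr, data):
        # Dirty blocks are buffered at block granularity; only an
        # eviction writes to PCM, so repeated writes to hot blocks
        # are coalesced and PCM lifetime is extended.
        if block_addr in self.dirty_buffer:
            self.dirty_buffer.move_to_end(block_addr)
        elif len(self.dirty_buffer) >= self.dirty_slots:
            self.dirty_buffer.popitem(last=False)
            self.pcm_writes += 1                     # write-back to PCM
        self.dirty_buffer[block_addr] = data
```

In this toy model, ten writes to the same block cost zero PCM writes, which is the write-filtering effect the abstract attributes to the dirty block buffer.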


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2158
Author(s):  
Jeong-Geun Kim ◽  
Shin-Dug Kim ◽  
Su-Kyung Yoon

This research designs a Q-selector-based prefetching method for a dynamic random-access memory (DRAM)/phase-change memory (PCM) hybrid main memory system targeting memory-intensive big data applications that generate irregular memory access streams. Specifically, the proposed method fully exploits the advantages of two-level hybrid memory systems constructed from DRAM devices and non-volatile memory (NVM) devices. The Q-selector-based prefetching method is based on Q-learning, one of the reinforcement learning algorithms, which determines a near-optimal prefetcher for the application's current running phase. For this, our model analyzes real-time performance status to set the criteria for the Q-learning method. We evaluate the Q-selector-based prefetching method with workloads from the data-mining and data-intensive benchmark suites PARSEC-3.0 and graphBIG. Our evaluation results show that the system achieves approximately 31% performance improvement and increases the hit ratio of the DRAM-cache layer by 46% on average compared to a PCM-only main memory system. In addition, it outperforms the state-of-the-art access map pattern matching (AMPM) prefetcher, reducing execution time by 14.3% and improving CPI by 12.89%.
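The core mechanism, a Q-learning agent that picks one prefetcher per execution phase, can be sketched as follows. This is a minimal sketch under stated assumptions, not the paper's implementation: the state space, action set, and reward signal (e.g. a DRAM-cache hit-rate delta) are simplified stand-ins.

```python
import random

class QSelector:
    """Toy Q-learning selector that chooses among candidate
    prefetchers for each observed execution phase."""

    def __init__(self, phases, prefetchers, alpha=0.5, gamma=0.9, eps=0.1):
        # Q-table over (phase, prefetcher) pairs, initialized to zero.
        self.q = {(s, a): 0.0 for s in phases for a in prefetchers}
        self.prefetchers = prefetchers
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, phase):
        # Epsilon-greedy: mostly exploit the best-known prefetcher,
        # occasionally explore an alternative.
        if random.random() < self.eps:
            return random.choice(self.prefetchers)
        return max(self.prefetchers, key=lambda a: self.q[(phase, a)])

    def update(self, phase, action, reward, next_phase):
        # Standard Q-learning update; the reward would come from a
        # runtime performance signal such as cache hit-rate change.
        best_next = max(self.q[(next_phase, a)] for a in self.prefetchers)
        td = reward + self.gamma * best_next - self.q[(phase, action)]
        self.q[(phase, action)] += self.alpha * td
```

With exploration disabled (`eps=0`), a single positive reward for one prefetcher is enough to make it the greedy choice for that phase.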


2016 ◽  
Vol 12 (1) ◽  
pp. 44-66 ◽  
Author(s):  
Yide Shen ◽  
Michael J. Gallivan ◽  
Xinlin Tang

With distributed teams becoming increasingly common in organizations, improving their performance is a critical challenge for both practitioners and researchers. This research examines how group members' perception of subgroup formation affects team performance in fully distributed teams. The authors propose that individual members' perception about the presence of subgroups within the team has a negative effect on team performance, which manifests itself through decreases in a team's transactive memory system (TMS). Using data from 154 members of 41 fully distributed teams (where no group members were colocated), the authors found that members' perceptions of the existence of subgroups impair the team's TMS and its overall performance. They found these effects to be statistically significant. In addition, decreases in a group's TMS partially mediate the effect of perceived subgroup formation on team performance. The authors discuss the implications of their findings for managerial action, as well as for researchers, and they propose directions for future research.


Author(s):  
Dominik Strzałka

<p>The problem of modeling different parts of computer systems requires accurate statistical tools. Cache memory is an inherent part of modern computer systems, where the hierarchical memory structure plays a key role in the behavior and performance of the whole system. In the case of Windows operating systems, the cache is a region of the memory subsystem where the I/O system places recently used data from disk. In this paper, some preliminary results on the statistical behavior of one selected system counter are presented. The obtained results show that the real phenomena appearing during human-computer interaction can be expressed in terms of non-extensive statistics, related to Tsallis' proposal of a new entropy definition.</p>
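The Tsallis entropy mentioned above generalizes the Boltzmann-Gibbs/Shannon form with an entropic index q, recovering the classical definition as q approaches 1. A small sketch (illustrative only; the paper's actual fitting procedure is not reproduced here):

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum(p_i ** q)) / (q - 1),
    with k_B set to 1. For q == 1 it reduces to the
    Shannon/Boltzmann-Gibbs entropy -sum(p_i * ln(p_i))."""
    if q == 1:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1 - sum(p ** q for p in probs)) / (q - 1)
```

For example, a uniform distribution over four outcomes gives S_2 = (1 - 4 * 0.25**2) / 1 = 0.75, while S_1 equals the Shannon entropy ln 4.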

