Application-Oriented Data Migration to Accelerate In-Memory Database on Hybrid Memory

Micromachines ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 52
Author(s):  
Wenze Zhao ◽  
Yajuan Du ◽  
Mingzhe Zhang ◽  
Mingyang Liu ◽  
Kailun Jin ◽  
...  

With the advantage of faster data access than traditional disks, in-memory database systems such as Redis and Memcached have been widely applied in data centers and embedded systems. The performance of an in-memory database greatly depends on the access speed of memory. To meet the requirements of high bandwidth and low energy, die-stacked memory (e.g., High Bandwidth Memory (HBM)) has been developed to extend the channel number and width. However, the capacity of die-stacked memory is limited due to the interposer challenge. Thus, hybrid memory systems combining traditional Dynamic Random Access Memory (DRAM) with die-stacked memory have emerged. Existing works have proposed to place and manage data on hybrid memory architectures from a hardware perspective. This paper considers managing in-memory database data in hybrid memory from the application's perspective. We first perform a preliminary study on the hotness distribution of client requests on Redis. From the results, we observe that most requests target a small portion of the data objects in the in-memory database. We then propose Application-oriented Data Migration (ADM) to accelerate in-memory databases on hybrid memory. We design a hotness management method and two migration policies to migrate data into or out of HBM. We take Redis under comprehensive benchmarks as a case study for the proposed method. The experimental results verify that the proposed method effectively improves performance and reduces energy consumption compared with the existing Redis database.
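The abstract does not spell out the hotness management method or the two migration policies, so the following is only a minimal sketch of the general idea: a decayed per-key access counter, a promotion policy that moves hot objects into a fixed HBM budget, and a demotion policy that evicts objects whose hotness has faded. All names (ADMSketch, hbm_capacity, hot_threshold) are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

class ADMSketch:
    """Illustrative hotness tracker with promote/demote policies for a
    DRAM+HBM hybrid memory (not the paper's actual implementation)."""

    def __init__(self, hbm_capacity_bytes, hot_threshold=8, decay=0.5):
        self.hbm_capacity = hbm_capacity_bytes
        self.hot_threshold = hot_threshold   # accesses per epoch to count as "hot"
        self.decay = decay                   # exponential decay of hotness each epoch
        self.hotness = defaultdict(float)    # key -> decayed access count
        self.sizes = {}                      # key -> object size in bytes
        self.in_hbm = set()                  # keys currently placed in HBM
        self.hbm_used = 0

    def record_access(self, key, size):
        """Called on every client GET/SET; updates hotness statistics."""
        self.hotness[key] += 1.0
        self.sizes[key] = size

    def end_epoch(self):
        """Apply both migration policies at an epoch boundary."""
        # Policy 1: promote hot objects into HBM while capacity remains.
        for key, score in sorted(self.hotness.items(), key=lambda kv: -kv[1]):
            if score < self.hot_threshold:
                break
            if key not in self.in_hbm and self.hbm_used + self.sizes[key] <= self.hbm_capacity:
                self._migrate_to_hbm(key)
        # Policy 2: demote objects whose hotness decayed below the threshold.
        for key in [k for k in self.in_hbm if self.hotness[k] < self.hot_threshold]:
            self._migrate_to_dram(key)
        # Decay counters so stale popularity fades out over time.
        for key in self.hotness:
            self.hotness[key] *= self.decay

    def _migrate_to_hbm(self, key):
        self.in_hbm.add(key)
        self.hbm_used += self.sizes[key]

    def _migrate_to_dram(self, key):
        self.in_hbm.discard(key)
        self.hbm_used -= self.sizes[key]
```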

Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2061
Author(s):  
Seung-Ho Lim ◽  
Hyunchul Seok ◽  
Ki-Woong Park

The key challenges of manycore systems are the large amount of memory and high bandwidth required to run many applications. Three-dimensional integrated on-chip memory is a promising candidate for addressing these challenges. The advent of on-chip memory has provided new opportunities to rethink traditional memory hierarchies and their management. In this study, we propose a polymorphic memory as a hybrid approach to using on-chip memory. In contrast to previous studies, we use the on-chip memory both as main memory (called M1 memory) and as a Dynamic Random Access Memory (DRAM) cache (called M2 cache). The main memory consists of M1 memory and a conventional DRAM memory called M2 memory. To achieve high performance when running many applications on this memory architecture, we propose management techniques for the main memory with M1 and M2 memories and for polymorphic memory with dynamic memory allocation for many applications in a manycore system. The first technique moves frequently accessed pages to M1 memory via hardware monitoring in the memory controller. The second is M1 memory partitioning to mitigate contention among many processes. Finally, we propose a method to use the M2 cache between a conventional last-level cache and M2 memory, and we determine the cache size that best improves performance with polymorphic memory. The proposed schemes are evaluated with the SPEC CPU2006 benchmark, and the experimental results show that the proposed approaches improve performance under the benchmark's various workloads. The performance evaluation confirms that the average performance improvement of polymorphic memory is 21.7%, with a standard deviation of 0.026 for the normalized results, compared to the previous method of using on-chip memory as a last-level cache.
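The first two techniques (hardware-monitored hot-page promotion into M1 and per-process M1 partitioning) can be illustrated with a toy epoch-based model. This is a hedged sketch, assuming equal static quotas and simple access counters; the class and method names (PolymorphicMemorySketch, on_memory_access, end_epoch) are hypothetical and not taken from the paper.

```python
PAGE_SIZE = 4096

class PolymorphicMemorySketch:
    """Toy model of epoch-based hot-page promotion into on-chip M1 memory,
    with M1 partitioned per process (illustrative only)."""

    def __init__(self, m1_pages_total, processes):
        # Equal partitioning of M1 among processes to limit contention.
        self.quota = {pid: m1_pages_total // len(processes) for pid in processes}
        self.m1_pages = {pid: set() for pid in processes}   # pages resident in M1
        self.counters = {pid: {} for pid in processes}      # page -> accesses this epoch

    def on_memory_access(self, pid, addr):
        """What a memory-controller monitor would record on each access."""
        page = addr // PAGE_SIZE
        self.counters[pid][page] = self.counters[pid].get(page, 0) + 1

    def end_epoch(self, pid):
        """Keep the hottest pages of this process in M1, up to its quota."""
        ranked = sorted(self.counters[pid], key=self.counters[pid].get, reverse=True)
        hottest = set(ranked[: self.quota[pid]])
        to_evict = self.m1_pages[pid] - hottest     # demote to off-chip M2 memory
        to_promote = hottest - self.m1_pages[pid]   # promote from M2 to M1
        self.m1_pages[pid] = hottest
        self.counters[pid].clear()
        return to_promote, to_evict
```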


2022 ◽  
Vol 21 (1) ◽  
pp. 1-18
Author(s):  
Fei Wen ◽  
Mian Qin ◽  
Paul Gratz ◽  
Narasimha Reddy

Hybrid memory systems, composed of emerging non-volatile memory (NVM) and DRAM, have been proposed to address the growing memory demand of current mobile applications. Recently emerging NVM technologies, such as phase-change memory (PCM), memristors, and 3D XPoint, offer higher capacity density, minimal static power consumption, and lower cost per GB. However, NVM has longer access latency and limited write endurance compared with DRAM. The different characteristics of distinct memory classes pose a new challenge for memory system design. Ideally, pages should be placed or migrated between the two types of memory according to the data objects' access properties. Prior system-software approaches exploit program information from the OS, but at the cost of high software latency incurred by the related kernel processes. Hardware approaches can avoid these latencies; however, hardware's vision is constrained to a short time window of recent memory requests due to limited on-chip resources. In this work, we propose OpenMem: a hardware-software cooperative approach that combines the execution-time advantages of pure hardware approaches with data object properties in a global scope. First, we built a hardware-based memory manager unit (HMMU) that learns short-term access patterns by online profiling and executes data migration efficiently. Then, we built a heap memory manager for heterogeneous memory systems that allows the programmer to directly customize each data object's allocation to a favorable memory device within the presumed object life cycle. With the programmer's hints guiding data placement at allocation time, data objects with similar properties are congregated to reduce unnecessary page migrations. We implemented the whole system on an FPGA board with embedded ARM processors. In tests with a set of benchmark applications from SPEC 2017 and PARSEC, the experimental results show that OpenMem reduces energy consumption by 44.6% with only a 16% performance degradation compared to the all-DRAM memory system. The amount of writes to the NVM is reduced by 14% versus the HMMU-only design, extending the NVM device lifetime.
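The abstract describes a heap manager where the programmer's allocation hints steer each data object toward a favorable device, but it does not give the OpenMem API. The sketch below shows what such a hint-driven allocator interface could look like under those assumptions; MemTier, HintedHeapSketch, and the alloc policy thresholds are all invented for illustration and do not reflect the actual OpenMem interface.

```python
from enum import Enum

class MemTier(Enum):
    DRAM = "dram"   # fast, write-friendly tier
    NVM = "nvm"     # dense and cheap, but slower writes and limited endurance

class HintedHeapSketch:
    """Illustrative hint-driven allocator for a DRAM+NVM system: write-heavy
    or short-lived objects go to DRAM, long-lived read-mostly objects go to
    NVM, so similarly-behaved objects are grouped and migrations shrink."""

    def __init__(self):
        self.placement = {}   # object id -> tier chosen at allocation time

    def alloc(self, obj_id, size_bytes, write_intensity, expected_lifetime_s):
        # A plausible policy, not the paper's: favor DRAM for write-intensive
        # or short-lived objects, NVM otherwise.
        if write_intensity > 0.3 or expected_lifetime_s < 1.0:
            tier = MemTier.DRAM
        else:
            tier = MemTier.NVM
        self.placement[obj_id] = tier
        return tier

# Example: a large, long-lived, read-mostly lookup table lands in NVM.
heap = HintedHeapSketch()
print(heap.alloc("lookup_table", 64 << 20, write_intensity=0.05, expected_lifetime_s=3600))
```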


Author(s):  
Felix Beaudoin ◽  
Stephen Lucarini ◽  
Fred Towler ◽  
Stephen Wu ◽  
Zhigang Song ◽  
...  

Abstract For SRAMs with high logic complexity, hard defects, design debug, and soft defects have to be tackled all at once early in technology development, while innovative integration schemes in the front-end of the line are being validated. This paper presents a case study of a high-complexity static random access memory (SRAM) used during a 32 nm technology development phase. The case study addresses several novel and unrelated fail mechanisms on a product-like SRAM. Corrective actions were put in place at several process levels in the back-end of the line, the middle of the line, and the front-end of the line. These process changes were verified by demonstrating a significant reduction of the Vmax and Vmin nest array block fallout, thus allowing the broader development team to continue improving random defectivity.


2021 ◽  
Author(s):  
Ananth Kumar Tamilarasan ◽  
Darwin Sundarapandi Edward ◽  
Arun Samuel Thankamony Sarasam

Abstract A novel approach called Keeper in LEakage Control Transistor (KLECTOR) is presented in this paper to reduce leakage currents in SRAM architectures. SRAM is significantly affected by leakage current during standby mode, which is caused by the low-threshold-voltage devices in the fabric. The KLECTOR circuit reduces power consumption by restricting the flow of current through devices with lower voltage drops and relies heavily on a self-controlled transistor at the output node. The presented results show that static (leakage) power is reduced to 63% for the write operation and to 69% for the read operation. The proposed approach is designed and simulated using the Cadence Virtuoso EDA tool.


2020 ◽  
Vol 1 ◽  
pp. 1-23
Author(s):  
Majid Hojati ◽  
Colin Robertson

Abstract. With new forms of digital spatial data driving new applications for monitoring and understanding environmental change, there are growing demands on traditional GIS tools for spatial data storage, management, and processing. Discrete Global Grid Systems (DGGS) are methods to tessellate the globe into multiresolution grids, which represent a global spatial fabric capable of storing heterogeneous spatial data and offering improved performance in data access, retrieval, and analysis. While DGGS-based GIS may hold potential for next-generation big-data GIS platforms, few studies have tried to implement them as a framework for operational spatial analysis. Cellular Automata (CA) is a classic dynamic modeling framework that has been used with the traditional raster data model for various kinds of environmental modeling, such as wildfire modeling and urban expansion modeling. The main objectives of this paper are to (i) investigate the possibility of using DGGS for running dynamic spatial analysis, (ii) evaluate CA as a generic data model for modeling dynamic phenomena within a DGGS data model, and (iii) evaluate an in-database approach for CA modelling. To do so, a case study of wildfire spread modelling is developed. Results demonstrate that using a DGGS data model not only provides the ability to integrate different data sources, but also provides a framework to perform spatial analysis without geometry-based analysis. This results in a simplified architecture and a common spatial fabric to support the development of a wide array of spatial algorithms. While considerable work remains to be done, CA modelling within a DGGS-based GIS is a robust and flexible modelling framework for big-data GIS analysis in an environmental monitoring context.
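The abstract does not give the CA transition rule used for the wildfire case study. The following is a minimal sketch of the general idea (an unburned cell ignites with some probability if any neighbour is burning, then burns out), written on an ordinary raster grid rather than a DGGS tessellation; on a DGGS, the neighbourhood would come from the grid's own cell indexing instead of row/column offsets. The state names and p_spread parameter are assumptions for illustration only.

```python
import random

UNBURNED, BURNING, BURNED = 0, 1, 2

def step(grid, p_spread=0.4):
    """One CA step: burning cells ignite unburned neighbours with
    probability p_spread, then burn out. Returns the next grid."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                nxt[r][c] = BURNED
            elif grid[r][c] == UNBURNED:
                neighbours = [(r + dr, c + dc)
                              for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                              if (dr, dc) != (0, 0)]
                burning_nearby = any(
                    0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == BURNING
                    for nr, nc in neighbours)
                if burning_nearby and random.random() < p_spread:
                    nxt[r][c] = BURNING
    return nxt

# Ignite the centre of a 9x9 grid and run a few steps.
grid = [[UNBURNED] * 9 for _ in range(9)]
grid[4][4] = BURNING
for _ in range(5):
    grid = step(grid)
```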

