A PREDICTABLE MULTI-THREADED MAIN-MEMORY STORAGE MANAGER

2001 ◽  
Vol 2 (4) ◽  
pp. 416
Author(s):  
Guang-hua SONG
2020 ◽  
Vol 10 (3) ◽  
pp. 999
Author(s):  
Hyokyung Bahn ◽  
Kyungwoon Cho

Recently, non-volatile memory (NVM) has advanced as a fast storage medium, and legacy memory subsystems optimized for DRAM (dynamic random access memory) and HDD (hard disk drive) hierarchies need to be revisited. In this article, we explore memory subsystems that use NVM as the underlying storage device and discuss the challenges and implications of such systems. As storage performance approaches DRAM performance, existing memory configurations and I/O (input/output) mechanisms should be reassessed. This article explores the performance of systems with NVM-based storage emulated by a RAMDisk under various configurations. Through our measurement study, we make the following findings. (1) We can decrease the main memory size without performance penalties when NVM storage is adopted instead of HDD. (2) For buffer caching to be effective, judicious management techniques such as admission control are necessary. (3) Prefetching is not effective for NVM storage. (4) The effect of synchronous I/O and direct I/O in NVM storage is less significant than in HDD storage. (5) Performance degradation due to contention among multiple threads is less severe in NVM-based storage than in HDD storage. Based on these observations, we discuss a new PC configuration consisting of small memory and fast storage, in comparison with a traditional PC consisting of large memory and slow storage. We show that this new memory-storage configuration can be an alternative solution to ever-growing memory demands and the limited density of DRAM. We anticipate that our results will provide direction for system software development in the presence of ever-faster storage devices.
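To make findings (3) and (4) concrete, the sketch below shows how an application-level benchmark might request direct I/O and synchronous I/O and suppress readahead on a Linux system. The file path, block size, and overall structure are illustrative assumptions, not code from the article.

/* Minimal sketch (Linux assumed): requesting direct I/O and synchronous
 * I/O and suppressing readahead when exercising a fast NVM-backed device
 * such as a RAMDisk.  The path, block size, and overall structure are
 * illustrative placeholders, not code from the article. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/nvm/testfile";   /* hypothetical mount point */
    size_t blk = 4096;                        /* typical alignment required by O_DIRECT */

    /* O_DIRECT bypasses the page cache; O_SYNC makes each write durable
     * before returning -- the two I/O options whose impact the study measures. */
    int fd = open(path, O_RDWR | O_CREAT | O_DIRECT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Hint the kernel not to read ahead of the access pattern, since the
     * article finds prefetching gives little benefit on NVM storage. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

    /* O_DIRECT requires block-aligned buffers. */
    void *buf;
    if (posix_memalign(&buf, blk, blk) != 0) { close(fd); return 1; }
    memset(buf, 0, blk);

    ssize_t n = write(fd, buf, blk);          /* one aligned, uncached, synchronous write */
    printf("wrote %zd bytes\n", n);

    free(buf);
    close(fd);
    return 0;
}

Toggling O_DIRECT, O_SYNC, and the readahead hint independently is one way to reproduce the kinds of configurations the measurement study compares.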


2002 ◽  
Vol 2 (1) ◽  
pp. 36-47 ◽  
Author(s):  
Philip L. Bohannon ◽  
Rajeev R. Rastogi ◽  
Avi Silberschatz ◽  
S. Sudarshan

2007 ◽  
Vol 3 (1) ◽  
pp. 19-23
Author(s):  
Seok-Jae Lee ◽  
Jong-Hyun Yoon ◽  
Seok-Il Song ◽  
Jae-Soo Yoo

Author(s):  
Philip Bohannon ◽  
Daniel Lieuwen ◽  
Rajeev Rastogi ◽  
Avi Silberschatz ◽  
S. Seshadri ◽  
...  

Author(s):  
P. A. Deshmukh

The idea of using a main-memory database (MMDB), in which the database resides entirely in physical memory, is not new; it has been around for about a decade. MMDBs have evolved from a period when they were used only for caching or in specialized high-speed data systems to the present day, when they form an established part of mainstream IT. Early in this century, larger main memories were affordable, but processors were not fast enough for main-memory databases to gain wide acceptance. Today's processors are faster, available in multicore and multiprocessor configurations with 64-bit memory addressability, and equipped with many gigabytes of main memory. MMDBs are therefore a compelling option for meeting the requirements of next-generation IT challenges. To support this shift, database systems are being redesigned to handle the implementation issues arising from the inherent differences between disk and memory storage and to gain the resulting performance benefits. This paper is a review of main-memory databases (MMDBs).
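As a rough illustration of the "inherent differences" the review refers to, the hypothetical sketch below contrasts record access through a buffer-pool indirection, as in a disk-resident engine, with the direct pointer dereference a main-memory database can use. The types, the stubbed buffer pool, and the function names are invented for illustration and do not come from the paper.

/* Self-contained sketch (hypothetical types, not from the paper): the
 * implementation difference a main-memory DBMS exploits.  A disk-based
 * engine reaches a record through a buffer pool that may trigger I/O;
 * an MMDB keeps every record resident and follows a plain pointer. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint64_t key; char payload[24]; } Record;

/* Disk-resident style: a logical record id names a page and slot; the
 * buffer pool resolves it, possibly after a disk read (stubbed here). */
typedef struct { uint32_t page_no; uint16_t slot; } RecordId;

static Record pool[4];                       /* stand-in for a cached buffer-pool frame */

static Record *fetch_via_buffer_pool(RecordId rid)
{
    /* A real engine would pin the page, perhaps read it from disk, then
     * compute the slot address.  Here the "page" is already in memory. */
    return &pool[rid.slot];
}

/* Main-memory style: the record identifier is the record's address itself,
 * so access is a single pointer dereference with no I/O path. */
static Record *fetch_in_memory(Record *direct_ptr) { return direct_ptr; }

int main(void)
{
    pool[1].key = 42;
    strcpy(pool[1].payload, "resident row");

    RecordId rid = { .page_no = 0, .slot = 1 };
    printf("disk-style lookup: key=%llu\n",
           (unsigned long long)fetch_via_buffer_pool(rid)->key);
    printf("mmdb-style lookup: key=%llu\n",
           (unsigned long long)fetch_in_memory(&pool[1])->key);
    return 0;
}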


2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Meilian Xu ◽  
Parimala Thulasiraman

Algebraic reconstruction techniques require about half as many projections as Fourier backprojection methods, which makes them safer in terms of the required radiation dose. The algebraic reconstruction technique (ART) and its variant OS-SART (ordered-subset simultaneous ART) provide faster convergence with comparatively good image quality. However, the prohibitively long processing time of these techniques prevents their adoption in commercial CT machines. Parallel computing is one solution to this problem. With the advent of heterogeneous multicore architectures that exploit data-parallel applications, medical imaging algorithms such as OS-SART can be studied for increased performance. In this paper, we map OS-SART onto the Cell Broadband Engine (Cell BE) and use its architectural features to provide an efficient mapping. The Cell BE consists of one PowerPC processor element (PPE) and eight SIMD coprocessors known as synergistic processor elements (SPEs). The limited local memory on each SPE makes the mapping challenging. We therefore present optimization techniques that map the algorithm efficiently onto the Cell BE and improve performance over the CPU version. We compare the performance of our algorithm on the Cell BE with that on a Sun Fire X4600, a shared-memory machine. The Cell BE is five times faster than the dual-core AMD Opteron processor. The speedup of the algorithm on the Cell BE increases with the number of SPEs. We also experiment with parameters that affect performance, such as the number of subsets, the number of processing elements, and the number of DMA transfers between main memory and local memory.
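For readers unfamiliar with the algorithm being mapped, the following simplified serial sketch shows the structure of one OS-SART pass over ordered subsets. The dense system matrix, round-robin subset assignment, and relaxation parameter are generic placeholders, and the paper's actual implementation partitions this work across SPEs with DMA transfers rather than running it serially as shown here.

/* Simplified serial sketch of one OS-SART pass (illustrative only; not
 * the paper's Cell BE implementation).
 *   A      : n_rays x n_pixels system matrix, row-major (A[i*n_pixels+j])
 *   p      : measured projections, length n_rays
 *   x      : image estimate, length n_pixels (updated in place)
 *   n_sub  : number of ordered subsets; rays are assigned round-robin
 *   lambda : relaxation factor                                          */
#include <stdlib.h>

void os_sart_pass(const double *A, const double *p, double *x,
                  int n_rays, int n_pixels, int n_sub, double lambda)
{
    double *num = malloc(n_pixels * sizeof *num);   /* accumulated corrections */
    double *den = malloc(n_pixels * sizeof *den);   /* column sums within the subset */

    for (int s = 0; s < n_sub; s++) {
        for (int j = 0; j < n_pixels; j++) { num[j] = 0.0; den[j] = 0.0; }

        /* Simultaneous step: accumulate corrections from every ray in subset s. */
        for (int i = s; i < n_rays; i += n_sub) {
            const double *row = &A[(size_t)i * n_pixels];
            double fwd = 0.0, row_sum = 0.0;
            for (int j = 0; j < n_pixels; j++) {     /* forward projection of ray i */
                fwd     += row[j] * x[j];
                row_sum += row[j];
            }
            if (row_sum == 0.0) continue;
            double resid = (p[i] - fwd) / row_sum;   /* row-normalized residual */
            for (int j = 0; j < n_pixels; j++) {
                num[j] += row[j] * resid;
                den[j] += row[j];
            }
        }

        /* Apply the subset's relaxed correction to the image before moving on. */
        for (int j = 0; j < n_pixels; j++)
            if (den[j] != 0.0) x[j] += lambda * num[j] / den[j];
    }
    free(num);
    free(den);
}

Processing the subsets one after another, with the image updated between subsets, is what gives OS-SART its faster convergence compared with updating only once per full sweep of the projections.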

