Prefetching J+-Tree: A Cache-Optimized Main Memory Database Index Structure

2009 ◽  
Vol 24 (4) ◽  
pp. 687-707 ◽  
Author(s):  
Hua Luan ◽  
Xiao-Yong Du ◽  
Shan Wang


2013 ◽
Vol 427-429 ◽  
pp. 2531-2535 ◽  
Author(s):  
Feng Dong Sun ◽  
Quan Guo ◽  
Lan Wang

In a main memory database, the bottleneck is not disk I/O but the gap between CPU clock speed and memory speed. To achieve high performance in a main memory database, a good approach is to design new index structures that improve memory access speed. This paper first presents the T-tree index structure and its algorithms for main memory databases, and then presents two optimizations of the T-tree index: the T-tail tree and the TTB-tree. Our results indicate that the T-tree provides good overall performance in main memory.
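For orientation, the node layout and lookup that these T-tree results build on can be summarized as follows. The sketch uses the classic T-tree design (a sorted key array per node, bounded by the node's minimum and maximum keys); the node type, key type, and identifiers are chosen here for illustration and are not taken from the paper.

```cpp
#include <algorithm>
#include <vector>

// Minimal sketch of a T-tree node: a sorted key array whose range
// [minKey(), maxKey()] bounds the node, with smaller keys to the left
// subtree and larger keys to the right subtree.
struct TTreeNode {
    std::vector<int> keys;       // sorted keys held by this node
    TTreeNode* left = nullptr;   // keys smaller than minKey()
    TTreeNode* right = nullptr;  // keys larger than maxKey()

    int minKey() const { return keys.front(); }
    int maxKey() const { return keys.back(); }
};

// Lookup descends like a binary tree on the nodes' key ranges and only
// performs a binary search inside the one node whose range covers the key.
bool ttreeSearch(const TTreeNode* node, int key) {
    while (node != nullptr) {
        if (key < node->minKey()) {
            node = node->left;
        } else if (key > node->maxKey()) {
            node = node->right;
        } else {
            return std::binary_search(node->keys.begin(), node->keys.end(), key);
        }
    }
    return false;
}
```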


2010 ◽  
Vol 40-41 ◽  
pp. 206-211
Author(s):  
Zhi Lin Zhu

One approach to achieving high performance in a DBMS for critical applications is to store the database in main memory rather than on disk. One can then design new data structures and algorithms oriented towards increasing the efficiency of the main memory database (MMDB). In this paper we present some results on index structures from an ongoing study of MMDBs. We propose a new index structure, the T-tail Tree, give its main algorithms, and evaluate their performance. Our results indicate that the T-tail Tree provides good overall performance in main memory.
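The abstract does not spell out how the T-tail Tree defers work; a common reading is that an insertion into a full bounding node goes into a small tail attached to that node, so rotations are postponed until the tail itself fills. The sketch below illustrates that reading only; the capacity constants, the simplified rebalance step, and all identifiers are assumptions rather than the authors' algorithm.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative T-tail node: a main sorted key array plus a small tail
// that absorbs overflow so rebalancing can be deferred.
constexpr std::size_t kNodeCapacity = 8;  // assumed, for the sketch only
constexpr std::size_t kTailCapacity = 2;  // assumed, for the sketch only

struct TTailNode {
    std::vector<int> keys;      // main sorted key array
    std::vector<int> tailKeys;  // overflow keys, absorbed lazily
    TTailNode* left = nullptr;
    TTailNode* right = nullptr;
};

// In the full structure this step would rotate the tree; the sketch simply
// absorbs the tail into the key array so the example stays self-contained.
void rebalance(TTailNode* node) {
    node->keys.insert(node->keys.end(), node->tailKeys.begin(), node->tailKeys.end());
    std::sort(node->keys.begin(), node->keys.end());
    node->tailKeys.clear();
}

// Insert into the node that bounds 'key'; restructuring is only paid for
// once the tail is also full, which is the point of the T-tail variant.
void insertIntoBoundingNode(TTailNode* node, int key) {
    if (node->keys.size() < kNodeCapacity) {
        node->keys.insert(
            std::upper_bound(node->keys.begin(), node->keys.end(), key), key);
        return;
    }
    if (node->tailKeys.size() < kTailCapacity) {
        node->tailKeys.push_back(key);  // cheap append, no rotation yet
        return;
    }
    rebalance(node);                    // deferred restructuring
    node->keys.insert(
        std::upper_bound(node->keys.begin(), node->keys.end(), key), key);
}
```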


Author(s):  
Huazhuang Yao ◽  
Yongyan Wang ◽  
Shuai Wang ◽  
Kun Li ◽  
Chao Guo

2021 ◽  
Vol 11 (5) ◽  
pp. 2405
Author(s):  
Yuxiang Sun ◽  
Tianyi Zhao ◽  
Seulgi Yoon ◽  
Yongju Lee

The Semantic Web has recently gained traction with the use of Linked Open Data (LOD) on the Web. Although numerous state-of-the-art methodologies, standards, and technologies are applicable to the LOD cloud, many issues persist. Because the LOD cloud is based on graph-based resource description framework (RDF) triples and the SPARQL query language, we cannot directly adopt traditional techniques employed for database management systems or distributed computing systems. This paper addresses how the LOD cloud can be efficiently organized, retrieved, and evaluated. We propose a novel hybrid approach that combines the index and live exploration approaches for improved LOD join query performance. Using a two-step index structure combining a disk-based 3D R*-tree with an extended multidimensional histogram and flash memory-based k-d trees, we can efficiently discover interlinked data distributed across multiple resources. Because this method rapidly prunes numerous false hits, the performance of join query processing is remarkably improved. We also propose a hot-cold segment identification algorithm to identify regions of high interest. The proposed method is compared with existing popular methods on real RDF datasets. Results indicate that our method outperforms the existing methods because it quickly obtains target results by reducing unnecessary data scanning, and it reduces the amount of main memory required to load filtering results.
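The hot-cold segment identification algorithm itself is not given in the abstract; the sketch below only illustrates the general idea of classifying index segments by access frequency so that regions of high interest can be kept on the faster tier. The class name, the counter-based policy, and the threshold are assumptions for illustration.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of hot-cold segment identification: segments whose
// access count reaches a threshold are classified as hot; the rest as cold.
struct SegmentStats {
    std::uint64_t accesses = 0;
};

class HotColdClassifier {
public:
    // Record one lookup that touched the given segment.
    void recordAccess(std::uint32_t segmentId) { stats_[segmentId].accesses++; }

    // Return the IDs of all segments considered hot under the threshold.
    std::vector<std::uint32_t> hotSegments(std::uint64_t threshold) const {
        std::vector<std::uint32_t> hot;
        for (const auto& [id, s] : stats_) {
            if (s.accesses >= threshold) hot.push_back(id);
        }
        return hot;
    }

private:
    std::unordered_map<std::uint32_t, SegmentStats> stats_;
};
```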


Author(s):  
Muhammad Attahir Jibril ◽  
Philipp Götze ◽  
David Broneske ◽  
Kai-Uwe Sattler

After the introduction of Persistent Memory in the form of Intel's Optane DC Persistent Memory on the market in 2019, it has found its way into manifold applications and systems. As Google and other cloud infrastructure providers are starting to incorporate Persistent Memory into their portfolio, it is only logical that cloud applications have to exploit its inherent properties. Persistent Memory can serve as a DRAM substitute, but guarantees persistence at the cost of compromised read/write performance compared to standard DRAM. These properties particularly affect the performance of index structures, since they are subject to frequent updates and queries. However, adapting each and every index structure to exploit the properties of Persistent Memory is tedious. Hence, we require a general technique that hides this access gap, e.g., by using DRAM caching strategies. To exploit Persistent Memory properties for analytical index structures, we propose selective caching. It is based on a mixture of dynamic and static caching of tree nodes in DRAM to reach near-DRAM access speeds for index structures. In this paper, we evaluate selective caching on the OLAP-optimized main-memory index structure Elf, because its memory layout allows for easy caching. Our experiments show that, if configured well, selective caching with a suitable replacement strategy can keep pace with pure DRAM storage of Elf while guaranteeing persistence. These results are also reflected when selective caching is used for parallel workloads.
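As a rough illustration of the selective-caching idea, a mix of statically pinned and dynamically cached tree nodes held in DRAM in front of Persistent Memory, the sketch below shows one plausible shape of such a cache with an LRU replacement strategy. It is not the authors' implementation of Elf's caching layer; all identifiers, capacities, and the miss-handling path are assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <unordered_set>

// Sketch of selective caching: some node IDs are pinned statically
// (e.g. upper tree levels), the rest compete for a bounded LRU-managed
// DRAM cache in front of Persistent Memory.
class SelectiveNodeCache {
public:
    SelectiveNodeCache(std::size_t dynamicCapacity,
                       std::unordered_set<std::uint64_t> pinned)
        : capacity_(dynamicCapacity), pinned_(std::move(pinned)) {}

    // Returns true on a DRAM hit; on a miss the caller would load the node
    // from Persistent Memory, and the cache admits it after possible eviction.
    bool access(std::uint64_t nodeId) {
        if (pinned_.count(nodeId)) return true;  // static part, always cached
        auto it = index_.find(nodeId);
        if (it != index_.end()) {                // dynamic hit: refresh LRU position
            lru_.splice(lru_.begin(), lru_, it->second);
            return true;
        }
        if (!lru_.empty() && lru_.size() >= capacity_) {  // evict LRU victim
            index_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(nodeId);
        index_[nodeId] = lru_.begin();
        return false;                            // miss: read node from PMem
    }

private:
    std::size_t capacity_;
    std::unordered_set<std::uint64_t> pinned_;
    std::list<std::uint64_t> lru_;  // most recently used node ID at the front
    std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> index_;
};
```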

