A Data Structure on Interval Graphs and Its Applications

1997 ◽  
Vol 07 (03) ◽  
pp. 165-175 ◽  
Author(s):  
Madhumangal Pal ◽  
G. P. Bhattacharjee

In this paper, a new data structure, the interval tree (IT), is introduced for interval graphs. Some important properties of the IT are studied from an algorithmic point of view. It has many advantages over the data structures commonly used to solve problems on interval graphs. Using the properties of the IT, the following problems are solved on interval graphs: (i) the shortest distance between any two vertices, and (ii) the diameter of the graph.
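The paper's interval tree itself is not reproduced here, but the baseline it improves upon is easy to sketch. The following minimal Python example (ours, not the authors' IT) builds an interval graph from a list of intervals and computes shortest distances by plain BFS; the IT's contribution is to answer such queries without materializing the whole adjacency structure.

```python
# A minimal baseline sketch (not the authors' IT): shortest distances on an
# interval graph via BFS over an adjacency list built from interval overlaps.
from collections import deque

def build_interval_graph(intervals):
    """Vertices i, j are adjacent iff intervals i and j overlap."""
    n = len(intervals)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            (a1, b1), (a2, b2) = intervals[i], intervals[j]
            if a1 <= b2 and a2 <= b1:  # standard overlap test
                adj[i].append(j)
                adj[j].append(i)
    return adj

def bfs_distances(adj, src):
    """Unweighted shortest distances from src; the diameter is the maximum
    such distance over all source vertices."""
    dist = [-1] * len(adj)
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

intervals = [(1, 3), (2, 5), (4, 7), (6, 9)]
adj = build_interval_graph(intervals)
print(bfs_distances(adj, 0))  # [0, 1, 2, 3]
```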

2011 ◽  
Vol 21 (04) ◽  
pp. 467-494
Author(s):  
Thiago R. dos Santos ◽ 
Hans-Peter Meinzer ◽ 
Lena Maier-Hein

The Doubly Linked Face List (DLFL) is a data structure for mesh representation that always ensures topological 2-manifold consistency. Furthermore, it uses a minimal amount of computer memory and allows queries to be performed very efficiently. However, the use of the DLFL for the implementation of practical applications is very limited, mainly because of two drawbacks: (1) the DLFL is only able to represent 2-manifold objects; (2) its operators may be ambiguous, modifying the structure in a way the user does not expect. To overcome these drawbacks, we present the Extended Doubly Linked Face List (XDLFL), which extends the DLFL to the representation of 2-pseudomanifolds and 2-manifolds with boundaries, increasing its applicability in practical software. Using these extensions, we also show how to avoid ambiguities in the original DLFL's operators. A new set of intuitive operators for manipulating the extensions and for unambiguously manipulating the data structure is also presented. Implementing these extensions is straightforward, since the modifications to the DLFL are trivial and based on behavioral observations of the DLFL's operators. After integrating the extensions into the DLFL, memory usage increases only slightly, while still remaining smaller than that of other well-known data structures. Furthermore, queries related to the new extensions, such as whether an edge belongs to a boundary, may be performed very efficiently. The proposed extensions and their operators are very beneficial for applications such as surgery simulation software, where interactions with the models, such as cutting or appending objects to each other, must be performed in an efficient and transparent manner.
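The paper gives no code; as a rough illustration of the kind of records a DLFL-style structure maintains, here is a toy Python sketch with hypothetical class and field names (ours, not the paper's API). Each face keeps a circular doubly linked list of nodes, which is what makes constant-time local traversal and splicing possible.

```python
# A toy sketch with hypothetical names, not the DLFL/XDLFL implementation:
# each face owns a circular doubly linked list of nodes referencing vertices.
class FaceNode:
    """One position within a face's circular vertex list."""
    def __init__(self, vertex):
        self.vertex = vertex   # vertex this node references
        self.next = None       # next node in the face cycle
        self.prev = None       # previous node in the face cycle

class Face:
    def __init__(self):
        self.head = None       # entry into the circular node list

    def vertices(self):
        """Walk the face cycle once and return its vertices in order."""
        out, node = [], self.head
        while node:
            out.append(node.vertex)
            node = node.next
            if node is self.head:
                break
        return out

def make_face(vertex_ids):
    """Build a face whose boundary visits the given vertices in order."""
    face = Face()
    nodes = [FaceNode(v) for v in vertex_ids]
    for a, b in zip(nodes, nodes[1:] + nodes[:1]):
        a.next, b.prev = b, a
    face.head = nodes[0]
    return face

f = make_face([0, 1, 2])
print(f.vertices())  # [0, 1, 2]
```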


2021 ◽  
Vol 12 (5) ◽  
Author(s):  
Josué Ttito ◽  
Renato Marroquín ◽  
Sergio Lifschitz ◽  
Lewis McGibbney ◽  
José Talavera

Key-value stores propose a straightforward yet powerful data model. Data is modeled as key-value pairs, where values can be arbitrary objects written and read via their associated keys. In addition to this simple interface, such data stores also provide read operations such as full and range scans. However, due to the simplicity of the interface, optimizing data accesses becomes challenging. This work aims to enable the shared execution of concurrent range and point queries on key-value stores, thereby reducing the overall data movement when executing a complete workload. To accomplish this, we analyze different candidate data structures and propose our own variation of a segment tree, the Updatable Interval Tree. Our data structure helps us co-plan and co-execute multiple range queries together, reducing redundant work. This results in more efficient workload execution and increased overall throughput, as we show in our evaluation.
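As a rough illustration of the shared-execution idea (our own simplification, not the paper's Updatable Interval Tree), the sketch below attaches pending range queries to the canonical nodes of a segment tree over the key space, so that a single position in a scan can be routed to every query covering it.

```python
# A simplified sketch of shared range-query execution (ours, not the paper's
# structure): queries are registered at canonical segment-tree nodes, and one
# stab per scanned key finds all queries that the key should be routed to.
class SegmentTree:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.queries = []          # ids of queries fully covering [lo, hi]
        self.left = self.right = None

    def add_query(self, qid, lo, hi):
        if hi < self.lo or self.hi < lo:
            return                           # no overlap with this node
        if lo <= self.lo and self.hi <= hi:
            self.queries.append(qid)         # canonical node: store and stop
            return
        mid = (self.lo + self.hi) // 2
        if self.left is None:                # lazily build children
            self.left = SegmentTree(self.lo, mid)
            self.right = SegmentTree(mid + 1, self.hi)
        self.left.add_query(qid, lo, hi)
        self.right.add_query(qid, lo, hi)

    def stab(self, key, acc):
        """Collect every registered query whose range contains key."""
        acc.extend(self.queries)
        if self.left:
            mid = (self.lo + self.hi) // 2
            (self.left if key <= mid else self.right).stab(key, acc)
        return acc

tree = SegmentTree(0, 15)
tree.add_query("q1", 2, 9)
tree.add_query("q2", 5, 12)
print(tree.stab(7, []))  # ['q1', 'q2'] -- one scan position serves both
```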


Author(s):  
M. Goudarzi ◽  
M. Asghari ◽  
P. Boguslawski ◽  
A. A. Rahman

In GIS, different types of data structures have been proposed to represent 3D models and to examine the relationships between spatial objects. The Dual Half-Edge (DHE) is a data structure that permits the simultaneous representation of the geometry and topology of models, with a special focus on building interiors. In this paper, the G-Maps model is analyzed and compared with the DHE model from the storage-cost point of view, since the two share some features and G-Maps is widely used in GIS. The primary result shows that the DHE is more efficient than G-Maps with regard to storage cost.
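The paper reports measured storage costs; the toy calculation below (ours, with hypothetical per-record pointer counts) only illustrates the style of comparison: count the records each model allocates per edge and multiply by pointer size.

```python
# A back-of-envelope storage comparison under assumed record layouts (the
# pointer and dart counts here are illustrative, not the paper's figures).
POINTER_BYTES = 8

def dhe_bytes(num_edges, ptrs_per_half_edge=5):
    # DHE: each edge yields primal and dual half-edge pairs.
    half_edges = 4 * num_edges
    return half_edges * ptrs_per_half_edge * POINTER_BYTES

def gmaps_bytes(num_edges, darts_per_edge=8, ptrs_per_dart=5):
    # 3D G-Maps: each edge is split into darts, each storing one pointer
    # per involution plus an embedding link (counts assumed for illustration).
    return num_edges * darts_per_edge * ptrs_per_dart * POINTER_BYTES

E = 10_000
print(dhe_bytes(E), gmaps_bytes(E))  # DHE is smaller under these assumptions
```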


2020 ◽  
Author(s):  
Josue Joel Ttito ◽  
Renato Marroquin ◽  
Sergio Lifschitz

Key-value stores propose a very simple yet powerful data model. Data is modeled as key-value pairs, where values can be arbitrary objects written and read via their associated keys. In addition to this simple interface, such data stores also provide read operations such as full and range scans. However, due to the simplicity of the interface, optimizing data accesses becomes challenging. This work aims to enable the shared execution of concurrent range and point queries on key-value stores, thereby reducing the overall data movement when executing a complete workload. To accomplish this, we analyze different candidate data structures and propose our own variation of a segment tree, the Updatable Interval Tree. This data structure helps us co-plan and co-execute multiple range queries together, as we show in our evaluation.
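As a complementary illustration of co-planning (ours, not from the paper), overlapping client ranges can be merged into maximal scan segments so that one physical scan serves several queries:

```python
# A small co-planning illustration (ours): merge overlapping [lo, hi] query
# ranges so a single scan of the store covers all of them.
def co_plan(ranges):
    """Merge overlapping ranges into maximal scan segments."""
    merged = []
    for lo, hi in sorted(ranges):
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)  # extend current segment
        else:
            merged.append([lo, hi])                 # start a new segment
    return merged

# Three client queries collapse into two physical scans:
print(co_plan([(10, 40), (30, 70), (90, 95)]))  # [[10, 70], [90, 95]]
```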


2011 ◽  
Vol 2011 ◽  
pp. 1-14 ◽  
Author(s):  
Tarasankar Pramanik ◽  
Sukumar Mondal ◽  
Madhumangal Pal

The k-tuple domination problem, for a fixed positive integer k, is to find a minimum-size vertex subset such that every vertex in the graph is dominated by at least k vertices of this set. The case k = 2 is called the 2-tuple domination problem, or double domination problem. In this paper, the 2-tuple domination problem is studied on interval graphs from an algorithmic point of view; the proposed algorithm takes O(n²) time, where n is the number of vertices of the interval graph.
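The authors' O(n²) algorithm is not reproduced here; the short verifier below (our own sketch) only makes the problem statement concrete: a set D is 2-tuple dominating when every vertex has at least two members of D in its closed neighborhood.

```python
# A verifier sketch (ours, not the authors' algorithm): check that candidate
# set D dominates every vertex at least twice via its closed neighborhood.
def is_2tuple_dominating(adj, D):
    D = set(D)
    for v in range(len(adj)):
        dominators = (1 if v in D else 0) + sum(1 for u in adj[v] if u in D)
        if dominators < 2:
            return False
    return True

# Path graph 0-1-2-3 given as adjacency lists:
adj = [[1], [0, 2], [1, 3], [2]]
print(is_2tuple_dominating(adj, {1, 2}))        # False: vertex 0 has 1 dominator
print(is_2tuple_dominating(adj, {0, 1, 2, 3}))  # True
```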


2021 ◽  
Vol 13 (4) ◽  
pp. 559
Author(s):  
Milto Miltiadou ◽  
Neill D. F. Campbell ◽  
Darren Cosker ◽  
Michael G. Grant

In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extracting peak points remains the most widely used way of interpreting them. The increased information stored within the waveform data makes interpretation and handling difficult, so it is important to research which data structures are most appropriate for storing and interpreting the data. The data structures are tested in terms of time efficiency and memory consumption at run time and are the following: (1) 1D-Array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) Octree; (4) Integral Volumes, which allows finding the sum of any cuboid region in constant time; (5) Octree Max/Min, an upgraded octree; and (6) Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. We show that Integral Volumes is the most time-efficient data structure, but it requires the most memory. Furthermore, 1D-Array and Integral Volumes require the allocation of coherent space in memory, including the empty voxels, while Voxel Hashing and the octree-related data structures do not allocate memory for empty voxels and, as the tests conducted show, therefore use less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory; each tested data structure has different benefits and downsides, so each application should be examined individually.
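To make the Integral Volumes idea concrete, here is a minimal sketch (ours, not the paper's implementation): a 3D prefix-sum table from which the sum over any axis-aligned cuboid is read off in constant time by inclusion-exclusion over eight table entries.

```python
# A minimal Integral Volumes sketch (ours): 3D prefix sums give O(1) cuboid
# sums via inclusion-exclusion, at the cost of storing a dense table.
import numpy as np

def integral_volume(vox):
    """Cumulative sums along all three axes."""
    return vox.cumsum(0).cumsum(1).cumsum(2)

def cuboid_sum(iv, x0, y0, z0, x1, y1, z1):
    """Sum of vox[x0:x1+1, y0:y1+1, z0:z1+1] from 8 table lookups."""
    def at(x, y, z):
        return iv[x, y, z] if x >= 0 and y >= 0 and z >= 0 else 0.0
    return (at(x1, y1, z1)
            - at(x0 - 1, y1, z1) - at(x1, y0 - 1, z1) - at(x1, y1, z0 - 1)
            + at(x0 - 1, y0 - 1, z1) + at(x0 - 1, y1, z0 - 1)
            + at(x1, y0 - 1, z0 - 1)
            - at(x0 - 1, y0 - 1, z0 - 1))

vox = np.random.rand(8, 8, 8)
iv = integral_volume(vox)
assert np.isclose(cuboid_sum(iv, 1, 2, 3, 4, 5, 6), vox[1:5, 2:6, 3:7].sum())
```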


2018 ◽  
Vol 18 (3-4) ◽  
pp. 470-483 ◽  
Author(s):  
Gregory J. Duck ◽ 
Joxan Jaffar ◽ 
Roland H. C. Yap

Malformed data structures can lead to runtime errors such as arbitrary memory accesses or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, w.r.t. given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for the properties using the type definitions from the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer, and equality solvers. A key property of our program analysis is that the target data-structure properties are shape-neutral, i.e., the analysis does not check for properties relating to a given data-structure graph shape, such as doubly linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure-manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.
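The paper's analysis is built on CHR solvers; the toy Python check below (ours, not the SMCHR system) only illustrates what shape-neutral integrity means: every pointer reachable from the roots must land on an allocated node, regardless of whether the heap encodes a list, a tree, or a DAG.

```python
# A toy shape-neutral integrity check (ours, not the paper's CHR solver):
# follow all pointer fields reachable from the roots and report any address
# that does not correspond to an allocated node.
def check_integrity(heap, roots, fields=("next", "left", "right")):
    """heap maps addresses to node dicts; returns the set of dangling pointers."""
    dangling, seen, stack = set(), set(), list(roots)
    while stack:
        addr = stack.pop()
        if addr in seen:
            continue
        seen.add(addr)
        node = heap.get(addr)
        if node is None:
            dangling.add(addr)       # pointer to unallocated memory
            continue
        for f in fields:
            tgt = node.get(f)
            if tgt is not None:
                stack.append(tgt)
    return dangling

heap = {1: {"next": 2}, 2: {"next": 99}}  # node 2 points into the void
print(check_integrity(heap, roots=[1]))   # {99}
```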


Author(s):  
Sudeep Sarkar ◽  
Dmitry Goldgof

There is a growing need for expertise both in image analysis and in software engineering. To date, these two areas have been taught separately in undergraduate computer and information science curricula. However, we have found that an introduction to image analysis can be easily integrated into data-structure courses without detracting from the original goal of teaching data structures. Some image processing tasks offer a natural way to introduce basic data structures such as arrays, queues, stacks, trees, and hash tables. Not only does this integrated strategy expose the students to image-related manipulations at an early stage of the curriculum, but it also imparts cohesiveness to the data-structure assignments and brings them closer to real life. In this paper we present a set of programming assignments that integrates undergraduate data-structure education with image processing tasks. These assignments can be incorporated into existing data-structure courses with low time and software overheads. We have used these assignment sets three times: once in a 10-week data-structure course at the University of California, Santa Barbara, and twice in 15-week courses at the University of South Florida, Tampa.
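One representative assignment of the kind described (our illustration, not the authors' handout) is queue-based flood fill, which teaches BFS and queues through an image-processing task:

```python
# An illustrative flood-fill assignment (ours): relabel a 4-connected region
# of an image using an explicit queue, i.e., breadth-first search.
from collections import deque

def flood_fill(img, row, col, new_val):
    """Relabel the 4-connected region containing (row, col) in place."""
    old_val = img[row][col]
    if old_val == new_val:
        return img
    q = deque([(row, col)])
    while q:
        r, c = q.popleft()
        if 0 <= r < len(img) and 0 <= c < len(img[0]) and img[r][c] == old_val:
            img[r][c] = new_val
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return img

img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
print(flood_fill(img, 0, 0, 7))  # [[7, 7, 0], [0, 7, 0], [0, 0, 1]]
```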


Algorithms ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 128 ◽  
Author(s):  
Shuhei Denzumi ◽  
Jun Kawahara ◽  
Koji Tsuda ◽  
Hiroki Arimura ◽  
Shin-ichi Minato ◽  
...  

In this article, we propose a succinct data structure for zero-suppressed binary decision diagrams (ZDDs). A ZDD represents sets of combinations efficiently, and various set operations can be performed on it without explicitly extracting combinations. Thanks to these features, ZDDs have been applied to web information retrieval, information integration, and data mining. However, to support rich manipulation of sets of combinations and future updates, ZDDs require a large amount of space, which means there is still room for compression. This article introduces a new succinct data structure, called DenseZDD, for further compressing a ZDD when we do not need to conduct set operations on it but only want to test whether a given set is included in the family it represents and to count the number of elements in the family. We also propose a hybrid method that combines DenseZDDs with ordinary ZDDs. Numerical experiments show that our data structures are about one third the size of ordinary ZDDs, and that membership operations and random sampling on DenseZDDs are about ten times and three times faster, respectively, than on ordinary ZDDs for some datasets.
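The DenseZDD encoding itself is involved; the minimal sketch below (ours) only illustrates the membership operation that DenseZDD accelerates: each ZDD node tests one variable, with the LO child representing subsets that omit it and the HI child subsets that contain it.

```python
# A minimal ZDD membership sketch (ours, not the DenseZDD encoding): nodes
# are (var, lo, hi) triples; LO drops the variable, HI keeps it.
BOT, TOP = "0", "1"  # terminals: the empty family, and the family {{}}

def member(node, s):
    """Is the set s a member of the family this ZDD node represents?"""
    if node == BOT:
        return False
    if node == TOP:
        return not s                  # accepted only if all of s was matched
    var, lo, hi = node
    if var in s:
        return member(hi, s - {var})  # HI branch: var is in the set
    return member(lo, s)              # LO branch: var is absent

# ZDD for the family {{a}, {a, b}}: root tests 'a', then optionally 'b'.
zdd = ("a", BOT, ("b", TOP, TOP))
print(member(zdd, {"a"}))       # True
print(member(zdd, {"a", "b"}))  # True
print(member(zdd, {"b"}))       # False
```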


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Inanç Birol ◽  
Justin Chu ◽  
Hamid Mohamadi ◽  
Shaun D. Jackman ◽  
Karthika Raghavan ◽  
...  

De novo assembly of the genome of a species is essential in the absence of a reference genome sequence. Many scalable assembly algorithms use the de Bruijn graph (DBG) paradigm to reconstruct genomes, where a table of subsequences of a certain length is derived from the reads, and their overlaps are analyzed to assemble sequences. Although longer subsequences unlock longer genomic features for assembly, the associated increase in compute resources limits the practicability of the DBG over other assembly archetypes already designed for longer reads. Here, we revisit the DBG paradigm to adapt it to the changing sequencing-technology landscape and introduce three data structure designs for spaced seeds in the form of paired subsequences. These data structures address the memory and run-time constraints imposed by longer reads. We observe that when a fixed distance separates the subsequences of a seed pair, sequence specificity increases with gap length. Further, we note that Bloom filters are well suited to storing spaced seeds implicitly while remaining tolerant of sequencing errors. Building on this concept, we describe a data structure for tracking the frequencies of observed spaced seeds. These data structure designs will have applications in genome, transcriptome, and metagenome assemblies, and in read error correction.
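As a toy illustration of the Bloom filter idea (ours, not the paper's data structure design), the sketch below hashes spaced seed pairs, two k-mers separated by a fixed gap, into a bit array; lookups may yield false positives but never false negatives.

```python
# A toy Bloom filter for spaced seeds (ours, not the paper's design): seed
# pairs are hashed into a bit array with several hash functions.
import hashlib

class BloomFilter:
    def __init__(self, size=1 << 20, num_hashes=4):
        self.size, self.num_hashes = size, num_hashes
        self.bits = bytearray(size // 8 + 1)

    def _positions(self, item):
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def spaced_seeds(read, k=4, gap=3):
    """Yield (left k-mer, right k-mer) pairs separated by a fixed gap."""
    for i in range(len(read) - 2 * k - gap + 1):
        yield read[i:i + k] + "|" + read[i + k + gap:i + 2 * k + gap]

bf = BloomFilter()
for seed in spaced_seeds("ACGTACGTACGTACG"):
    bf.add(seed)
print("ACGT|TACG" in bf)  # True: this seed pair occurs in the read
```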

