FAST AND EFFICIENT OPERATIONS ON PARALLEL PRIORITY QUEUES

1996 ◽  
Vol 06 (04) ◽  
pp. 451-467
Author(s):  
DANNY Z. CHEN ◽  
XIAOBO SHARON HU

The Parallel Priority Queue (PPQ) data structure supports parallel operations for manipulating data items with keys, such as inserting n new items, deleting the n items with the smallest keys, creating a new PPQ that contains a given set of items, and melding two PPQs into one. In this paper, we present fast and efficient parallel algorithms for performing operations on PPQs that maintain data items with real-valued keys. The data structures that we use for implementing the PPQs are the unmeldable and meldable parallel heaps. Our algorithms have considerably better time and/or work bounds than the previously best known algorithms, and use a less powerful parallel computational model (EREW PRAM). The new ideas that make our improvement possible are two partition schemes dynamically maintained on the parallel heap structures: the minimal-path partition and the right-path partition. These partition schemes could be of interest in their own right. Our results also lead to optimal parallel algorithms for implementing sequential operations on several commonly used heap structures.
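
For orientation, a minimal sequential sketch of the PPQ interface is given below in Python. The class and method names are illustrative assumptions, and the heapq-based body only mimics the semantics of the batch operations; it does not model the parallel heaps or the EREW PRAM algorithms of the paper.

```python
import heapq

class PPQ:
    """Sequential reference sketch of the Parallel Priority Queue interface.
    Names and the heapq-based implementation are illustrative only."""

    def __init__(self, items=()):
        # Create: build a new PPQ containing a set of items with real-valued keys.
        self._heap = list(items)
        heapq.heapify(self._heap)

    def insert(self, items):
        # Insert n new items in one batch.
        for key in items:
            heapq.heappush(self._heap, key)

    def delete_min(self, n):
        # Delete and return the n items with the smallest keys.
        return [heapq.heappop(self._heap) for _ in range(min(n, len(self._heap)))]

    def meld(self, other):
        # Meld two PPQs into one (destructive on both operands here).
        self._heap.extend(other._heap)
        heapq.heapify(self._heap)
        other._heap = []
        return self

# Example: batch insert, then extract the 3 smallest keys.
q = PPQ([5.0, 1.0, 9.0])
q.insert([2.0, 7.0])
print(q.delete_min(3))   # [1.0, 2.0, 5.0]
```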

1999 ◽  
Vol 10 (01) ◽  
pp. 1-17 ◽  
Author(s):  
SEONGHUN CHO ◽  
SARTAJ SAHNI

We show that the leftist tree data structure may be adapted to obtain data structures that permit the double-ended priority queue operations Insert, DeleteMin, DeleteMax, and Merge to be done in O(log n) time, where n is the size of the resulting queue. The operations FindMin and FindMax can be done in O(1) time. Experimental results are also presented.
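
The O(log n) bounds rest on the classic leftist tree meld; a minimal min-ordered sketch in Python follows. The paper's actual contribution, the adaptation to double-ended operations (DeleteMax alongside DeleteMin in one structure), is not reproduced here, and the function names are illustrative.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right
        self.s = 1          # s-value (null-path length); an empty subtree has s = 0

def merge(a, b):
    """Merge two min-ordered leftist trees in O(log n) time."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                     # keep the smaller root on top
    a.right = merge(a.right, b)         # recurse down the right spine
    # Restore the leftist property: left s-value >= right s-value.
    if a.left is None or (a.right is not None and a.left.s < a.right.s):
        a.left, a.right = a.right, a.left
    a.s = (a.right.s if a.right else 0) + 1
    return a

def insert(root, key):
    return merge(root, Node(key))

def delete_min(root):
    return root.key, merge(root.left, root.right)
```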


1996 ◽  
Vol 06 (02) ◽  
pp. 213-222 ◽  
Author(s):  
PAOLO FERRAGINA ◽  
FABRIZIO LUCCIO

In this paper we provide three simple techniques to maintain in parallel the minimum spanning tree of an undirected graph under single or batched edge updates (i.e., insertions and deletions). Our results extend the use of the sparsification data structure to the EREW PRAM model. For suitable values of the batch size, our algorithms require less time and work than the best known dynamic parallel algorithms.
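
As a reminder of what a single edge update involves, the sketch below implements the standard sequential insertion rule: add the new edge, then remove the heaviest edge on the cycle it closes. It is not the paper's EREW PRAM sparsification technique for batched updates, and the helper names are assumptions.

```python
from collections import defaultdict

def insert_edge(mst_edges, u, v, w):
    """Update a minimum spanning tree after inserting edge (u, v) with weight w.
    Assumes u and v already appear in the (connected) tree. Single sequential
    update only; not the paper's parallel batched technique."""
    adj = defaultdict(list)
    for a, b, wt in mst_edges:
        adj[a].append((b, wt))
        adj[b].append((a, wt))

    # Find the unique tree path from u to v by iterative depth-first search.
    def tree_path(src, dst):
        stack = [(src, None, [])]
        while stack:
            node, parent, edges = stack.pop()
            if node == dst:
                return edges
            for nxt, wt in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node, edges + [(node, nxt, wt)]))
        return []

    cycle = tree_path(u, v) + [(u, v, w)]
    heaviest = max(cycle, key=lambda e: e[2])
    if heaviest == (u, v, w):
        return list(mst_edges)              # new edge is heaviest; tree unchanged
    # Drop the heaviest tree edge (stored in either orientation) and add (u, v, w).
    keep = [e for e in mst_edges
            if e != heaviest and (e[1], e[0], e[2]) != heaviest]
    return keep + [(u, v, w)]
```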


1995 ◽  
Vol 05 (02) ◽  
pp. 299-309
Author(s):  
ROLF NIEDERMEIER ◽  
PETER ROSSMANITH

We investigate parallel algorithms to compute recursively defined functions. Our computational model is the parallel random access machine (PRAM). We primarily use the OROW-PRAM (owner read, owner write), a model believed to be even weaker and more realistic than the EREW-PRAM (exclusive read, exclusive write), while still offering the capabilities of a completely connected processor network. For OROW-PRAMs we show that our parallel algorithms are work-optimal.


2004 ◽  
Vol. 6 no. 2 ◽  
Author(s):  
Hon-Chan Chen

Let G be a graph. A component of G is a maximal connected subgraph of G. A vertex v is a cut vertex of G if k(G-v) > k(G), where k(G) is the number of components in G. Similarly, an edge e is a bridge of G if k(G-e) > k(G). In this paper, we propose new O(n) algorithms for finding the cut vertices and bridges of a trapezoid graph, assuming the trapezoid diagram is given. Our algorithms can be easily parallelized on the EREW PRAM computational model so that cut vertices and bridges can be found in O(log n) time using O(n / log n) processors.
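
The definitions above can be implemented directly, if inefficiently, for an arbitrary graph; the Python sketch below simply recounts components after removing a vertex or an edge. It is not the paper's O(n) algorithm, which exploits the trapezoid diagram, and the function names are assumptions.

```python
def components(n, edges, skip_vertex=None, skip_edge=None):
    """Number of connected components of the graph on vertices 0..n-1,
    optionally with one vertex or one edge removed."""
    adj = {v: [] for v in range(n) if v != skip_vertex}
    for a, b in edges:
        if (a, b) == skip_edge or (b, a) == skip_edge:
            continue
        if a == skip_vertex or b == skip_vertex:
            continue
        adj[a].append(b)
        adj[b].append(a)
    seen, count = set(), 0
    for v in adj:
        if v in seen:
            continue
        count += 1
        stack = [v]
        while stack:                     # depth-first search of one component
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack.extend(adj[x])
    return count

def cut_vertices(n, edges):
    k = components(n, edges)
    # v is a cut vertex of G exactly when k(G - v) > k(G).
    return [v for v in range(n) if components(n, edges, skip_vertex=v) > k]

def bridges(n, edges):
    k = components(n, edges)
    # e is a bridge of G exactly when k(G - e) > k(G).
    return [e for e in edges if components(n, edges, skip_edge=e) > k]
```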


1999 ◽  
Vol 9 (1) ◽  
pp. 93-104 ◽  
Author(s):  
RALF HINZE

Functional programming languages are an excellent tool for teaching algorithms and data structures. This paper explains binomial heaps, a beautiful data structure for priority queues, using the functional programming language Haskell (Peterson and Hammond, 1997). We largely follow a deductive approach: using the metaphor of a tennis tournament, we show that binomial heaps arise naturally through a number of logical steps. Haskell supports this deductive style of presentation very well: new types are introduced with ease, algorithms can be expressed clearly and succinctly, and Haskell's type classes make it possible to capture common algorithmic patterns. The paper aims at the level of an undergraduate student who has experience in reading and writing Haskell programs, and who is familiar with the concept of a priority queue.
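
The paper develops binomial heaps in Haskell; as a rough illustration of the two core steps, linking two equal-rank trees (the tournament "match") and inserting by carrying like incrementing a binary number, here is a small Python sketch. The names are assumptions and the code does not follow the paper's type-class-based presentation.

```python
class BTree:
    """A binomial tree: a tree of rank r has 2**r nodes; children are kept
    in decreasing rank order."""
    def __init__(self, key, rank=0, children=None):
        self.key = key
        self.rank = rank
        self.children = children or []

def link(t1, t2):
    # Play a match: the tree with the smaller root wins and adopts the loser.
    # Both trees must have the same rank; the result has rank + 1.
    if t2.key < t1.key:
        t1, t2 = t2, t1
    return BTree(t1.key, t1.rank + 1, [t2] + t1.children)

def insert(heap, key):
    """Insert into a binomial heap, kept as a list of trees of strictly
    increasing rank; `link` plays the role of the carry in binary addition."""
    carry = BTree(key)
    out = []
    for t in heap:
        if t.rank == carry.rank:
            carry = link(carry, t)      # carry propagates to the next rank
        else:
            out.append(carry)
            carry = t
    out.append(carry)
    return out

def find_min(heap):
    # The minimum key is at the root of one of the trees.
    return min(t.key for t in heap)
```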


1994 ◽  
Vol 1 (16) ◽  
Author(s):  
Lars Arge

In this paper we develop a technique for transforming an internal-memory data structure into an external-storage data structure suitable for plane-sweep algorithms. We use this technique to develop external-storage versions of the range tree and the segment tree. We also obtain an external priority queue. Using the first two structures, we solve the orthogonal segment intersection, isothetic rectangle intersection, and batched range searching problems in the optimal number of I/O operations. Unlike previously known I/O algorithms, the developed algorithms are straightforward generalizations of the ordinary internal-memory plane-sweep algorithms. Previously, almost no dynamic data structures were known for the model in which we work.
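
For context, the ordinary internal-memory plane sweep that the paper generalises looks roughly like the Python sketch below for orthogonal segment intersection (horizontal versus vertical segments). The event encoding and names are assumptions, and nothing here models the external-storage structures or the I/O bounds.

```python
import bisect

def orthogonal_intersections(horizontals, verticals):
    """Report intersections between axis-parallel segments with a sweep over x.
    `horizontals` are (x1, x2, y); `verticals` are (x, y1, y2)."""
    events = []
    for i, (x1, x2, y) in enumerate(horizontals):
        events.append((min(x1, x2), 0, i))   # 0: horizontal segment starts
        events.append((max(x1, x2), 2, i))   # 2: horizontal segment ends
    for j, (x, y1, y2) in enumerate(verticals):
        events.append((x, 1, j))             # 1: vertical segment is queried
    events.sort()                            # starts before queries before ends

    active = []                              # sorted y-coordinates of open horizontals
    result = []
    for x, kind, idx in events:
        if kind == 0:
            bisect.insort(active, horizontals[idx][2])
        elif kind == 2:
            active.pop(bisect.bisect_left(active, horizontals[idx][2]))
        else:
            _, y1, y2 = verticals[idx]
            lo, hi = min(y1, y2), max(y1, y2)
            for y in active[bisect.bisect_left(active, lo):bisect.bisect_right(active, hi)]:
                result.append((verticals[idx][0], y))
    return result
```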


2013 ◽  
Vol 23 (04n05) ◽  
pp. 233-251 ◽  
Author(s):  
PEYMAN AFSHANI

We investigate one of the fundamental areas in computational geometry: lower bounds for range reporting problems in the pointer machine and the external memory models. We develop new techniques that lead to new and improved lower bounds for simplex range reporting as well as some other geometric problems. Simplex range reporting is the problem of storing n points in d-dimensional space in a data structure such that the k points that lie inside a query simplex can be found efficiently. This is one of the fundamental and extensively studied problems in computational geometry. Currently, the best data structures for the problem achieve Q(n) + O(k) query time using [Formula: see text] space, in which the [Formula: see text] notation hides either a polylogarithmic or an n^ε factor for any constant ε > 0 (depending on the data structure and Q(n)). The best lower bound on this problem is due to Chazelle and Rosenberg, who showed that any pointer machine data structure that can answer queries in O(n^γ + k) time must use Ω(n^(d-ε-dγ)) space. Observe that this bound is a polynomial factor away from the best known data structures. In this article, we improve the space lower bound to [Formula: see text]. Not only does this bridge the gap from polynomial to sub-polynomial, it also offers a smooth trade-off curve. For instance, for polylogarithmic values of Q(n), our space lower bound almost equals Ω((n/Q(n))^d); the latter is generally believed to be the “right” bound. By a simple geometric transformation, we also improve the best lower bounds for the halfspace range reporting problem. Furthermore, we study the external memory model and offer a new simple framework for proving lower bounds in this model. We show that answering simplex range reporting queries with Q(n) + O(k/B) I/Os requires [Formula: see text] space or [Formula: see text] blocks, in which B is the block size.


2021 ◽  
Vol 49 (5) ◽  
pp. 030006052110196
Author(s):  
Xiaotong Peng ◽  
Zhi Duan ◽  
Hongling Yin ◽  
Furong Dai ◽  
Huining Liu

Epithelioid angiosarcoma is a rare and highly aggressive soft tissue angiosarcoma most commonly arising in the deep soft tissues. Because abundant vascular cavities anastomose with each other, most angiosarcomas are prone to metastasis, recur quickly, and carry a poor overall prognosis. We report a 25-year-old woman at 24 weeks’ gestation who presented with a 1-month history of abdominal distension. Ultrasonography suggested a mass in the right adnexa, and she underwent two operations owing to uncontrolled intraperitoneal bleeding with progressive anemia. The right ovarian tumor and right adnexa were removed successively. Biopsy yielded a diagnosis of primary epithelioid angiosarcoma with mature cystic teratoma. The patient died from uncontrolled progressive bleeding 1 week after the second operation. This case shows that epithelioid angiosarcoma is a highly malignant endothelial cell tumor: the results of surgery and chemoradiotherapy tend to be poor, and the recurrence rate is high. The purpose of this study is to raise clinical awareness of epithelioid angiosarcoma and its adverse events and to provide new ideas for the treatment of these adverse events. Immunohistochemical staining of pathological specimens can facilitate diagnosis. Pregnancy with a malignant tumor may lead to rapid disease progression, extensive lesions, and a poor prognosis.


2021 ◽  
Vol 13 (4) ◽  
pp. 559
Author(s):  
Milto Miltiadou ◽  
Neill D. F. Campbell ◽  
Darren Cosker ◽  
Michael G. Grant

In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extraction of peak points remains the most widely used approach to interpreting them. The increased information stored within the waveform data makes interpretation and handling difficult. It is, therefore, important to research which data structures are more appropriate for storing and interpreting the data. The six data structures are tested in terms of time efficiency and memory consumption at run time while voxelising and interpreting full-waveform LiDAR data for 3D polygonal model creation. They are the following: (1) 1D-Array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) Octree; (4) Integral Volumes, which allows finding the sum of any cuboid area in constant time; (5) Octree Max/Min, which is an upgraded octree; and (6) Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. We show that Integral Volumes is the most time-efficient data structure, but it requires the most memory allocation. Furthermore, 1D-Array and Integral Volumes require the allocation of coherent space in memory, including the empty voxels, while Voxel Hashing and the octree-related data structures do not need to allocate memory for empty voxels and therefore, as shown in the tests conducted, allocate less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory. Each tested data structure has different benefits and downsides; therefore, each application should be examined individually.
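
The Integral Volumes idea (constant-time cuboid sums via 3D prefix sums) can be sketched in a few lines of numpy; the function names below are assumptions and this is not the authors' implementation.

```python
import numpy as np

def build_integral_volume(vox):
    """3D prefix sums over a voxel grid of intensities; padded with a zero
    border so queries need no boundary checks."""
    iv = np.zeros(tuple(s + 1 for s in vox.shape), dtype=np.float64)
    iv[1:, 1:, 1:] = vox.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)
    return iv

def cuboid_sum(iv, x0, y0, z0, x1, y1, z1):
    """Sum of vox[x0:x1, y0:y1, z0:z1] in O(1) by inclusion-exclusion
    over the 8 corners of the padded integral volume."""
    return (  iv[x1, y1, z1]
            - iv[x0, y1, z1] - iv[x1, y0, z1] - iv[x1, y1, z0]
            + iv[x0, y0, z1] + iv[x0, y1, z0] + iv[x1, y0, z0]
            - iv[x0, y0, z0])

# Example: a random 4x4x4 voxel grid.
vox = np.random.rand(4, 4, 4)
iv = build_integral_volume(vox)
assert np.isclose(cuboid_sum(iv, 1, 0, 2, 3, 4, 4), vox[1:3, 0:4, 2:4].sum())
```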


2018 ◽  
Vol 18 (3-4) ◽  
pp. 470-483 ◽  
Author(s):  
GREGORY J. DUCK ◽  
JOXAN JAFFAR ◽  
ROLAND H. C. YAP

Malformed data structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, with respect to given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for the properties using the type definitions from the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e., the analysis does not check for properties relating to a given data-structure graph shape, such as doubly-linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.
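
As a loose illustration of what "shape neutral" means, the Python sketch below checks a local per-node property over every reachable node without assuming a particular shape (list, tree, DAG or cyclic graph). It is only a runtime analogue for intuition; the paper's analysis is constraint-based, generated from the program's type definitions and implemented with CHR/SMCHR, which this sketch does not attempt to model.

```python
def check_integrity(root, node_property, children):
    """Walk every node reachable from `root` and check `node_property` on it.
    The visited set lets lists, trees, DAGs and cyclic graphs be handled
    uniformly, so the check itself is shape neutral."""
    seen = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None or id(node) in seen:
            continue
        seen.add(id(node))
        if not node_property(node):
            return False
        stack.extend(children(node))
    return True

# Example: every node of a binary search tree must respect the key order
# of its immediate children (a local, shape-neutral property).
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

ok = check_integrity(
    Node(5, Node(2), Node(8)),
    node_property=lambda n: (n.left is None or n.left.key <= n.key)
                        and (n.right is None or n.key <= n.right.key),
    children=lambda n: [n.left, n.right],
)
print(ok)   # True
```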

