Worst-case data structures for the priority queue with attrition

1989 ◽  
Vol 31 (2) ◽  
pp. 69-75 ◽  
Author(s):  
Rajamani Sundar


1980 ◽  
Vol 13 (2) ◽  
pp. 155-168 ◽  
Author(s):  
J. L. Bentley ◽  
H. A. Maurer

Author(s):  
Xiuqin Chu ◽  
Na Li ◽  
Jun Wang ◽  
Yuhuan Luo ◽  
Feng Wu ◽  
...  

Electronics ◽  
2018 ◽  
Vol 7 (10) ◽  
pp. 224 ◽  
Author(s):  
Zhensen Tang ◽  
Yao Wang ◽  
Yaqing Chi ◽  
Liang Fang

In this paper, the dependence of sensing currents on various device parameters is comprehensively studied by simulating the complete crossbar array rather than its equivalent analytical model. The worst-case scenario for the read operation is rigorously analyzed and defined in terms of the selected cell location and the data pattern, based on the effects of parasitic sneak paths and interconnection resistance. It is shown that the worst-case data pattern depends on the trade-off between the shunting effect of the parasitic sneak paths and the current-injection effect of the parasitic sneak leakage, and therefore requires specific analysis in practical simulations. To address this, we propose the concept of a threshold array size that captures this trade-off and defines the parameter-dependent worst-case data pattern. This figure of merit provides guidelines for worst-case analysis of crossbar array read operations.
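
To make the pattern dependence concrete, the sketch below solves a small crossbar as a purely resistive network (nodal analysis including interconnect resistance) and compares the sensed current for two candidate background data patterns. The bias scheme, array size, and all resistance values are illustrative assumptions, not the parameters or simulation flow of the paper.

```python
# Toy nodal-analysis sketch of a crossbar read: the full array is solved as a
# resistive network (cells + wire segments), loosely in the spirit of simulating
# the complete crossbar instead of a lumped analytical model. Bias scheme
# (selected wordline at V_READ, all other lines terminated to 0 V), array size
# and resistances are assumptions made for illustration only.
import numpy as np

N = 32                            # rows = columns (square array), assumed
V_READ = 0.2                      # read voltage on the selected wordline [V]
R_LRS, R_HRS = 1e4, 1e6           # cell resistance in low / high state [ohm]
R_WIRE = 2.0                      # interconnect resistance per segment [ohm]
R_DRV, R_SENSE = 100.0, 100.0     # row driver / sense-amp termination [ohm]

def sense_current(sel_row, sel_col, lrs_mask):
    """Solve the network and return the current read out of the selected bitline.
    lrs_mask[i, j] == True means cell (i, j) is in LRS."""
    wl = lambda i, j: i * N + j                  # wordline-side node index
    bl = lambda i, j: N * N + i * N + j          # bitline-side node index
    G = np.zeros((2 * N * N, 2 * N * N))
    I = np.zeros(2 * N * N)

    def stamp(a, b, g):                          # conductance g between nodes a, b
        G[a, a] += g; G[b, b] += g
        G[a, b] -= g; G[b, a] -= g

    def stamp_source(a, g, v):                   # node a tied to voltage v through g
        G[a, a] += g; I[a] += g * v

    g_wire = 1.0 / R_WIRE
    for i in range(N):
        for j in range(N):
            g_cell = 1.0 / (R_LRS if lrs_mask[i, j] else R_HRS)
            stamp(wl(i, j), bl(i, j), g_cell)            # the memory cell
            if j + 1 < N:
                stamp(wl(i, j), wl(i, j + 1), g_wire)    # wordline segment
            if i + 1 < N:
                stamp(bl(i, j), bl(i + 1, j), g_wire)    # bitline segment
        v_row = V_READ if i == sel_row else 0.0
        stamp_source(wl(i, 0), 1.0 / R_DRV, v_row)       # row driver at the left edge
    for j in range(N):
        stamp_source(bl(N - 1, j), 1.0 / R_SENSE, 0.0)   # column termination at the bottom

    v = np.linalg.solve(G, I)
    return v[bl(N - 1, sel_col)] / R_SENSE               # current into the sense node

# Read an HRS cell in the far corner (worst interconnect drop) under two
# background patterns, to show that the sensed current is pattern dependent.
sel = (0, N - 1)
for label, background_lrs in [("unselected cells all HRS", False),
                              ("unselected cells all LRS", True)]:
    mask = np.full((N, N), background_lrs)
    mask[sel] = False                                    # selected cell is HRS
    print(f"{label}: I_sense = {sense_current(*sel, mask) * 1e6:.2f} uA")
```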


Risk Analysis ◽  
2003 ◽  
Vol 23 (5) ◽  
pp. 865-881 ◽  
Author(s):  
Paul R. Kleindorfer ◽  
James C. Belke ◽  
Michael R. Elliott ◽  
Kiwan Lee ◽  
Robert A. Lowe ◽  
...  

Author(s):  
Pooya Davoodi ◽  
Gonzalo Navarro ◽  
Rajeev Raman ◽  
S. Srinivasa Rao

We consider the problem of encoding range minimum queries (RMQs): given an array A[1..n] of distinct totally ordered values, to pre-process A and create a data structure that can answer the query RMQ(i, j), which returns the index containing the smallest element in A[i..j], without access to the array A at query time. We give a data structure whose space usage is 2n + o(n) bits, which is asymptotically optimal for worst-case data, and answers RMQs in O(1) worst-case time. This matches the previous result of Fischer and Heun, but is obtained in a more natural way. Furthermore, our result can encode the RMQs of a random array A in 1.919n + o(n) bits in expectation, which is not known to hold for Fischer and Heun's result. We then generalize our result to the encoding range top-2 query (RT2Q) problem, which is like the encoding RMQ problem except that the query RT2Q(i, j) returns the indices of both the smallest and second smallest elements of A[i..j]. We introduce a data structure using 3.272n + o(n) bits that answers RT2Qs in constant time, and also give lower bounds on the effective entropy of the RT2Q problem.
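
The encoding idea can be illustrated with the Cartesian tree of A: the tree's shape alone determines every RMQ answer, so A can be discarded after preprocessing. The naive sketch below stores explicit parent pointers and walks to the lowest common ancestor, so it comes nowhere near the 2n + o(n)-bit, O(1)-time bounds of the paper; it only demonstrates that queries need no access to A.

```python
# Encoding sketch: RMQ(i, j) equals the lowest common ancestor of i and j in the
# min-Cartesian tree of A, so only the tree topology is kept after construction.

def cartesian_tree_parents(A):
    """Return parent[i] for the min-Cartesian tree of A (the root's parent is -1)."""
    parent = [-1] * len(A)
    stack = []                                   # indices with increasing values
    for i, x in enumerate(A):
        last = -1
        while stack and A[stack[-1]] > x:
            last = stack.pop()
        if stack:
            parent[i] = stack[-1]
        if last != -1:
            parent[last] = i
        stack.append(i)
    return parent

class EncodedRMQ:
    def __init__(self, A):
        self.parent = cartesian_tree_parents(A)  # tree shape only; A is dropped
        self.depth = [-1] * len(A)
        for i in range(len(A)):
            self._depth(i)

    def _depth(self, i):
        if self.depth[i] == -1:
            self.depth[i] = 0 if self.parent[i] == -1 else self._depth(self.parent[i]) + 1
        return self.depth[i]

    def rmq(self, i, j):
        """Index of the minimum of A[i..j] = LCA of i and j in the Cartesian tree."""
        while i != j:                            # walk the deeper node upward
            if self.depth[i] > self.depth[j]:
                i = self.parent[i]
            else:
                j = self.parent[j]
        return i

A = [5, 2, 7, 1, 9, 4, 8]
q = EncodedRMQ(A)                                # after this line, A is never read again
assert q.rmq(0, 2) == 1 and q.rmq(2, 6) == 3 and q.rmq(4, 6) == 5
```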


1999 ◽  
Vol 10 (01) ◽  
pp. 1-17 ◽  
Author(s):  
SEONGHUN CHO ◽  
SARTAJ SAHNI

We show that the leftist tree data structure may be adapted to obtain data structures that permit the double-ended priority queue operations Insert, DeleteMin, DeleteMax, and Merge to be done in O(log n) time, where n is the size of the resulting queue. The operations FindMin and FindMax can be done in O(1) time. Experimental results are also presented.
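
The core operation the paper builds on is the O(log n) leftist-tree merge, of which Insert and DeleteMin are special cases. The sketch below is the ordinary single-ended min leftist tree; the paper's double-ended adaptation (which also supports DeleteMax and FindMax) is not reproduced here.

```python
# Minimal min leftist tree: Merge recurses only down rightmost paths, whose
# length is O(log n), which is what gives Insert and DeleteMin their bound.

class Node:
    __slots__ = ("key", "left", "right", "s")
    def __init__(self, key):
        self.key, self.left, self.right, self.s = key, None, None, 1

def s_val(node):                       # s-value: length of the rightmost path
    return node.s if node else 0

def merge(a, b):
    """Merge two leftist trees in O(log n)."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                    # keep the smaller root on top (min-heap order)
    a.right = merge(a.right, b)
    if s_val(a.left) < s_val(a.right): # restore the leftist property by swapping children
        a.left, a.right = a.right, a.left
    a.s = s_val(a.right) + 1
    return a

def insert(root, key):                 # O(log n): merge with a one-node tree
    return merge(root, Node(key))

def delete_min(root):                  # O(log n): remove the root, merge its subtrees
    return root.key, merge(root.left, root.right)

root = None
for k in [7, 3, 9, 1, 5]:
    root = insert(root, k)
out = []
while root:                            # FindMin would simply read root.key in O(1)
    k, root = delete_min(root)
    out.append(k)
assert out == [1, 3, 5, 7, 9]
```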


1998 ◽  
Vol 8 ◽  
pp. 67-91 ◽  
Author(s):  
A. Moore ◽  
M. S. Lee

This paper introduces new algorithms and data structures for quick counting for machine learning datasets. We focus on the counting task of constructing contingency tables, but our approach is also applicable to counting the number of records in a dataset that match conjunctive queries. Subject to certain assumptions, the costs of these operations can be shown to be independent of the number of records in the dataset and loglinear in the number of non-zero entries in the contingency table. We provide a very sparse data structure, the ADtree, to minimize memory use. We provide analytical worst-case bounds for this structure for several models of data distribution. We empirically demonstrate that tractably-sized data structures can be produced for large real-world datasets by (a) using a sparse tree structure that never allocates memory for counts of zero, (b) never allocating memory for counts that can be deduced from other counts, and (c) not bothering to expand the tree fully near its leaves. We show how the ADtree can be used to accelerate Bayes net structure finding algorithms, rule learning algorithms, and feature selection algorithms, and we provide a number of empirical results comparing ADtree methods against traditional direct counting approaches. We also discuss the possible uses of ADtrees in other machine learning methods, and discuss the merits of ADtrees in comparison with alternative representations such as kd-trees, R-trees and Frequent Sets.
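
A stripped-down sketch of the counting idea is given below: a sparse count tree that never allocates a node for a zero count and answers conjunctive count queries by descending the tree rather than scanning the records. The ADtree's key space optimisations (deducing most-common-value subtrees from the remaining counts, and not expanding the tree near its leaves) are deliberately omitted; the attribute-indexed layout here is an assumption of the sketch.

```python
# Sparse count tree: each path root -> (a1=v1) -> (a2=v2) -> ... (attributes in
# increasing order) stores the number of records matching that conjunction.
# Zero counts are never stored, which is the ADtree's basic sparsity idea.

class CountNode:
    def __init__(self):
        self.count = 0
        self.children = {}            # children[attr_index][value] -> CountNode

def _add(node, rec, start):
    node.count += 1
    for a in range(start, len(rec)):  # extend the conjunction with attributes >= start
        child = node.children.setdefault(a, {}).setdefault(rec[a], CountNode())
        _add(child, rec, a + 1)

def build(records):
    root = CountNode()
    for rec in records:
        _add(root, rec, 0)
    return root

def count(node, query, start=0):
    """Number of records matching the conjunctive query {attr_index: value}."""
    todo = [a for a in sorted(query) if a >= start]
    if not todo:
        return node.count
    a = todo[0]
    child = node.children.get(a, {}).get(query[a])
    return 0 if child is None else count(child, query, a + 1)

records = [("sunny", "hot", "no"), ("sunny", "mild", "yes"),
           ("rain", "mild", "yes"), ("rain", "hot", "no")]
tree = build(records)
assert count(tree, {0: "rain"}) == 2
assert count(tree, {1: "mild", 2: "yes"}) == 2
assert count(tree, {0: "sunny", 1: "hot", 2: "yes"}) == 0   # absent node => zero count
```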


Author(s):  
F. Çetin ◽  
M. O. Kulekci

Abstract. This paper presents a study that compares three space-partitioning and spatial-indexing techniques: the KD Tree, the Quad KD Tree, and the PR Tree. The KD Tree is a data structure proposed by Bentley (Bentley and Friedman, 1979) that clusters objects according to their spatial location. The Quad KD Tree is a data structure proposed by Bereczky (Bereczky et al., 2014) that partitions objects using heuristic methods. Unlike Bereczky's partitioning technique, a new space-driven partitioning technique is presented in the context of this study. The PR Tree, proposed by Arge (Arge et al., 2008), is an asymptotically optimal R-Tree variant that enables data-driven segmentation. The main aim of this study is to search and render big spatial data in a real-time, safety-critical avionics navigation map application. Such a real-time system needs to reach the required records inside a specific boundary efficiently, so performing range queries at runtime (such as finding the closest neighbors) is critical for performance. The most important purpose of these data structures is to reduce the number of comparisons needed to solve the range searching problem. In this study, the data structures are created and indexed, and worst-case analyses covering the whole area are carried out to measure range search performance. The techniques are also benchmarked by elapsed time and memory usage. As a result of these experiments, the Quad KD Tree outperformed the other techniques in the range search analysis, especially when the data set is massive and consists of different geometry types.
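
For reference, the sketch below is the classic point KD Tree with a rectangular range query, i.e. the baseline structure the compared variants extend; the heuristic partitioning of the Quad KD Tree and the bulk-loaded PR Tree are not reproduced here.

```python
# 2-D KD Tree: split axis alternates with depth; a range query prunes whole
# half-spaces, which is what reduces the number of comparisons versus a scan.
import random

def build(points, depth=0):
    """Recursively build a node as (point, axis, left, right)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def range_search(node, lo, hi, out):
    """Collect every point p with lo[k] <= p[k] <= hi[k] for k = 0, 1."""
    if node is None:
        return
    point, axis, left, right = node
    if all(lo[k] <= point[k] <= hi[k] for k in (0, 1)):
        out.append(point)
    if lo[axis] <= point[axis]:        # the query box reaches into the left half-space
        range_search(left, lo, hi, out)
    if point[axis] <= hi[axis]:        # ... and/or into the right half-space
        range_search(right, lo, hi, out)

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(1000)]
tree = build(pts)
found = []
range_search(tree, (0.2, 0.2), (0.4, 0.4), found)
assert sorted(found) == sorted(p for p in pts
                               if 0.2 <= p[0] <= 0.4 and 0.2 <= p[1] <= 0.4)
```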


10.29007/4w68 ◽  
2018 ◽  
Author(s):  
Giles Reger ◽  
Martin Suda

Reasoning in a saturation-based first-order theorem prover is generally expensive, involving complex term-indexing data structures and inferences such as subsumption resolution, whose worst-case running time is exponential in the length of the clause. In contrast, SAT solvers are very cheap, being able to solve large problems quickly and with relatively little memory overhead. Consequently, utilising this cheap power within Vampire to carry out certain tasks has proven highly successful. We give an overview of the different ways that SAT solvers are utilised within Vampire and discuss further ways in which this usage could be extended.
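
The cost of subsumption mentioned above can be made concrete with a small backtracking check: deciding whether clause C subsumes clause D means searching for a substitution that maps every literal of C onto some literal of D, and that search is worst-case exponential in the clause length. The term and clause representation below is a toy assumption of the sketch, not Vampire's.

```python
# Clause subsumption by backtracking: terms are tuples ('f', arg1, ...), strings
# starting with an uppercase letter are variables, literals are (sign, atom).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, term, subst):
    """One-way matching: extend subst so that pattern*subst == term, or return None."""
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term) and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def subsumes(c, d, subst=None, i=0):
    """Does clause c subsume clause d? Tries every way of mapping c's literals onto d."""
    subst = {} if subst is None else subst
    if i == len(c):
        return True
    sign_c, atom_c = c[i]
    for sign_d, atom_d in d:
        if sign_c == sign_d:
            s2 = match(atom_c, atom_d, subst)
            if s2 is not None and subsumes(c, d, s2, i + 1):
                return True                       # backtrack otherwise
    return False

# C = { p(X, Y), p(Y, Z) } subsumes D = { p(a, b), p(b, c), ~q(a) } via X=a, Y=b, Z=c.
C = [(True, ('p', 'X', 'Y')), (True, ('p', 'Y', 'Z'))]
D = [(True, ('p', 'a', 'b')), (True, ('p', 'b', 'c')), (False, ('q', 'a'))]
assert subsumes(C, D)
assert not subsumes([(True, ('p', 'X', 'X'))], D)  # no literal of the form p(t, t) in D
```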

