PATH CONSISTENCY REVISITED

1996 ◽  
Vol 05 (01n02) ◽  
pp. 127-141 ◽  
Author(s):  
MONINDER SINGH

One of the main factors limiting the use of path consistency algorithms in real-life applications is their high space complexity. Han and Lee proposed a path consistency algorithm, PC-4, with O(n³a³) space complexity, which makes it practicable only for small problems. I present a new path consistency algorithm, PC-5, which has O(n³a²) space complexity while retaining the worst-case time complexity of PC-4. Moreover, the new algorithm exhibits a much better average-case time complexity. The new algorithm is based on the idea (due to Bessiere) that, at any time, only a minimal amount of support has to be found and recorded for a labeling to establish its viability; one has to look for a new support only if the current support is eliminated. I also show that PC-5 can be improved further to yield an algorithm, PC-5++, with even better average-case performance and the same space complexity.
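
For illustration only, a minimal Python sketch of the single-support bookkeeping the abstract credits to Bessiere (not the published PC-5 code); the `allowed` and `domains` structures are assumptions made for this example:

```python
def find_support(allowed, domains, i, a, j, b, k, start=0):
    """Return the index in domains[k] of the first value c at position >= start
    that supports the pair ((i, a), (j, b)) through variable k, or None if no
    support is left. Deleted domain values are assumed to be marked as None."""
    for idx in range(start, len(domains[k])):
        c = domains[k][idx]
        if c is not None and allowed[i][k].get((a, c)) and allowed[k][j].get((c, b)):
            return idx
    return None

# Only one support per (labeling, third variable) is recorded, together with
# the index where the scan stopped; when that supporting value is deleted, the
# search resumes at the next index instead of restarting.  Storing a single
# support rather than full support lists is what brings the space requirement
# down to O(n^3 a^2) while keeping the PC-4 worst-case time bound.
```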

Author(s):  
Sanjay Ram ◽  
Somnath Pal

There are two approaches to the classification of chemical reactions: Model-Driven and Data-Driven. In this paper, the authors develop an efficient algorithm based on a model-driven approach developed by Ugi and co-workers for the classification of chemical reactions. The authors' algorithm takes the reaction matrix of a chemical reaction as input and generates its appropriate class as output. Since reaction matrices are symmetric, a matrix implementation of Ugi's scheme using an upper/lower triangular matrix has O(n²) space complexity. The time complexity of such a matrix implementation is O(n⁴), in both the worst and the average case. The proposed algorithm uses two fixed-size look-up tables in a novel way and requires only constant space. Its time complexity is linear in both the worst and the average case.


2011 ◽  
Vol 03 (04) ◽  
pp. 457-471 ◽  
Author(s):  
B. BALAMOHAN ◽  
P. FLOCCHINI ◽  
A. MIRI ◽  
N. SANTORO

In a network environment supporting mobile entities (called robots or agents), a black hole is a harmful site that destroys any incoming entity without leaving any visible trace. The black-hole search problem is the task of a team of k > 1 mobile entities, starting from the same safe location and executing the same algorithm, to determine within finite time the location of the black hole. In this paper, we consider the black-hole search problem in asynchronous ring networks of n nodes, and focus on time complexity. It is known that any algorithm for black-hole search in a ring requires at least 2(n - 2) time in the worst case. The best known algorithm achieves this bound with a team of n - 1 agents, with an average time cost of 2(n - 2), equal to the worst case. In this paper, we first show that the same number of agents, using 2 extra time units in the worst case, can solve the problem in only [Formula: see text] time on average. We then prove that the optimal average-case complexity of [Formula: see text] can be achieved without increasing the worst case by using 2(n - 1) agents. Finally, we design an algorithm that achieves asymptotically optimal worst-case and average-case time complexities employing an optimal team of k = 2 agents, thus improving on earlier results that required O(n) agents.


2020 ◽  
Author(s):  
Ahsan Sanaullah ◽  
Degui Zhi ◽  
Shaojie Zhang

Durbin’s PBWT, a scalable data structure for haplotype matching, has been successfully applied to identical by descent (IBD) segment identification and genotype imputation. Once the PBWT of a haplotype panel is constructed, it supports efficient retrieval of all shared long segments among all individuals (long matches) and efficient query between an external haplotype and the panel. However, the standard PBWT is an array-based static data structure and does not support dynamic updates of the panel. Here, we generalize the static PBWT to a dynamic data structure, d-PBWT, where the reverse prefix sorting at each position is represented by linked lists. We developed efficient algorithms for insertion and deletion of individual haplotypes. In addition, we verified that d-PBWT can support all algorithms of PBWT. In doing so, we systematically investigated variations of the set maximal match and long match query algorithms: while they all have average-case time complexity independent of database size, they differ in their worst-case complexities, in whether they are linear in the size of the genome, and in their dependency on additional data structures.
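
For context, a minimal Python sketch of the static, array-based PBWT column update that d-PBWT re-implements with doubly linked lists; the `haps` input format (a list of 0/1 lists, one per haplotype) is an assumption of this example rather than the paper's interface:

```python
def next_order(haps, order, k):
    """Given the reverse-prefix ordering `order` of haplotypes before site k,
    the ordering after site k is a stable partition on the allele at site k:
    haplotypes carrying 0 first, then haplotypes carrying 1."""
    zeros = [h for h in order if haps[h][k] == 0]
    ones = [h for h in order if haps[h][k] == 1]
    return zeros + ones

def build_pbwt_orders(haps):
    """Return the reverse-prefix ordering after every site (O(MN) in total)."""
    order = list(range(len(haps)))
    orders = []
    for k in range(len(haps[0])):
        order = next_order(haps, order, k)
        orders.append(order)
    return orders

# d-PBWT keeps each of these orderings as a linked list, so inserting or
# deleting a single haplotype only splices nodes in and out of the lists
# instead of rebuilding the static arrays.
```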


Author(s):  
Adrijan Božinovski ◽  
George Tanev ◽  
Biljana Stojčevska ◽  
Veno Pačovski ◽  
Nevena Ackovska

This paper presents the time complexity analysis of the Binary Tree Roll algorithm. The time complexity is analyzed theoretically and the results are then confirmed empirically. The theoretical analysis consists of finding recurrence relations for the time complexity and solving them using various methods. The empirical analysis consists of exhaustively testing all trees with a given number of nodes and counting the minimum and maximum steps necessary to complete the roll algorithm. The time complexity is shown, both theoretically and empirically, to be linear in the best case and quadratic in the worst case, whereas its average case is shown to be dominantly linear for trees with a relatively small number of nodes and dominantly quadratic otherwise.
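
A minimal Python sketch of the exhaustive empirical harness described above; `roll_steps` is a hypothetical stand-in for an instrumented Binary Tree Roll that reports its step count, not the published algorithm itself:

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def all_topologies(n):
    """Yield every binary tree shape with exactly n nodes (Catalan(n) shapes)."""
    if n == 0:
        yield None
        return
    for left_size in range(n):
        for left in all_topologies(left_size):
            for right in all_topologies(n - 1 - left_size):
                yield Node(left, right)

def min_max_steps(n, roll_steps):
    """Run the (hypothetical) instrumented roll on every topology with n nodes
    and return the minimum and maximum observed step counts."""
    counts = [roll_steps(tree) for tree in all_topologies(n)]
    return min(counts), max(counts)
```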


Author(s):  
Adrijan Božinovski ◽  
George Tanev ◽  
Biljana Stojčevska ◽  
Veno Pačovski ◽  
Nevena Ackovska

This paper presents the space complexity analysis of the Binary Tree Roll algorithm. The space complexity is analyzed theoretically and the results are then confirmed empirically. The theoretical analysis consists of determining the amount of memory occupied during the execution of the algorithm and deriving functions for it, in terms of the number of nodes n of the tree, for the worst- and best-case scenarios. The empirical analysis of the space complexity consists of measuring the maximum and minimum amounts of memory occupied during the execution of the algorithm, for all binary tree topologies with the given number of nodes. The space complexity is shown, both theoretically and empirically, to be logarithmic in the best case and linear in the worst case, whereas its average case is shown to be dominantly logarithmic.


2018 ◽  
Vol 25 (1) ◽  
pp. 123-134 ◽  
Author(s):  
Nodari Vakhania

The computational complexity of an algorithm is traditionally measured for the worst and the average case. The worst-case estimation guarantees a certain worst-case behavior of a given algorithm, although it might be rough, since in “most instances” the algorithm may perform significantly better. The probabilistic average-case analysis claims to derive the average performance of an algorithm, say, for an “average instance” of the problem in question. That instance may be far from the average of the problem instances arising in a given real-life application, and so the average-case analysis would also provide an unrealistic estimate. We suggest that, in general, probabilistic models could be used more widely for a more accurate estimation of algorithm efficiency. For instance, the quality of the solutions delivered by an approximation algorithm may also be estimated in the “average” probabilistic case. Such an approach would deal with estimating the quality of the solutions delivered by the algorithm for the most common (for a given application) problem instances. As we illustrate, probabilistic modeling can also be used to derive an accurate time complexity performance measure, distinct from the traditional probabilistic average-case time complexity measure. Such an approach could, in particular, be useful when the traditional average-case estimation is still rough or is not possible at all.
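
As a rough illustration of the application-driven estimation argued for here, a minimal Python sketch that profiles an algorithm over instances drawn from an application-specific distribution; `sample_instance` and `algorithm` are hypothetical placeholders, not objects from the paper:

```python
import statistics
import time

def empirical_profile(sample_instance, algorithm, trials=1000):
    """Measure running times over instances drawn from the distribution that a
    particular application produces, rather than over a single worst-case or
    textbook 'average' instance, and report empirical summary statistics."""
    times = []
    for _ in range(trials):
        instance = sample_instance()          # application-specific generator
        start = time.perf_counter()
        algorithm(instance)
        times.append(time.perf_counter() - start)
    times.sort()
    return {
        "mean": statistics.mean(times),
        "median": statistics.median(times),
        "p95": times[int(0.95 * (len(times) - 1))],
        "max": times[-1],
    }
```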


Author(s):  
Shyantani Maiti ◽  
Sanjay Ram ◽  
Somnath Pal

The first step in predicting the outcome of a chemical reaction is to classify existing chemical reactions, on the basis of which the possible outcome of an unknown reaction can be predicted. There are two approaches to the classification of chemical reactions: Model-Driven and Data-Driven. In the model-driven approach, chemical structures are usually stored in a computer as molecular graphs. Such graphs can also be represented as matrices. The most preferred matrix representation for storing a molecular graph is the Bond-Electron matrix (BE-matrix). Ugi and his co-workers showed that the Reaction matrix (R-matrix) of a chemical reaction can be obtained from the BE-matrices of the educts and products. Ugi's scheme comprises 30 reaction classes according to which reactions can be classified, but several reactions could not be classified into any of these classes. About 4000 reactions from The Chemical Thesaurus (a chemical reaction database) were studied in this work, and accordingly 24 new classes emerged, leading to an extension of Ugi's scheme. An efficient algorithm based on the extended Ugi's scheme has been developed for the classification of chemical reactions. Since reaction matrices are symmetric, a matrix implementation of the extended Ugi's scheme using a conventional upper/lower triangular matrix has O(n²) space complexity. The time complexity of such a matrix implementation is O(n²) in the worst case. The authors' proposed algorithm uses two fixed-size look-up tables in a novel way and requires constant space. Although its worst-case time complexity is still O(n²), it outperforms the conventional matrix implementation when the number of atoms or components in the chemical reaction is 4 or more.
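
For illustration, a minimal Python sketch of the matrix relation the scheme relies on (the R-matrix as the difference of the product and educt BE-matrices) followed by a conventional triangular-matrix scan; the `lookup` argument is a hypothetical placeholder and does not reproduce the authors' fixed-size look-up tables or their linear-scan algorithm:

```python
def reaction_matrix(be_educts, be_products):
    """R = B_products - B_educts (Ugi/Dugundji formalism), entrywise."""
    n = len(be_educts)
    return [[be_products[i][j] - be_educts[i][j] for j in range(n)]
            for i in range(n)]

def classify(r_matrix, lookup):
    """Illustrative conventional implementation: scan only the upper triangle
    of the symmetric R-matrix, summarize the bond changes, and map the summary
    to a class label via a placeholder lookup table."""
    made = broken = 0
    n = len(r_matrix)
    for i in range(n):
        for j in range(i + 1, n):
            if r_matrix[i][j] > 0:
                made += 1        # bond formed
            elif r_matrix[i][j] < 0:
                broken += 1      # bond broken
    return lookup.get((made, broken), "unclassified")
```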


Sorting is a basic activity in the field of computer science and is commonly used in searching for information and data. The main goal of sorting is to make reports or records easier to edit, delete, search, and so on; it organizes the given data into a particular sequence. There are many sorting algorithms, such as insertion sort, bubble sort, radix sort, and heap sort. Bubble sort and insertion sort are clearly described with algorithms and examples. In this paper, a performance analysis of bubble sort and insertion sort is carried out by calculating the time complexity. The time complexities were obtained by implementing the algorithms in the Rust and Python languages and observing the best, average, and worst cases. The flowchart shows the complete workflow of this study. The results are shown graphically and the time complexities are presented in tabular form. We have compared the efficiency of the bubble sort and insertion sort algorithms on the Rust and Python platforms. Rust is more efficient than Python for both the bubble sort and insertion sort algorithms. However, insertion sort is observed to be more efficient than bubble sort.
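
A minimal sketch of the comparison described above, in Python only; the study also benchmarks Rust implementations, which are not reproduced here:

```python
import random
import time

def bubble_sort(a):
    a = list(a)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:              # best case: already sorted, O(n)
            break
    return a

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

if __name__ == "__main__":
    data = [random.randint(0, 10**6) for _ in range(5000)]
    for sort in (bubble_sort, insertion_sort):
        start = time.perf_counter()
        sort(data)
        print(sort.__name__, round(time.perf_counter() - start, 3), "s")
```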


2020 ◽  
Vol 15 (1) ◽  
pp. 60-71
Author(s):  
Thijs Laarhoven

We revisit the approximate Voronoi cells approach for solving the closest vector problem with preprocessing (CVPP) on high-dimensional lattices, and settle the open problem of Doulgerakis–Laarhoven–De Weger [PQCrypto, 2019] of determining exact asymptotics on the volume of these Voronoi cells under the Gaussian heuristic. As a result, we obtain improved upper bounds on the time complexity of the randomized iterative slicer when using less than $2^{0.076d + o(d)}$ memory, and we show how to obtain time–memory trade-offs even when using less than $2^{0.048d + o(d)}$ memory. We also settle the open problem of obtaining a continuous trade-off between the size of the advice and the query time complexity, as the time complexity with subexponential advice in our approach scales as $d^{d/2 + o(d)}$, matching worst-case enumeration bounds and achieving the same asymptotic scaling as average-case enumeration algorithms for the closest vector problem.

