The Average-Case Time Complexity of Certifying the Restricted Isometry Property

Author(s):  
Yunzi Ding ◽  
Dmitriy Kunisky ◽  
Alexander S. Wein ◽  
Afonso S. Bandeira
1995 ◽  
Vol 05 (02) ◽  
pp. 275-280 ◽  
Author(s):  
BEATE BOLLIG ◽  
MARTIN HÜHNE ◽  
STEFAN PÖLT ◽  
PETR SAVICKÝ

For circuits, the expected delay is a suitable measure of average-case time complexity. In this paper, new upper and lower bounds on the expected delay of circuits for disjunction and conjunction are derived. The circuits presented yield asymptotically optimal expected delay for a wide class of input distributions, even when the parameters of the distribution are not known in advance.
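
As an illustration of the expected-delay measure itself (not of the constructions in the paper), the following sketch estimates the expected delay of a balanced OR tree under i.i.d. Bernoulli(p) inputs. The unit gate delay and the convention that an OR gate's output is determined as soon as one input known to be 1 arrives are assumptions made only for this example.

    import random

    def or_tree_delay(bits):
        # Returns (value, delay) for a balanced OR tree over `bits`,
        # assuming unit gate delay and inputs available at time 0.
        if len(bits) == 1:
            return bits[0], 0
        mid = len(bits) // 2
        lv, ld = or_tree_delay(bits[:mid])
        rv, rd = or_tree_delay(bits[mid:])
        if lv and rv:
            return 1, 1 + min(ld, rd)          # either 1 determines the output
        if lv or rv:
            return 1, 1 + (ld if lv else rd)   # must wait for the 1-input
        return 0, 1 + max(ld, rd)              # output 0 needs both inputs known

    def expected_delay(n, p, trials=10000):
        total = 0
        for _ in range(trials):
            bits = [1 if random.random() < p else 0 for _ in range(n)]
            total += or_tree_delay(bits)[1]
        return total / trials

    print(expected_delay(64, 0.5))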


Author(s):  
Sanjay Ram ◽  
Somnath Pal

There are two approaches to the classification of chemical reactions: model-driven and data-driven. In this paper, the authors develop an efficient algorithm based on the model-driven approach of Ugi and co-workers for classification of chemical reactions. The algorithm takes the reaction matrix of a chemical reaction as input and outputs its appropriate class. Since reaction matrices are symmetric, a matrix implementation of Ugi's scheme using an upper/lower triangular matrix has O(n²) space complexity, and the time complexity of such an implementation is O(n⁴) in both the worst and the average case. The proposed algorithm uses two fixed-size look-up tables in a novel way and requires only constant space; its time complexity is linear in both the worst and the average case.
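
A minimal sketch of the general idea, constant extra space and a single linear pass over the bond-change entries of a reaction matrix, is shown below. The signature construction and the class names are purely illustrative placeholders, not Ugi's actual classification scheme.

    def classify_reaction(changes):
        # `changes`: iterable of (i, j, delta) for the nonzero upper-triangle
        # entries of a reaction matrix (delta = change in bond order).
        # One linear pass, constant extra space; the two fixed-size tables
        # and the class names below are hypothetical stand-ins.
        delta_table = {-1: "broken", +1: "formed"}
        class_table = {(0, 0): "no reaction",
                       (1, 1): "substitution-like",
                       (2, 2): "addition/elimination-like"}   # placeholder classes

        broken = formed = 0
        for i, j, delta in changes:
            kind = delta_table.get(delta)
            if kind == "broken":
                broken += 1
            elif kind == "formed":
                formed += 1
        return class_table.get((broken, formed), "unclassified")

    # Example: one bond broken, one bond formed
    print(classify_reaction([(0, 1, -1), (1, 2, +1)]))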


2011 ◽  
Vol 03 (04) ◽  
pp. 457-471 ◽  
Author(s):  
B. BALAMOHAN ◽  
P. FLOCCHINI ◽  
A. MIRI ◽  
N. SANTORO

In a network environment supporting mobile entities (called robots or agents), a black hole is a harmful site that destroys any incoming entity without leaving any visible trace. The black-hole search problem is the task of a team of k > 1 mobile entities, starting from the same safe location and executing the same algorithm, to determine within finite time the location of the black hole. In this paper, we consider the black-hole search problem in asynchronous ring networks of n nodes, and focus on time complexity. It is known that any algorithm for black-hole search in a ring requires at least 2(n - 2) time in the worst case. The best known algorithm achieves this bound with a team of n - 1 agents, with an average time cost of 2(n - 2), equal to the worst case. In this paper, we first show how the same number of agents, using 2 extra time units in the worst case, can solve the problem in only [Formula: see text] time on average. We then prove that the optimal average-case complexity of [Formula: see text] can be achieved without increasing the worst case by using 2(n - 1) agents. Finally, we design an algorithm that achieves asymptotically optimal worst- and average-case time complexities while employing an optimal team of k = 2 agents, thus improving on earlier results that required O(n) agents.


2020 ◽  
Author(s):  
Ahsan Sanaullah ◽  
Degui Zhi ◽  
Shaojie Zhang

Durbin's PBWT, a scalable data structure for haplotype matching, has been successfully applied to identical-by-descent (IBD) segment identification and genotype imputation. Once the PBWT of a haplotype panel is constructed, it supports efficient retrieval of all shared long segments among all individuals (long matches) and efficient query between an external haplotype and the panel. However, the standard PBWT is an array-based static data structure and does not support dynamic updates of the panel. Here, we generalize the static PBWT to a dynamic data structure, d-PBWT, where the reverse prefix sorting at each position is represented by linked lists. We developed efficient algorithms for insertion and deletion of individual haplotypes. In addition, we verified that d-PBWT can support all algorithms of PBWT. In doing so, we systematically investigated variations of the set maximal match and long match query algorithms: while they all have average-case time complexity independent of database size, they differ in worst-case complexity, in whether their running time is linear in the genome size, and in their dependence on additional data structures.
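
For context, here is a minimal sketch of the static construction that d-PBWT generalizes: the positional prefix arrays of Durbin's PBWT, built with one stable partition per site. The linked-list representation and the insertion/deletion algorithms of d-PBWT are not shown.

    def build_positional_prefix_arrays(haplotypes):
        # haplotypes: list of equal-length 0/1 lists (one per individual).
        # Returns, for each site k, the ordering of haplotype indices sorted
        # by their reversed prefixes ending just before site k.
        M = len(haplotypes)
        order = list(range(M))
        orders = [order[:]]
        num_sites = len(haplotypes[0]) if M else 0
        for k in range(num_sites):
            zeros, ones = [], []
            for i in order:                     # stable partition by allele at site k
                (zeros if haplotypes[i][k] == 0 else ones).append(i)
            order = zeros + ones
            orders.append(order[:])
        return orders

    panel = [[0, 1, 0, 1],
             [1, 1, 0, 0],
             [0, 1, 1, 1]]
    for k, ppa in enumerate(build_positional_prefix_arrays(panel)):
        print(k, ppa)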


Author(s):  
Adrijan Božinovski ◽  
George Tanev ◽  
Biljana Stojčevska ◽  
Veno Pačovski ◽  
Nevena Ackovska

This paper presents the time complexity analysis of the Binary Tree Roll algorithm. The time complexity is analyzed theoretically and the results are then confirmed empirically. The theoretical analysis consists of finding recurrence relations for the time complexity and solving them using various methods. The empirical analysis consists of exhaustively testing all trees with given numbers of nodes and counting the minimum and maximum steps necessary to complete the roll algorithm. The time complexity is shown, both theoretically and empirically, to be linear in the best case and quadratic in the worst case, whereas the average case is shown to be dominantly linear for trees with a relatively small number of nodes and dominantly quadratic otherwise.
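
A minimal sketch of the empirical side of such an analysis: enumerate every binary tree shape with n nodes and record the minimum and maximum step counts. The step-counting routine below is a simple traversal used purely as a runnable stand-in, since the actual roll algorithm is not reproduced here.

    def all_tree_shapes(n):
        # Yield every binary tree shape with n nodes as nested (left, right) tuples.
        if n == 0:
            yield None
            return
        for left_size in range(n):
            for left in all_tree_shapes(left_size):
                for right in all_tree_shapes(n - 1 - left_size):
                    yield (left, right)

    def count_steps(tree):
        # Stand-in for the instrumented roll routine: counts the nodes on the
        # leftmost path, only to make the harness runnable.
        steps = 0
        while tree is not None:
            steps += 1
            tree = tree[0]
        return steps

    def min_max_steps(n):
        counts = [count_steps(t) for t in all_tree_shapes(n)]
        return min(counts), max(counts)

    for n in range(1, 8):
        print(n, min_max_steps(n))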


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259736
Author(s):  
Arindam Saha ◽  
James A. R. Marshall ◽  
Andreagiovanni Reina

Node counting on a graph is subject to some fundamental theoretical limitations, yet a solution to such problems is necessary in many applications of graph theory to real-world systems, such as collective robotics and distributed sensor networks. Thus, several stochastic and naïve deterministic algorithms for distributed graph size estimation or calculation have been proposed. Here we present a deterministic and distributed algorithm that allows every node of a connected graph to determine the graph size in finite time, provided an upper bound on the graph size is known. The algorithm consists of the iterative aggregation of information at local hubs, which then broadcast it throughout the whole graph. The proposed node-counting algorithm is on average more efficient in terms of node memory and communication cost than its previous deterministic counterpart for node counting, and appears comparable or more efficient in terms of average-case time complexity. Beyond node counting, the algorithm is more broadly applicable to problems such as summation over graphs, quorum sensing, and spontaneous hierarchy creation.
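
As a point of reference, a minimal sketch of the kind of naïve deterministic baseline such hub-based algorithms are compared against: every node floods the identifiers it knows for a number of rounds bounded by the given upper bound on the graph size, after which each node counts the distinct identifiers it has seen. This is not the authors' hub-based algorithm.

    def naive_distributed_count(adjacency, upper_bound):
        # adjacency: dict mapping node id -> list of neighbour ids (connected graph).
        # Each node starts knowing only its own id; in every synchronous round it
        # merges its neighbours' known-id sets. After at most upper_bound - 1 rounds
        # (at least the graph diameter) every node knows all ids.
        known = {v: {v} for v in adjacency}
        for _ in range(upper_bound - 1):
            new_known = {v: set(known[v]) for v in adjacency}
            for v, neighbours in adjacency.items():
                for u in neighbours:
                    new_known[v] |= known[u]
            known = new_known
        return {v: len(ids) for v, ids in known.items()}   # each node's count

    ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(naive_distributed_count(ring, upper_bound=10))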


2018 ◽  
Vol 25 (1) ◽  
pp. 123-134 ◽  
Author(s):  
Nodari Vakhania

The computational complexity of an algorithm is traditionally measured for the worst and the average case. The worst-case estimation guarantees a certain worst-case behavior of a given algorithm, although it might be rough, since on "most instances" the algorithm may perform significantly better. The probabilistic average-case analysis claims to derive the average performance of an algorithm, say, for an "average instance" of the problem in question. That instance may be far from the average of the problem instances arising in a given real-life application, so the average-case analysis may also give an unrealistic estimate. We suggest that, in general, probabilistic models could be used more widely for a more accurate estimation of algorithm efficiency. For instance, the quality of the solutions delivered by an approximation algorithm may also be estimated in the "average" probabilistic case. Such an approach would estimate the quality of the solutions delivered by the algorithm on the problem instances most common for a given application. As we illustrate, probabilistic modeling can also be used to derive an accurate time complexity performance measure, distinct from the traditional probabilistic average-case time complexity measure. Such an approach could, in particular, be useful when the traditional average-case estimation is still rough or is not possible at all.
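
A small illustration of the distinction being drawn, with insertion sort used purely as a stand-in algorithm: its empirical cost averaged over uniformly random inputs (the textbook "average instance") can differ markedly from its cost averaged over an application-specific distribution such as nearly sorted inputs.

    import random

    def insertion_sort_comparisons(a):
        # Count the key comparisons performed by standard insertion sort.
        a = list(a)
        comparisons = 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                comparisons += 1
                if a[j - 1] <= a[j]:
                    break
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
        return comparisons

    def nearly_sorted(n, swaps=3):
        # An application-specific instance distribution: almost-sorted arrays.
        a = list(range(n))
        for _ in range(swaps):
            i, j = random.randrange(n), random.randrange(n)
            a[i], a[j] = a[j], a[i]
        return a

    n, trials = 200, 500
    uniform_avg = sum(insertion_sort_comparisons(random.sample(range(n), n))
                      for _ in range(trials)) / trials
    app_avg = sum(insertion_sort_comparisons(nearly_sorted(n))
                  for _ in range(trials)) / trials
    print(uniform_avg, app_avg)    # the two "average cases" differ widely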


2012 ◽  
Vol 8 (4) ◽  
pp. 681026 ◽  
Author(s):  
Pu-Tai Yang ◽  
Seokcheon Lee

In recent years, there has been a rapid proliferation of research concerning Wireless Sensor Networks (WSNs), due to the wide range of potential applications they can serve. Lifetime is one of the most important considerations for WSNs because of their inherent energy constraints, and various protocols have been proposed to overcome these difficulties. This study proposes a novel distributed reclustering routing protocol: the Predictive and Adaptive Routing Protocol using Energy Welfare (PARPEW). PARPEW incorporates the concept of energy welfare (EW) to achieve energy efficiency and energy balance simultaneously. PARPEW is equipped with a cluster head (CH) shift mechanism that uses predicted post-transmission energy to compute EW. The average-case time complexity of the shift mechanism is [Formula: see text], where [Formula: see text] is the average number of sensors in a cluster in the WSN. Experimental results demonstrate that the new protocol is capable of significantly prolonging the lifetime of WSNs under various scenarios.
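
A rough sketch of what such a CH shift step might look like. The welfare function used here (mean predicted residual energy penalised by its spread) and the predicted-energy inputs are assumptions made only for illustration, not the EW formulation or the shift mechanism defined in the paper.

    import statistics

    def energy_welfare(energies):
        # Assumed welfare function: reward high mean residual energy,
        # penalise imbalance across the cluster (illustrative only).
        return statistics.mean(energies) - statistics.pstdev(energies)

    def maybe_shift_cluster_head(cluster, current_head, predicted):
        # cluster: list of node ids; predicted[h][v] = predicted residual energy
        # of node v after the next round if h serves as cluster head.
        # Keep the current head unless some candidate yields higher welfare.
        def welfare(head):
            return energy_welfare([predicted[head][v] for v in cluster])
        best = max(cluster, key=welfare)
        return best if welfare(best) > welfare(current_head) else current_head

    cluster = ["a", "b", "c"]
    predicted = {"a": {"a": 4.0, "b": 5.0, "c": 5.0},
                 "b": {"a": 5.5, "b": 3.5, "c": 5.5},
                 "c": {"a": 5.2, "b": 5.2, "c": 3.0}}
    print(maybe_shift_cluster_head(cluster, "a", predicted))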


1996 ◽  
Vol 05 (01n02) ◽  
pp. 127-141 ◽  
Author(s):  
MONINDER SINGH

One of the main factors limiting the use of path consistency algorithms in real-life applications is their high space complexity. Han and Lee proposed a path consistency algorithm, PC-4, with O(n³a³) space complexity, which makes it practicable only for small problems. I present a new path consistency algorithm, PC-5, which has O(n³a²) space complexity while retaining the worst-case time complexity of PC-4. Moreover, the new algorithm exhibits a much better average-case time complexity. The new algorithm is based on the idea (due to Bessière) that, at any time, only a minimal amount of support has to be found and recorded for a labeling to establish its viability; one has to look for a new support only if the current support is eliminated. I also show that PC-5 can be improved further to yield an algorithm, PC5++, with even better average-case performance and the same space complexity.
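
A minimal sketch of the single-support idea, shown in a deliberately simplified setting (one constraint between two variables, plain lists for domains): each value records the position of its current support, and when that support is deleted the search for a new one resumes from that position rather than restarting from the beginning. This illustrates the bookkeeping behind the improved average-case behaviour, not the full PC-5 algorithm.

    def find_support(value, other_domain, allowed, start):
        # Resume scanning other_domain from index `start`; return the index of
        # the first remaining value supporting `value`, or None.
        for j in range(start, len(other_domain)):
            if other_domain[j] is not None and (value, other_domain[j]) in allowed:
                return j
        return None

    # Domains as lists; deleted values become None so indices stay stable.
    dx = [1, 2, 3]
    dy = [1, 2, 3]
    allowed = {(1, 2), (1, 3), (2, 3), (3, 1)}   # pairs (x, y) permitted by the constraint

    # Record one support per x-value (a pointer into dy), not all supports.
    support_of = {}
    for i, v in enumerate(dx):
        j = find_support(v, dy, allowed, 0)
        if j is None:
            dx[i] = None                          # no support at all: prune v
        else:
            support_of[v] = j

    # Later, suppose dy[1] (the value 2) is deleted elsewhere in the network.
    dy[1] = None
    for v, j in list(support_of.items()):
        if dy[j] is None:                         # current support lost:
            nj = find_support(v, dy, allowed, j + 1)   # resume after the old one
            if nj is None:
                dx[dx.index(v)] = None            # v has become unsupported
                del support_of[v]
            else:
                support_of[v] = nj

    print(dx, support_of)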

