Worst-Case Complexity
Recently Published Documents

Total documents: 101 (five years: 23)
H-index: 18 (five years: 1)

Quantum, 2021, Vol. 5, pp. 595
Author(s): Nicolas Delfosse, Naomi H. Nickerson

In order to build a large-scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes that corrects Pauli errors, erasures, and combinations of the two. Our algorithm has a worst-case complexity of O(nα(n)), where n is the number of physical qubits and α is the inverse Ackermann function, which grows very slowly; for all practical purposes, α(n) ≤ 3. We prove that our algorithm performs optimally for errors of weight up to (d−1)/2 and for loss of up to d−1 qubits, where d is the minimum distance of the code. Numerically, we obtain a threshold of 9.9% for the 2D toric code with perfect syndrome measurements and 2.6% with faulty measurements.
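The O(nα(n)) bound is characteristic of a disjoint-set (union-find) structure with union by rank and path compression, where α is the inverse Ackermann function. The following is a minimal, generic C++ sketch of such a structure, included only to illustrate where that bound comes from; it is not the authors' decoder.

```cpp
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Disjoint-set (union-find) with path compression and union by rank.
// Any sequence of n find/unite operations runs in O(n * alpha(n)) time,
// the same inverse-Ackermann bound quoted in the abstract.
class DisjointSets {
public:
    explicit DisjointSets(std::size_t n) : parent_(n), rank_(n, 0) {
        std::iota(parent_.begin(), parent_.end(), std::size_t{0});
    }

    // Representative of x's cluster, halving the path as we walk up.
    std::size_t find(std::size_t x) {
        while (parent_[x] != x) {
            parent_[x] = parent_[parent_[x]];  // path halving
            x = parent_[x];
        }
        return x;
    }

    // Merge the clusters of a and b; returns false if already merged.
    bool unite(std::size_t a, std::size_t b) {
        a = find(a);
        b = find(b);
        if (a == b) return false;
        if (rank_[a] < rank_[b]) std::swap(a, b);
        parent_[b] = a;
        if (rank_[a] == rank_[b]) ++rank_[a];
        return true;
    }

private:
    std::vector<std::size_t> parent_;
    std::vector<std::size_t> rank_;
};
```

In decoders of this kind, growing and merging clusters of syndrome defects maps naturally onto find/unite calls, which is where the α(n) factor enters.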


Author(s): Cheng Lu, Dorit S. Hochbaum

Abstract: We study a 1-dimensional discrete signal denoising problem that consists of minimizing a sum of separable convex fidelity terms and convex regularization terms, the latter penalizing the differences of adjacent signal values. This problem generalizes the total variation regularization problem. We provide a unified approach, based on the Karush–Kuhn–Tucker optimality conditions, for solving the problem with general convex fidelity and regularization functions. This approach leads to a fast algorithm for general convex fidelity and regularization functions, and to a faster algorithm if, in addition, the fidelity functions are differentiable and the regularization functions are strictly convex. Both algorithms achieve the best theoretical worst-case complexity among existing algorithms for the classes of objective functions studied here. In practice, too, our C++ implementation of the method is considerably faster than popular C++ nonlinear optimization solvers for this problem.
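For concreteness, the problem has the separable form: minimize over x the quantity sum_i f_i(x_i) + sum_i g_i(x_i - x_{i+1}) with convex f_i and g_i. The sketch below only spells out this objective in C++, with quadratic fidelity and absolute differences (total variation) as one concrete instance; it does not reproduce the paper's KKT-based algorithms, and the function names are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Generic objective of the 1-D denoising problem described above:
//   minimize  sum_i fid[i](x[i]) + sum_i reg[i](x[i] - x[i+1]),
// where the fidelity and regularization terms are arbitrary convex functions.
double objective(const std::vector<double>& x,
                 const std::vector<std::function<double(double)>>& fid,
                 const std::vector<std::function<double(double)>>& reg) {
    double value = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        value += fid[i](x[i]);
    for (std::size_t i = 0; i + 1 < x.size(); ++i)
        value += reg[i](x[i] - x[i + 1]);
    return value;
}

// One concrete instance: quadratic fidelity around noisy observations y and
// absolute-value differences, i.e. classic total-variation regularization.
double tv_objective(const std::vector<double>& x,
                    const std::vector<double>& y, double lambda) {
    double value = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        value += 0.5 * (x[i] - y[i]) * (x[i] - y[i]);
    for (std::size_t i = 0; i + 1 < x.size(); ++i)
        value += lambda * std::abs(x[i] - x[i + 1]);
    return value;
}
```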


Author(s): Maximilian P. L. Haslbeck, Peter Lammich

Abstract: We present a framework to verify both the functional correctness and the worst-case complexity of practically efficient algorithms. We implement a stepwise refinement approach, using the novel concept of resource currencies to naturally structure the resource analysis along the refinement chain and to allow a fine-grained analysis of operation counts. Our framework targets the LLVM intermediate representation; we extend its semantics from earlier work with a cost model. As a case study, we verify the correctness and $O(n \log n)$ worst-case complexity of an implementation of the introsort algorithm, whose performance is on par with the state-of-the-art implementation found in the GNU C++ Library.
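For readers unfamiliar with introsort: it is quicksort with a recursion-depth budget that falls back to heapsort when the budget is exhausted, with insertion sort finishing small ranges; the heapsort fallback is what caps the worst case at O(n log n). Below is a compact, unverified C++ sketch of that standard scheme, not the verified LLVM-level implementation from the paper.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

namespace sketch {

constexpr int kSmall = 16;  // below this size, insertion sort wins

// Insertion sort on the half-open range [lo, hi).
inline void insertion_sort(std::vector<int>& a, int lo, int hi) {
    for (int i = lo + 1; i < hi; ++i)
        for (int j = i; j > lo && a[j] < a[j - 1]; --j)
            std::swap(a[j], a[j - 1]);
}

// Quicksort with a depth budget; once the budget is exhausted the range is
// handed to heapsort, which bounds the worst case by O(n log n).
void introsort_loop(std::vector<int>& a, int lo, int hi, int depth) {
    while (hi - lo > kSmall) {
        if (depth == 0) {  // too many bad pivots: fall back to heapsort
            std::make_heap(a.begin() + lo, a.begin() + hi);
            std::sort_heap(a.begin() + lo, a.begin() + hi);
            return;
        }
        --depth;
        int pivot = a[lo + (hi - lo) / 2];
        int i = lo, j = hi - 1;
        while (i <= j) {  // two-index partition around the pivot value
            while (a[i] < pivot) ++i;
            while (pivot < a[j]) --j;
            if (i <= j) std::swap(a[i++], a[j--]);
        }
        introsort_loop(a, i, hi, depth);  // recurse on one part...
        hi = j + 1;                       // ...and loop on the other
    }
    insertion_sort(a, lo, hi);            // finish small ranges cheaply
}

void introsort(std::vector<int>& a) {
    if (a.size() > 1)
        introsort_loop(a, 0, static_cast<int>(a.size()),
                       2 * static_cast<int>(std::log2(a.size())));
}

}  // namespace sketch
```

The std::sort of the GNU C++ Library follows the same three-phase structure (quicksort, heapsort fallback, insertion sort for small ranges), which is why the verified implementation can be benchmarked against it directly.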


Algorithms, 2020, Vol. 13 (11), pp. 294
Author(s): Frantisek Franek, Michael Liut

There are two reasons to have an efficient algorithm for identifying all right-maximal Lyndon substrings of a string: firstly, Bannai et al. introduced in 2015 a linear algorithm to compute all runs of a string that relies on knowing all right-maximal Lyndon substrings of the input string, and secondly, Franek et al. showed in 2017 a linear equivalence of sorting suffixes and sorting right-maximal Lyndon substrings of a string, inspired by a novel suffix sorting algorithm of Baier. In 2016, Franek et al. presented a brief overview of algorithms for computing the Lyndon array that encodes the knowledge of right-maximal Lyndon substrings of the input string. Among those presented were two well-known algorithms for computing the Lyndon array: a quadratic in-place algorithm based on the iterated Duval algorithm for Lyndon factorization, and a linear algorithmic scheme based on linear suffix sorting, computing the inverse suffix array, and applying to it the next smaller value algorithm. Duval's algorithm works for strings over any ordered alphabet, while linear suffix sorting requires a constant or an integer alphabet. The authors at that time were not aware of Baier's algorithm. In 2017, our research group proposed a novel algorithm for the Lyndon array. Though the proposed algorithm is linear in the average case and has O(n log n) worst-case complexity, it is interesting as it emulates the fast Fourier transform's recursive approach and introduces τ-reduction, which might be of independent interest. In 2018, we presented a linear algorithm to compute the Lyndon array of a string inspired by Phase I of Baier's algorithm for suffix sorting. This paper presents the theoretical analysis of these two algorithms and provides empirical comparisons of both of their C++ implementations with respect to the iterated Duval algorithm.
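For reference, the quadratic in-place algorithm mentioned above is short enough to state directly: for each position i, Duval's scan finds the longest Lyndon prefix of the suffix starting at i. A C++ sketch (our own transcription, not the paper's implementation):

```cpp
#include <string>
#include <vector>

// Lyndon array via the iterated Duval approach: lyn[i] is the length of the
// longest Lyndon prefix of s[i..n), i.e. of the right-maximal Lyndon
// substring starting at position i. Worst-case O(n^2), in place, and it
// works over any ordered alphabet.
std::vector<int> lyndon_array(const std::string& s) {
    const int n = static_cast<int>(s.size());
    std::vector<int> lyn(n);
    for (int i = 0; i < n; ++i) {
        int j = i + 1, k = i;          // Duval's scan for the first Lyndon factor of s[i..)
        while (j < n && s[k] <= s[j]) {
            k = (s[k] < s[j]) ? i : k + 1;
            ++j;
        }
        lyn[i] = j - k;                // that factor is the longest Lyndon prefix
    }
    return lyn;
}
```

For example, lyndon_array("aba") returns {2, 1, 1}: "ab" is the right-maximal Lyndon substring at position 0, and single letters are at positions 1 and 2.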


Author(s): Jean-Daniel Boissonnat, Olivier Devillers, Kunal Dutta, Marc Glisse

Abstract: Randomized incremental construction (RIC) is one of the most important paradigms for building geometric data structures. Clarkson and Shor developed a general theory that led to numerous algorithms which are both simple and efficient in theory and in practice. Randomized incremental constructions are usually space-optimal and time-optimal in the worst case, as exemplified by the construction of convex hulls, Delaunay triangulations, and arrangements of line segments. However, the worst-case scenario occurs rarely in practice and we would like to understand how RIC behaves when the input is nice in the sense that the associated output is significantly smaller than in the worst case. For example, it is known that the Delaunay triangulation of nicely distributed points in $\mathbb{E}^d$ or on polyhedral surfaces in $\mathbb{E}^3$ has linear complexity, as opposed to a worst-case complexity of $\Theta(n^{\lfloor d/2\rfloor})$ in the first case and quadratic in the second. The standard analysis does not provide accurate bounds on the complexity of such cases and we aim at establishing such bounds in this paper. More precisely, we will show that, in the two cases above and variants of them, the complexity of the usual RIC is $O(n\log n)$, which is optimal. In other words, without any modification, RIC nicely adapts to good cases of practical value. At the heart of our proof is a bound on the complexity of the Delaunay triangulation of random subsets of $\varepsilon$-nets. Along the way, we prove a probabilistic lemma for sampling without replacement, which may be of independent interest.
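As a toy illustration of the randomized incremental paradigm and of backward analysis (not the Delaunay construction analyzed in the paper), the sketch below inserts values in random order and counts how often the maintained structure, here just the running minimum, changes. Backward analysis gives an expected count of H_n ≈ ln n, against n − 1 changes for the worst fixed insertion order.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <limits>
#include <numeric>
#include <random>
#include <vector>

// Toy randomized incremental construction: maintain the minimum of the
// prefix inserted so far and count structural changes. Backward analysis
// gives an expected change count of H_n = 1 + 1/2 + ... + 1/n ~ ln n,
// versus n - 1 changes for a decreasing insertion order.
int main() {
    const std::size_t n = 1000000;
    std::vector<double> values(n);
    std::iota(values.begin(), values.end(), 0.0);     // any distinct values work

    std::mt19937 rng(42);
    std::shuffle(values.begin(), values.end(), rng);  // the "randomized" part

    double current_min = std::numeric_limits<double>::infinity();
    std::size_t changes = 0;
    for (double v : values) {            // the "incremental" part
        if (v < current_min) {           // structure must be updated
            current_min = v;
            ++changes;
        }
    }
    std::cout << "structural changes: " << changes
              << " (expected about " << std::log(static_cast<double>(n)) << ")\n";
}
```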


2020, Vol. 24 (23), pp. 17609-17620
Author(s): Yurii Nesterov

Abstract: In this paper, we suggest a new technique for soft clustering of multidimensional data. It is based on a new convex voting model, in which each voter chooses a party with a certain probability depending on the divergence between his/her preferences and the position of the party. The parties can react to the results of polls by changing their positions. We prove that under some natural assumptions this system has a unique fixed point, providing a unique solution for soft clustering. The solution of our model can be found either by imitating the sequential elections or by directly minimizing a convex potential function. In both cases, the methods converge linearly to the solution. We provide our methods with worst-case complexity bounds. To the best of our knowledge, these are the first polynomial-time complexity results in this field.
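To make the fixed-point iteration concrete, one can picture a generic soft-assignment scheme: each voter spreads probability over parties via a softmax of negative divergences, and each party then moves toward the probability-weighted mean of its supporters. The C++ sketch below implements that generic picture under assumed choices (squared Euclidean divergence, temperature t); it is an illustration in the spirit of imitating sequential elections, not the paper's convex voting model or its potential function.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

using Point = std::vector<double>;

double sq_dist(const Point& a, const Point& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// One "election round": voters assign probabilities to parties via a softmax
// of negative squared distances (temperature t), then each party moves to the
// probability-weighted mean of its electorate. Iterating this map until the
// positions stop moving approximates a fixed point of the dynamics.
void election_round(const std::vector<Point>& voters,
                    std::vector<Point>& parties, double t) {
    const std::size_t k = parties.size(), dim = voters.front().size();
    std::vector<Point> sums(k, Point(dim, 0.0));
    std::vector<double> mass(k, 0.0);

    for (const Point& v : voters) {
        // Softmax over parties, stabilized by subtracting the best score.
        std::vector<double> score(k);
        double best = -1e300;
        for (std::size_t j = 0; j < k; ++j) {
            score[j] = -sq_dist(v, parties[j]) / t;
            best = std::max(best, score[j]);
        }
        double z = 0.0;
        for (std::size_t j = 0; j < k; ++j) {
            score[j] = std::exp(score[j] - best);
            z += score[j];
        }
        for (std::size_t j = 0; j < k; ++j) {
            double p = score[j] / z;       // probability that voter v chooses party j
            mass[j] += p;
            for (std::size_t d = 0; d < dim; ++d) sums[j][d] += p * v[d];
        }
    }
    for (std::size_t j = 0; j < k; ++j)
        if (mass[j] > 0.0)
            for (std::size_t d = 0; d < dim; ++d)
                parties[j][d] = sums[j][d] / mass[j];
}
```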

