Average case complexity under the universal distribution equals worst-case complexity

1992 ◽  
Vol 42 (3) ◽  
pp. 145-149 ◽  
Author(s):  
Ming Li ◽  
Paul M.B. Vitányi


10.37236/6354 ◽
2017 ◽  
Vol 24 (2) ◽  
Author(s):  
Carsten Schneider ◽  
Robin Sulzgruber

The Novelli-Pak-Stoyanovskii algorithm is a sorting algorithm for Young tableaux of a fixed shape that was originally devised to give a bijective proof of the hook-length formula. We obtain new asymptotic results on the average case and worst case complexity of this algorithm as the underlying shape tends to a fixed limit curve. Furthermore, using the summation package Sigma we prove an exact formula for the average case complexity when the underlying shape consists of only two rows. We thereby answer questions posed by Krattenthaler and Müller.
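
The hook-length formula mentioned above is easy to state and to check computationally: for a shape with n cells, the number of standard Young tableaux equals n! divided by the product of the hook lengths of all cells. A minimal Python sketch follows; the encoding of a shape as a weakly decreasing list of row lengths and the function names are our own conventions for illustration, not anything from the paper.

```python
from math import factorial

def hook_lengths(shape):
    """Hook lengths of a Young diagram given as a weakly
    decreasing list of row lengths, e.g. [3, 2]."""
    # Conjugate partition: length of each column.
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    # Hook of cell (i, j) = arm + leg + 1
    #                     = (shape[i] - j - 1) + (cols[j] - i - 1) + 1.
    return [[(shape[i] - j) + (cols[j] - i) - 1 for j in range(shape[i])]
            for i in range(len(shape))]

def count_syt(shape):
    """Number of standard Young tableaux of the given shape,
    by the hook-length formula: n! / (product of hook lengths)."""
    n = sum(shape)
    prod = 1
    for row in hook_lengths(shape):
        for h in row:
            prod *= h
    return factorial(n) // prod

# Example: shape (2, 1) has 2 standard Young tableaux.
assert count_syt([2, 1]) == 2
# Two-row shapes (n, n) give the Catalan numbers: shape (3, 3) -> 5.
assert count_syt([3, 3]) == 5
```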


2011 ◽  
Vol 03 (04) ◽  
pp. 457-471 ◽  
Author(s):  
B. BALAMOHAN ◽  
P. FLOCCHINI ◽  
A. MIRI ◽  
N. SANTORO

In a network environment supporting mobile entities (called robots or agents), a black hole is a harmful site that destroys any incoming entity without leaving any visible trace. The black-hole search problem is the task of a team of k > 1 mobile entities, starting from the same safe location and executing the same algorithm, to determine within finite time the location of the black hole. In this paper, we consider the black-hole search problem in asynchronous ring networks of n nodes, and focus on time complexity. It is known that any algorithm for black-hole search in a ring requires at least 2(n - 2) time in the worst case. The best known algorithm achieves this bound with a team of n - 1 agents with an average time cost of 2(n - 2), equal to the worst case. In this paper, we first show that the same number of agents, using 2 extra time units in the worst case, can solve the problem in only [Formula: see text] time on average. We then prove that the optimal average-case complexity of [Formula: see text] can be achieved without increasing the worst case using 2(n - 1) agents. Finally, we design an algorithm that achieves asymptotically optimal worst- and average-case time complexities while employing an optimal team of k = 2 agents, thus improving on earlier results that required O(n) agents.
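
For intuition only, here is a toy synchronous Python simulation of the cautious-walk idea commonly used in this literature: two agents expand the explored region in opposite directions, at most one of them dies, and the single node never marked safe must be the black hole. The function name, the time accounting, and the synchronous scheduling are our own simplifying assumptions; the paper's algorithms operate asynchronously and are considerably more refined.

```python
def locate_black_hole(n, bh):
    """Toy synchronous illustration of black-hole search on a ring.

    Nodes 0..n-1, home base at node 0, black hole at node bh != 0.
    Two agents grow the safe region in opposite directions by
    cautious walk (step to the frontier node, survive, step back
    and mark it safe). At most one agent dies, and the single node
    never marked safe is the black hole.
    """
    safe = {0}
    frontier = {0: 1, 1: n - 1}   # agent 0 explores right, agent 1 left
    alive = {0: True, 1: True}
    time = 0
    while len(safe) < n - 1:      # n - 1 safe nodes => only bh remains
        for a in (0, 1):
            if not alive[a] or len(safe) >= n - 1:
                continue
            time += 2             # one step forward, one step back
            node = frontier[a]
            if node == bh:
                alive[a] = False  # destroyed without leaving a trace
            else:
                safe.add(node)
                frontier[a] = (node + 1) % n if a == 0 else (node - 1) % n
    black_hole = (set(range(n)) - safe).pop()
    return black_hole, time

# Example: ring of 10 nodes, black hole at node 4.
print(locate_black_hole(10, 4))   # -> (4, total cautious-walk time)
```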


Author(s):  
Frantisek Franek ◽  
Michael Liut

There are two reasons to have an efficient algorithm for identifying all maximal Lyndon substrings of a string: firstly, Bannai et al. introduced in 2015 a linear algorithm to compute all runs of a string that relies on knowing all maximal Lyndon substrings of the input string, and secondly, Franek et al. showed in 2017 a linear equivalence of sorting suffixes and sorting maximal Lyndon substrings of a string, inspired by a novel suffix sorting algorithm of Baier. In 2016, Franek et al. presented a brief overview of algorithms for computing the Lyndon array that encodes the knowledge of maximal Lyndon substrings of the input string. Among those presented were two well-known algorithms for computing the Lyndon array: a quadratic in-place algorithm based on iterated Duval's algorithm for Lyndon factorization, and a linear algorithmic scheme based on linear suffix sorting, computing the inverse suffix array, and applying to it the Next Smaller Value algorithm. Duval's algorithm works for strings over any ordered alphabet, while linear suffix sorting requires a constant or an integer alphabet. The authors at that time were not aware of Baier's algorithm. In 2017, our research group proposed a novel algorithm for the Lyndon array. Though the proposed algorithm is linear in the average case and has O(n log(n)) worst-case complexity, it is interesting in that it emulates the recursive approach of the fast Fourier transform and introduces a tau-reduction that might be of independent interest. In 2018, we presented a linear algorithm to compute the Lyndon array of a string inspired by Phase I of Baier's algorithm for suffix sorting. This paper presents a theoretical analysis of these two algorithms and an empirical comparison of their C++ implementations against iterated Duval's algorithm.
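
The quadratic in-place approach is short enough to sketch: the Lyndon array entry L[i] is the length of the longest Lyndon prefix of the suffix s[i..n-1], which is exactly the first factor that Duval's factorization emits when started at position i. A compact Python rendition follows; variable names are ours, and this mirrors the standard textbook formulation of Duval's algorithm rather than any particular implementation from the paper.

```python
def lyndon_array(s):
    """Lyndon array via iterated Duval: L[i] is the length of the
    longest Lyndon prefix of the suffix s[i:]. O(n^2) worst case."""
    n = len(s)
    L = [0] * n
    for i in range(n):
        # One round of Duval's algorithm on s[i:]; when the scan
        # stops, the first Lyndon factor has length j - k.
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        L[i] = j - k
    return L

# Example: maximal Lyndon substrings of "abab" start at each position.
assert lyndon_array("abab") == [2, 1, 2, 1]
```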


2007 ◽  
Vol 28 ◽  
pp. 157-181 ◽  
Author(s):  
A. D. Procaccia ◽  
J. S. Rosenschein

Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.
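
To make coalitional manipulation of a scoring protocol concrete, the sketch below implements a simple greedy heuristic in Python: each of the k manipulators ranks the preferred candidate first and orders the rivals so that the currently strongest rival receives the smallest remaining score. This is our own illustrative heuristic under assumed names (greedy_manipulation, base_scores, alpha), not necessarily the procedure analysed in the paper.

```python
def greedy_manipulation(base_scores, alpha, preferred, k):
    """Greedy coalitional manipulation of a scoring protocol.

    base_scores: candidate -> score accumulated from truthful voters.
    alpha: descending score vector, e.g. [2, 1, 0] for Borda with
    three candidates. Returns True if `preferred` ties or wins after
    all k manipulator ballots are cast.
    """
    scores = dict(base_scores)
    for _ in range(k):
        scores[preferred] += alpha[0]   # preferred always ranked first
        # Weakest rival first: it receives the largest remaining
        # score, the strongest rival the smallest.
        rivals = sorted((c for c in scores if c != preferred),
                        key=lambda c: scores[c])
        for pos, c in enumerate(rivals, start=1):
            scores[c] += alpha[pos]
    return all(scores[preferred] >= scores[c]
               for c in scores if c != preferred)

# Borda with candidates a, b, p: two manipulators push p past a.
print(greedy_manipulation({"a": 6, "b": 4, "p": 5}, [2, 1, 0], "p", 2))
```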


Computing ◽  
1998 ◽  
Vol 61 (2) ◽  
pp. 103-131 ◽  
Author(s):  
L. Szirmay-Kalos ◽  
G. Márton
