Expected time complexity of the auction algorithm and the push relabel algorithm for maximum bipartite matching on random graphs

2014 ◽  
Vol 48 (2) ◽  
pp. 384-395 ◽  
Author(s):  
Oshri Naparstek ◽  
Amir Leshem

2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Zheng Wang ◽  
Shian-Shyong Tseng

Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve of a set of unsorted points. This paper proposes an efficient knee point search algorithm that minimizes the expected time complexity by using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability distribution of the largest knee point and the other parameters are updated before the optimization problem is solved in each step. An example of source detection for DNS DoS flooding attacks is provided to illustrate the application of the proposed algorithm.
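
The paper's full cascading procedure depends on the knee-point prior, but its top-k building block can be sketched on its own. Below is a minimal Python sketch of a top-k sort based on a quickselect-style partition (a quicksort variation); the function name, parameters, and the closing example are illustrative and not taken from the paper.

import random

def top_k_sort(values, k):
    """Return the k largest elements of `values` in descending order.

    A quickselect-style partition moves the k largest elements to the
    front in expected O(n) time; only that prefix is then fully sorted,
    for an expected total of O(n + k log k).
    """
    a = list(values)
    if k >= len(a):
        return sorted(a, reverse=True)

    lo, hi = 0, len(a) - 1
    while lo < hi:
        # Hoare-style partition for descending order around a random pivot:
        # afterwards a[lo..j] >= pivot and a[i..hi] <= pivot, with j < i.
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:
            while a[i] > pivot:
                i += 1
            while a[j] < pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        # Continue only in the side containing position k-1, the 0-indexed
        # slot of the k-th largest element.
        if k - 1 <= j:
            hi = j
        elif k - 1 >= i:
            lo = i
        else:
            break  # position k-1 already holds a pivot-valued element

    return sorted(a[:k], reverse=True)

# Example: pull the 5 largest of 20 random samples, as one cheap step of a
# cascaded search before inspecting the sorted prefix for a knee.
print(top_k_sort([random.random() for _ in range(20)], 5))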


Author(s):  
I. Voyiatzis ◽  
K. Axiotis ◽  
N. Papaspyrou ◽  
H. Antonopoulou ◽  
C. Efstathiou

2016 ◽  
Author(s):  
Dogan Corus ◽  
Duc-Cuong Dang ◽  
Anton V. Eremeev ◽  
Per Kristian Lehre

Understanding how the time complexity of evolutionary algorithms (EAs) depends on their parameter settings and on the characteristics of fitness landscapes is a fundamental problem in evolutionary computation. Most rigorous results were derived using a handful of key analytic techniques, including drift analysis. However, since few of these techniques apply effortlessly to population-based EAs, most time-complexity results concern simplified EAs, such as the (1 + 1) EA. This paper describes the level-based theorem, a new technique tailored to population-based processes. It applies to any non-elitist process where offspring are sampled independently from a distribution depending only on the current population. Given conditions on this distribution, our technique provides upper bounds on the expected time until the process reaches a target state. We demonstrate the technique on several pseudo-Boolean functions, the sorting problem, and approximation of optimal solutions in combinatorial optimisation. The conditions of the theorem are often straightforward to verify, even for Genetic Algorithms and Estimation of Distribution Algorithms, which were considered highly non-trivial to analyse. Finally, we prove that the theorem is nearly optimal for the processes considered: given the information the theorem requires about the process, a much tighter bound cannot be proved.
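
As a concrete instance of the setting the theorem addresses, the Python sketch below runs a non-elitist, population-based EA on OneMax: each offspring is sampled independently from a distribution that depends only on the current population (binary tournament selection followed by bitwise mutation), and the next population consists of offspring only. The parameter choices and function names are illustrative assumptions, not taken from the paper.

import random

def onemax(x):
    return sum(x)

def non_elitist_ea(n=30, pop_size=50, mut_rate=None, max_gens=5000):
    """Non-elitist EA: every offspring is drawn independently from a
    distribution D(P_t) that depends only on the current population P_t,
    and the whole population is replaced each generation."""
    if mut_rate is None:
        mut_rate = 1.0 / n
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for gen in range(max_gens):
        if any(onemax(x) == n for x in pop):
            return gen  # generations until the optimum enters the population
        offspring = []
        for _ in range(pop_size):
            # Binary tournament: sample two parents, keep the fitter one.
            a, b = random.choice(pop), random.choice(pop)
            parent = a if onemax(a) >= onemax(b) else b
            # Bitwise mutation with rate mut_rate.
            child = [1 - bit if random.random() < mut_rate else bit
                     for bit in parent]
            offspring.append(child)
        pop = offspring  # no elitism: parents are always discarded
    return max_gens

print(non_elitist_ea())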


2012 ◽  
Vol 23 (02) ◽  
pp. 357-374 ◽  
Author(s):  
PÉTER BURCSI ◽  
FERDINANDO CICALESE ◽  
GABRIELE FICI ◽  
ZSUZSANNA LIPTÁK

The Parikh vector p(s) of a string s over a finite ordered alphabet Σ = {a1, …, aσ} is defined as the vector of multiplicities of the characters, p(s) = (p1, …, pσ), where pi = |{j | sj = ai}|. A Parikh vector q occurs in s if s has a substring t with p(t) = q. The problem of searching for a query q in a text s of length n can be solved simply and worst-case optimally with a sliding window approach in O(n) time. We present two novel algorithms for the case where the text is fixed and many queries arrive over time. The first algorithm only decides whether a given Parikh vector appears in a binary text. It uses a linear-size data structure and decides each query in O(1) time. The preprocessing can be done trivially in Θ(n²) time. The second algorithm finds all occurrences of a given Parikh vector in a text over an arbitrary alphabet of size σ ≥ 2 and has sub-linear expected time complexity. More precisely, we present two variants of the algorithm, both using an O(n) size data structure, each of which can be constructed in O(n) time. The first solution is very simple and easy to implement and leads to an expected query time of [Formula: see text], where m = ∑i qi is the length of a string with Parikh vector q. The second uses wavelet trees and improves the expected runtime to [Formula: see text], i.e., by a factor of log m.
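
For reference, the simple worst-case-optimal sliding-window baseline mentioned above can be sketched in a few lines of Python; this is only the O(n)-per-query decision procedure, not the paper's indexed solutions, and the function name and example are illustrative.

from collections import Counter

def parikh_occurs(s, q):
    """Decide whether the Parikh vector q (a dict: character -> multiplicity)
    occurs in s, i.e. whether some substring of length m = sum(q.values())
    has exactly those character counts."""
    m = sum(q.values())
    if m == 0 or m > len(s):
        return m == 0
    window = Counter(s[:m])
    # Count characters whose multiplicity in the window differs from q.
    mismatch = sum(1 for c in set(q) | set(window)
                   if window[c] != q.get(c, 0))
    if mismatch == 0:
        return True
    for i in range(m, len(s)):          # slide the length-m window by one
        incoming, outgoing = s[i], s[i - m]
        if incoming == outgoing:
            continue
        for c, delta in ((incoming, 1), (outgoing, -1)):
            was_ok = window[c] == q.get(c, 0)
            window[c] += delta
            now_ok = window[c] == q.get(c, 0)
            mismatch += (was_ok and not now_ok) - (not was_ok and now_ok)
        if mismatch == 0:
            return True
    return False

# Example: "abaccab" has the substring "acc", whose Parikh vector is
# one 'a' and two 'c's.
print(parikh_occurs("abaccab", {"a": 1, "c": 2}))   # True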


1990 ◽  
Vol 4 (3) ◽  
pp. 333-344 ◽  
Author(s):  
Vernon Rego

A simple random algorithm (SRA) is an algorithm whose behavior is governed by a first-order Markov chain. The expected time complexity of an SRA, given its initial state, is essentially the time to absorption of the underlying chain. The standard approach to computing the expected runtime is numerical. Under certain conditions on the probability transition matrix of an SRA, bounds on its expected runtime can be obtained using simple probabilistic arguments. In particular, one can obtain upper and lower (average-time) logarithmic bounds for certain algorithms based on SRAs.
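
The standard numerical approach referred to above computes the expected time to absorption by solving a linear system over the transient states. A minimal Python/NumPy sketch follows; the function name and the toy transition matrix are illustrative, not taken from the paper.

import numpy as np

def expected_absorption_times(P, absorbing):
    """Expected number of steps to absorption from every state of a finite
    first-order Markov chain with full transition matrix P, i.e. the expected
    runtime of an SRA from each initial state.  Solves (I - Q) t = 1, where Q
    is P restricted to the transient states."""
    P = np.asarray(P, dtype=float)
    transient = [i for i in range(P.shape[0]) if i not in set(absorbing)]
    Q = P[np.ix_(transient, transient)]
    t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    times = np.zeros(P.shape[0])
    times[transient] = t        # absorbing states need 0 steps
    return times

# Toy SRA: from state 0 stay or move to 1 with probability 1/2 each; from
# state 1 fall back to 0 or hit the absorbing state 2 with probability 1/2.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 1.0]]
print(expected_absorption_times(P, absorbing=[2]))   # [6. 4. 0.]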

