Average-case Analysis of the Assignment Problem with Independent Preferences

Author(s):  
Yansong Gao ◽  
Jie Zhang

The fundamental assignment problem seeks welfare-maximizing mechanisms for allocating indivisible items to self-interested agents who hold private preferences over the items. The mainstream mechanism, Random Priority, is asymptotically the best mechanism for this purpose when its welfare is compared to the optimal social welfare via the canonical worst-case approximation ratio. Surprisingly, the efficiency loss indicated by the worst-case ratio does not have a constant bound [FFZ:14]. Recently, Deng et al. [MFCS 2017] showed that when the agents' preferences are drawn from a uniform distribution, the average-case approximation ratio of Random Priority is upper bounded by 3.718. They left open the question of whether a constant ratio holds in more general scenarios. In this paper, we answer this question affirmatively by showing that the ratio is bounded by $1/\mu$ when the preference values are independent and identically distributed random variables, where $\mu$ is the expectation of the value distribution. This upper bound also improves the result of [MFCS 2017] for the uniform distribution. Moreover, under mild conditions, the ratio has a constant bound for any independent random values. En route to these results, we develop tools showing that, for most valuation inputs, the efficiency loss is small.
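To make the notion of efficiency loss concrete, the following is a minimal Monte Carlo sketch (not taken from the paper) comparing the welfare of Random Priority with the optimal social welfare when values are i.i.d. Uniform(0, 1). The function names and the choice of reporting the ratio of empirical expectations are assumptions made for illustration; the paper's formal definition of the average-case approximation ratio may differ.

```python
# Monte Carlo sketch (illustrative only): compare Random Priority's welfare to the
# optimal social welfare under i.i.d. Uniform(0, 1) preference values.
import numpy as np
from scipy.optimize import linear_sum_assignment

def random_priority_welfare(values, rng):
    """Agents pick their favourite remaining item in a uniformly random order."""
    n = values.shape[0]
    remaining = list(range(n))
    welfare = 0.0
    for agent in rng.permutation(n):
        best = max(remaining, key=lambda item: values[agent, item])
        welfare += values[agent, best]
        remaining.remove(best)
    return welfare

def estimate_ratio(n=10, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    opt_total, rp_total = 0.0, 0.0
    for _ in range(trials):
        values = rng.random((n, n))              # i.i.d. Uniform(0, 1) values
        rows, cols = linear_sum_assignment(values, maximize=True)
        opt_total += values[rows, cols].sum()    # optimal social welfare
        rp_total += random_priority_welfare(values, rng)
    return opt_total / rp_total                  # ratio of empirical expectations

if __name__ == "__main__":
    print(f"empirical OPT/RP ratio: {estimate_ratio():.3f}")
```

For Uniform(0, 1) values the abstract's $1/\mu$ bound equals 2, since $\mu = 0.5$, which an experiment of this kind can be used to sanity-check empirically.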

Algorithmica ◽  
2021 ◽  
Author(s):  
Jie Zhang

Apart from the principles and methodologies inherited from Economics and Game Theory, studies in Algorithmic Mechanism Design typically employ the worst-case analysis and approximation schemes of Theoretical Computer Science. For instance, the approximation ratio, which is the canonical measure of how well an incentive-compatible mechanism approximately optimizes the objective, is defined in the worst-case sense: it compares the performance of a truthful mechanism against the performance of the optimal mechanism over all possible inputs. In this paper, we take the average-case analysis approach and tackle one of the primary motivating problems in Algorithmic Mechanism Design, the scheduling problem (Nisan and Ronen, in: Proceedings of the 31st Annual ACM Symposium on Theory of Computing (STOC), 1999). One version of this problem, which includes a verification component, was studied by Koutsoupias (Theory Comput Syst 54(3):375–387, 2014), who showed that the problem has a tight approximation ratio bound of $(n+1)/2$ in the single-task setting, where n is the number of machines. We show, however, that when the machines' costs for executing the task are independent and identically distributed random variables, the average-case approximation ratio of the mechanism of Koutsoupias (2014) is upper bounded by a constant. This positive result asymptotically separates the average-case ratio from the worst-case ratio, and it indicates that the optimal mechanism devised for a worst-case guarantee works well on average.
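To fix notation, the two measures the abstract contrasts can be written as follows for a (possibly randomized) mechanism $M$ on cost profiles $t$. This is one common convention, with the outer expectation taken over $t$ drawn from the product distribution $\mathcal{D}$; the paper's exact definition may differ, for instance by taking a ratio of expectations instead.

\[
r_{\mathrm{worst}}(M) \;=\; \sup_{t}\, \frac{\mathbb{E}\big[\mathrm{cost}_M(t)\big]}{\mathrm{OPT}(t)},
\qquad
r_{\mathrm{avg}}(M) \;=\; \mathbb{E}_{t \sim \mathcal{D}}\!\left[\frac{\mathbb{E}\big[\mathrm{cost}_M(t)\big]}{\mathrm{OPT}(t)}\right],
\]

where the inner expectation is over the mechanism's random coins and $\mathrm{OPT}(t)$ is the optimal cost on profile $t$.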


Author(s):  
Krzysztof A. Sikorski

In this chapter we consider the approximation of fixed points of noncontractive functions with respect to the absolute error criterion. In this case the functions may have multiple fixed points and/or whole manifolds of fixed points. We analyze methods whose information consists of sequential function evaluations. Simple iteration usually does not converge in this case, and the problem becomes much more difficult to solve. We prove that even in the two-dimensional case the problem has infinite worst-case complexity. This means that no methods exist that solve the problem with arbitrarily small error tolerance for some “bad” functions. In the univariate case the problem is solvable, and a bisection envelope method is optimal. These results are in contrast with the solution under the residual error criterion, where the problem becomes solvable, although with exponential complexity, as outlined in the annotations. Therefore, simplicial and/or homotopy continuation methods, and all methods based on function evaluations, exhibit exponential worst-case cost for solving the problem in the residual sense. These results indicate the need for average-case analysis, since for many test functions the existing algorithms compute ε-approximations at a cost polynomial in 1/ε.
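As a point of reference for the univariate case, the following is a minimal sketch of plain bisection applied to g(x) = f(x) - x on an interval where g changes sign. It illustrates bisection-style fixed-point approximation under the absolute error criterion only; it is not an implementation of the optimal bisection envelope method discussed in the chapter.

```python
# Minimal illustration (not the chapter's bisection envelope method): approximate a
# fixed point of a univariate f on [a, b] by bisecting on g(x) = f(x) - x, assuming
# g(a) and g(b) have opposite signs, to within absolute error eps.
import math

def bisection_fixed_point(f, a, b, eps=1e-8):
    ga, gb = f(a) - a, f(b) - b
    if ga == 0.0:
        return a
    if gb == 0.0:
        return b
    if ga * gb > 0:
        raise ValueError("f(x) - x must change sign on [a, b]")
    while (b - a) / 2.0 > eps:
        m = (a + b) / 2.0
        gm = f(m) - m
        if gm == 0.0:
            return m
        if ga * gm < 0:
            b, gb = m, gm
        else:
            a, ga = m, gm
    return (a + b) / 2.0

if __name__ == "__main__":
    # example: the fixed point of cos on [0, 1]
    x = bisection_fixed_point(math.cos, 0.0, 1.0)
    print(f"approximate fixed point of cos: {x:.8f}")  # ~0.73908513
```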


Information ◽  
2020 ◽  
Vol 11 (11) ◽  
pp. 506
Author(s):  
Huda Chuangpishit ◽  
Konstantinos Georgiou ◽  
Preeti Sharma

The problem of evacuating two robots from the disk in the face-to-face model was first introduced by Czyzowicz et al. [DISC’2014] and has since been extensively studied, along with many variations, with respect to worst-case analysis. We initiate the study of the same problem with respect to average-case analysis, which is equivalent to designing randomized algorithms for the problem. In particular, we introduce the constrained optimization problem 2EvacF2F, in which one tries to minimize the average-case cost of an evacuation algorithm subject to its worst-case cost not exceeding w. The problem is of special interest for practical applications, since a common objective in search-and-rescue operations is to minimize the average completion time, given that a certain worst-case threshold is not exceeded, e.g., for safety or limited-energy reasons. Our main contribution is the design and analysis of families of new parameterized evacuation algorithms which solve 2EvacF2F for every w for which the problem is feasible. Notably, the worst-case analysis of the problem has, since its introduction, relied on technical, computer-assisted numerical calculations following tedious robot-trajectory analysis. Part of our contribution is a novel systematic procedure which, given any evacuation algorithm, derives its worst- and average-case performance in a clean and unified way.
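The constrained structure of 2EvacF2F, minimizing average-case cost subject to a worst-case budget w, can be illustrated with a toy one-parameter family of algorithms whose worst-case and average-case costs are given by made-up closed forms. This sketches only the optimization pattern, not the paper's evacuation algorithms or their actual cost functions.

```python
# Toy illustration of the optimization pattern behind 2EvacF2F: over a parameterized
# family A(p), minimize average-case cost subject to worst-case cost <= w.
# The cost functions below are invented for illustration; they are NOT the
# evacuation costs from the paper.
import numpy as np

def worst_case_cost(p):
    # hypothetical: more aggressive parameters lower the average but raise the worst case
    return 5.0 + 2.0 * p

def average_case_cost(p):
    return 4.5 - 1.5 * p + 2.0 * p ** 2

def best_parameter(w, grid=np.linspace(0.0, 1.0, 10001)):
    feasible = grid[worst_case_cost(grid) <= w]      # enforce the worst-case budget
    if feasible.size == 0:
        return None                                  # problem infeasible for this w
    return feasible[np.argmin(average_case_cost(feasible))]

if __name__ == "__main__":
    for w in (5.4, 6.0, 7.0):
        p = best_parameter(w)
        if p is None:
            print(f"w = {w}: infeasible")
        else:
            print(f"w = {w}: p* = {p:.3f}, avg cost = {average_case_cost(p):.3f}")
```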


2013 ◽  
Vol 10 (4) ◽  
pp. 1-38
Author(s):  
Dieter Schuller ◽  
Ulrich Lampe ◽  
Julian Eckert ◽  
Ralf Steinmetz ◽  
Stefan Schulte

The challenge of optimally selecting services from a set of functionally appropriate ones under Quality of Service (QoS) constraints – the Service Selection Problem – has been extensively addressed in the literature on the basis of deterministic parameters. In practice, however, QoS parameters rather follow a stochastic distribution. In the work at hand, we present an integrated approach which addresses the Service Selection Problem for complex structured as well as unstructured workflows in conjunction with stochastic QoS parameters. Accounting for the penalty costs which accrue due to QoS violations, we perform a worst-case analysis, as opposed to an average-case analysis, with the aim of avoiding additional penalties. Even with these conservative computations, QoS violations due to stochastic QoS behavior may still occur, resulting in potentially severe penalties. Our proposed approach significantly reduces this impact of stochastic QoS behavior on total cost.
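The trade-off described here can be sketched with a small, hypothetical Monte Carlo comparison: two candidate services with stochastic response times are selected either by mean response time (an average-case view) or by a high quantile (a conservative, worst-case-style view), and the expected penalty for violating a response-time bound is estimated. The distributions, deadline, prices, and penalty are invented for illustration and are not taken from the paper.

```python
# Hypothetical illustration of worst-case-style vs. average-case service selection
# under stochastic QoS: pick a service either by its mean response time or by a
# conservative high quantile, then estimate expected cost including SLA penalties.
# All numbers below (distributions, deadline, prices, penalty) are made up.
import numpy as np

rng = np.random.default_rng(42)
DEADLINE, PENALTY = 120.0, 50.0          # SLA bound (ms) and penalty per violation
SERVICES = {
    "cheap_but_jittery":  {"price": 1.0, "mean": 80.0, "sigma": 40.0},
    "pricier_but_stable": {"price": 3.0, "mean": 95.0, "sigma": 10.0},
}

def sample_response_times(spec, n=200_000):
    # rough positive response-time samples with the given location/spread
    return np.maximum(rng.normal(spec["mean"], spec["sigma"], n), 0.0)

def expected_total_cost(name, samples):
    spec = SERVICES[name]
    violations = np.mean(samples > DEADLINE)
    return spec["price"] + PENALTY * violations, violations

samples = {name: sample_response_times(spec) for name, spec in SERVICES.items()}
by_mean = min(SERVICES, key=lambda s: samples[s].mean())             # average-case view
by_q99 = min(SERVICES, key=lambda s: np.quantile(samples[s], 0.99))  # conservative view

for label, choice in (("selected by mean", by_mean), ("selected by 0.99-quantile", by_q99)):
    cost, viol = expected_total_cost(choice, samples[choice])
    print(f"{label}: {choice}, violation rate {viol:.3f}, expected cost {cost:.2f}")
```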


1993 ◽  
Vol 5 (5) ◽  
pp. 767-782 ◽  
Author(s):  
Mostefa Golea ◽  
Mario Marchand

We present an algorithm that PAC learns any perceptron with binary weights and arbitrary threshold under the family of product distributions. The sample complexity of this algorithm is O[(n/ε)^4 ln(n/δ)], and its running time increases only linearly with the number of training examples. The algorithm does not try to find a hypothesis that agrees with all of the training examples; rather, it constructs a binary perceptron based on various probabilistic estimates obtained from the training examples. We show that, in the restricted case of the uniform distribution and zero threshold, the algorithm reduces to the well-known clipped Hebb rule. We calculate exactly the average generalization rate (i.e., the learning curve) of the algorithm under the uniform distribution, in the limit of an infinite number of dimensions. We find that the error rate decreases exponentially as a function of the number of training examples. Hence, the average-case analysis gives a sample complexity of O[n ln(1/ε)], a large improvement over the PAC learning analysis. The analytical expression of the learning curve is in excellent agreement with extensive numerical simulations. In addition, the algorithm is very robust with respect to classification noise.
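In the uniform-distribution, zero-threshold case, the clipped Hebb rule mentioned above admits a very short implementation. The following sketch, with an assumed input encoding in {-1, +1}^n and a randomly drawn binary target, estimates its generalization error empirically; the paper's formulation may differ in details such as tie-breaking.

```python
# Sketch of the clipped Hebb rule for learning a zero-threshold binary perceptron
# under the uniform distribution on {-1, +1}^n: each estimated weight is the sign
# of the Hebbian sum over the training examples. Encoding and tie-breaking are
# assumptions for illustration.
import numpy as np

rng = np.random.default_rng(7)

def clipped_hebb(X, y):
    # w_i = sign( sum_m y^(m) * x_i^(m) ); break the (rare) zero ties toward +1
    w = np.sign(y @ X)
    w[w == 0] = 1
    return w

def generalization_error(w_true, w_hat, n_test=100_000):
    X_test = rng.choice([-1, 1], size=(n_test, w_true.size))
    return np.mean(np.sign(X_test @ w_true) != np.sign(X_test @ w_hat))

if __name__ == "__main__":
    n, m = 201, 2000                       # odd n so that sign(X @ w) is never 0
    w_true = rng.choice([-1, 1], size=n)   # random binary target perceptron
    X = rng.choice([-1, 1], size=(m, n))   # uniform training inputs
    y = np.sign(X @ w_true)                # labels from the zero-threshold target
    w_hat = clipped_hebb(X, y)
    print(f"estimated generalization error: {generalization_error(w_true, w_hat):.4f}")
```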

