Algorithmic Techniques for Necessary and Possible Winners

2021 ◽  
Vol 2 (3) ◽  
pp. 1-23
Author(s):  
Vishal Chakraborty ◽  
Theo Delemazure ◽  
Benny Kimelfeld ◽  
Phokion G. Kolaitis ◽  
Kunal Relia ◽  
...  

We investigate the practical aspects of computing the necessary and possible winners in elections over incomplete voter preferences. In the case of the necessary winners, we show how to implement and accelerate the polynomial-time algorithm of Xia and Conitzer. In the case of the possible winners, where the problem is NP-hard, we give a natural reduction to Integer Linear Programming (ILP) for all positional scoring rules and implement it in a leading commercial optimization solver. Further, we devise optimization techniques to minimize the number of ILP executions and, oftentimes, avoid them altogether. We conduct a thorough experimental study that includes the construction of a rich benchmark of elections based on real and synthetic data. Our findings suggest that, the worst-case intractability of the possible winners notwithstanding, the algorithmic techniques presented here scale well and can be used to compute the possible winners in realistic scenarios.
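A minimal sketch of such an ILP encoding, not the authors' implementation (which targets a commercial solver): the possible-winner question for an arbitrary positional scoring vector is phrased as a 0/1 feasibility problem, here in PuLP with its bundled CBC backend. The data layout (partial votes as sets of known pairwise comparisons) and the co-winner tie semantics are assumptions of this sketch.

```python
# Hypothetical sketch: possible-winner check for a positional scoring rule as a 0/1 ILP.
# partial_prefs[v] is a set of (a, b) pairs meaning voter v is known to rank a above b.

import pulp

def is_possible_winner(candidates, partial_prefs, scores, target):
    """True iff some completion of every partial vote makes `target` win
    (co-winner semantics: ties count as winning)."""
    m = len(candidates)
    positions = range(m)
    voters = range(len(partial_prefs))

    prob = pulp.LpProblem("possible_winner", pulp.LpMinimize)
    prob += 0  # pure feasibility problem: dummy zero objective

    # x[v][c][p] = 1  <=>  voter v places candidate c at rank p (0 = top)
    x = pulp.LpVariable.dicts("x", (voters, candidates, positions), cat="Binary")

    for v in voters:
        # each candidate gets exactly one rank, each rank exactly one candidate
        for c in candidates:
            prob += pulp.lpSum(x[v][c][p] for p in positions) == 1
        for p in positions:
            prob += pulp.lpSum(x[v][c][p] for c in candidates) == 1
        # respect the known pairwise comparisons: a above b  =>  rank(a) < rank(b)
        for (a, b) in partial_prefs[v]:
            rank_a = pulp.lpSum(p * x[v][a][p] for p in positions)
            rank_b = pulp.lpSum(p * x[v][b][p] for p in positions)
            prob += rank_a + 1 <= rank_b

    def total_score(c):
        return pulp.lpSum(scores[p] * x[v][c][p] for v in voters for p in positions)

    # target must score at least as much as every rival in the completion
    for c in candidates:
        if c != target:
            prob += total_score(target) >= total_score(c)

    return pulp.LpStatus[prob.solve(pulp.PULP_CBC_CMD(msg=False))] == "Optimal"

# Example: Borda scores (2, 1, 0); one fully open vote, one vote with a fixed above b.
print(is_possible_winner(["a", "b", "c"], [set(), {("a", "b")}], [2, 1, 0], "c"))  # True
```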

Author(s):  
Eduard Eiben ◽  
Robert Ganian ◽  
Dušan Knop ◽  
Sebastian Ordyniak

Recently, a number of algorithmic results have appeared that show the tractability of Integer Linear Programming (ILP) instances under strong restrictions on variable domains and/or coefficients (AAAI 2016, AAAI 2017, IJCAI 2017). In this paper, we target ILPs where neither the variable domains nor the coefficients are restricted by a fixed constant or parameter; instead, we only require that our instances can be encoded in unary. We provide new algorithms and lower bounds for such ILPs by exploiting the structure of their variable interactions, represented as a graph. Our first set of results focuses on solving ILP instances through the use of a graph parameter called clique-width, which can be seen as an extension of treewidth that also captures well-structured dense graphs. In particular, we obtain a polynomial-time algorithm for instances of bounded clique-width whose domain and coefficients are polynomially bounded by the input size, and we complement this positive result by a number of algorithmic lower bounds. Afterwards, we turn our attention to ILPs with acyclic variable interactions. In this setting, we obtain a complexity map for the problem with respect to the graph representation used and restrictions on the encoding.
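As a small illustration of the variable-interaction graph, the sketch below assumes the common "primal" convention (two variables interact when they appear together in some constraint); structural parameters such as treewidth or clique-width are then measured on this graph.

```python
# Sketch: primal variable-interaction graph of an ILP instance.
from itertools import combinations

def primal_graph(constraints):
    """constraints: list of dicts mapping variable name -> nonzero coefficient
    in one linear constraint.  Returns (vertices, edges) of the interaction graph."""
    vertices, edges = set(), set()
    for coeffs in constraints:
        in_constraint = sorted(v for v, a in coeffs.items() if a != 0)
        vertices.update(in_constraint)
        # every pair of variables sharing a constraint is joined by an edge
        edges.update(combinations(in_constraint, 2))
    return vertices, edges

# Example: x1 + 2*x2 <= 3 and x2 - x3 <= 1 give the path x1 -- x2 -- x3,
# which has treewidth 1 and hence also bounded clique-width.
V, E = primal_graph([{"x1": 1, "x2": 2}, {"x2": 1, "x3": -1}])
print(sorted(V), sorted(E))
```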


Energies ◽  
2019 ◽  
Vol 12 (4) ◽  
pp. 657 ◽  
Author(s):  
Georgios Psarros ◽  
Stavros Papathanassiou

The generation management concept for non-interconnected island (NII) systems is traditionally based on simple, semi-empirical operating rules dating back to the era before the massive deployment of renewable energy sources (RES), which do not achieve maximum RES penetration, optimal dispatch of thermal units and satisfaction of system security criteria. Nowadays, more advanced unit commitment (UC) and economic dispatch (ED) approaches based on optimization techniques are gradually being introduced to safeguard system operation against severe disturbances, to prioritize RES participation and to optimize dispatch of the thermal generation fleet. The main objective of this paper is to comparatively assess the traditionally applied priority listing (PL) UC method and a more sophisticated mixed integer linear programming (MILP) UC optimization approach, dedicated to NII power systems. Additionally, to facilitate the comparison of the UC approaches and quantify their impact on system security, a first attempt is made to relate the primary reserve capability of each unit to the maximum acceptable frequency deviation at steady-state conditions after a severe disturbance and the droop characteristic of the unit’s speed governor. The fundamental differences between the two approaches are presented and discussed, while daily and annual simulations are performed and the results obtained are further analyzed.
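The abstract does not give the exact formulation, but the standard steady-state droop relation it refers to can be sketched as follows; the function name, units and the 50 Hz default are illustrative assumptions, not the authors' model.

```python
# Sketch (standard droop arithmetic, not the authors' exact formulation):
# primary reserve a unit can deliver when frequency settles at the maximum
# acceptable steady-state deviation after a disturbance.

def primary_reserve(p_dispatch_mw, p_max_mw, p_nom_mw, droop_pu,
                    delta_f_max_hz, f_nominal_hz=50.0):
    """Reserve actually deliverable by one unit, in MW.

    droop_pu       -- speed-governor droop R (e.g. 0.05 for 5 %)
    delta_f_max_hz -- maximum acceptable steady-state frequency drop
    """
    # droop response: delta_P = (delta_f / f_n) / R * P_nom
    droop_response_mw = (delta_f_max_hz / f_nominal_hz) / droop_pu * p_nom_mw
    # the unit cannot exceed its remaining headroom
    headroom_mw = max(0.0, p_max_mw - p_dispatch_mw)
    return min(droop_response_mw, headroom_mw)

# Example: a 50 MW unit dispatched at 35 MW, 5 % droop, 1 Hz acceptable drop:
print(primary_reserve(35.0, 50.0, 50.0, 0.05, 1.0))  # -> min(20.0, 15.0) = 15.0 MW
```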


2013 ◽  
Vol 24 (07) ◽  
pp. 1067-1082 ◽  
Author(s):  
YO-SUB HAN ◽  
SANG-KI KO ◽  
KAI SALOMAA

The edit-distance between two strings is the smallest number of operations required to transform one string into the other. The distance between languages L1 and L2 is the smallest edit-distance between strings w1 ∈ L1 and w2 ∈ L2. We consider the problem of computing the edit-distance of a given regular language and a given context-free language. First, we present an algorithm that finds an optimal alignment for the languages, that is, a sequence of edit operations that transforms a string in one language to a string in the other. The length of the optimal alignment, in the worst case, is exponential in the size of the given grammar and finite automaton. Then, we investigate the problem of computing only the edit-distance of the languages without explicitly producing an optimal alignment. We design a polynomial-time algorithm that calculates the edit-distance based on unary homomorphisms.
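For reference, the string-level primitive underlying this language distance is the classical Levenshtein dynamic program sketched below; the paper's algorithms operate on the finite automaton and the grammar directly rather than enumerating string pairs.

```python
# Sketch: string-level edit distance (Levenshtein).  The distance between L1 and L2
# is the minimum of this quantity over all pairs (w1, w2) with w1 in L1, w2 in L2.

def edit_distance(s: str, t: str) -> int:
    """Minimum number of insertions, deletions and substitutions turning s into t."""
    prev = list(range(len(t) + 1))          # distances from s[:0] to every prefix of t
    for i, cs in enumerate(s, start=1):
        cur = [i]                           # distance from s[:i] to the empty prefix
        for j, ct in enumerate(t, start=1):
            cur.append(min(prev[j] + 1,               # delete cs
                           cur[j - 1] + 1,            # insert ct
                           prev[j - 1] + (cs != ct))) # substitute (free if equal)
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # -> 3
```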


2005 ◽  
Vol 16 (05) ◽  
pp. 913-928 ◽  
Author(s):  
PIOTR FALISZEWSKI ◽  
LANE A. HEMASPAANDRA

Informally put, the semifeasible sets are the class of sets having a polynomial-time algorithm that, given as input any two strings of which at least one belongs to the set, will choose one that does belong to the set. We provide a tutorial overview of the advice complexity of the semifeasible sets. No previous familiarity with either the semifeasible sets or advice complexity will be assumed, and when we include proofs we will try to make the material as accessible as possible by providing intuitive, informal presentations. Karp and Lipton introduced advice complexity about a quarter of a century ago [18]. Advice complexity asks, for a given power of interpreter, how many bits of "help" suffice to accept a given set. Thus, this is a notion that contains aspects both of descriptional/informational complexity and of computational complexity. We will see that for some powers of interpreter the (worst-case) complexity of the semifeasible sets is known right down to the bit (and beyond), but that for the most central power of interpreter—deterministic polynomial time—the complexity is currently known only to be at least linear and at most quadratic. While overviewing the advice complexity of the semifeasible sets, we will stress also the issue of whether the functions at the core of semifeasibility—so-called selector functions—can without cost be chosen to possess such algebraic properties as commutativity and associativity. We will see that this is relevant, in ways both potential and actual, to the study of the advice complexity of the semifeasible sets.
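As an informal illustration of a selector function (a toy example chosen here, not taken from the paper): the set of binary strings whose numeric value lies below a fixed threshold is semifeasible, with a selector that simply returns the input of smaller value.

```python
# Sketch: a toy semifeasible (P-selective) set and its selector.
THRESHOLD = 1000  # hypothetical cut point defining the toy set

def in_set(x: str) -> bool:
    """Membership in A = {binary strings w : value(w) < THRESHOLD}."""
    return int(x, 2) < THRESHOLD

def selector(x: str, y: str) -> str:
    """P-selector for A: outputs one of its inputs; whenever x or y is in A,
    the output is in A, because the smaller of two values lies below the cut
    if either one does."""
    # ties broken lexicographically so that selector(x, y) == selector(y, x)
    return min(x, y, key=lambda s: (int(s, 2), s))

x, y = "1111111111", "0000000101"   # values 1023 (not in A) and 5 (in A)
chosen = selector(x, y)
assert chosen in (x, y)
assert not (in_set(x) or in_set(y)) or in_set(chosen)
print(chosen)  # -> "0000000101"
```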


2013 ◽  
Vol 54 ◽  
Author(s):  
Jonas Mockus ◽  
Martynas Sabaliauskas

The Strategy Elimination (SE) algorithm was proposed in [2] and implemented as a sequence of Linear Programming (LP) problems. In this paper, an efficient explicit solution is developed and convergence to the Nash equilibrium is proven.
Keywords: game theory, polynomial algorithm, Nash equilibrium.
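The abstract does not spell out the SE algorithm of [2]; purely as an illustration of the kind of LP subproblem that strategy-elimination schemes typically solve, the sketch below uses scipy to test whether a row strategy of a payoff matrix is strictly dominated by a mixture of the remaining rows.

```python
# Illustrative sketch only (not the SE algorithm of [2]): LP test for strict
# domination of one row strategy by a mixed strategy over the other rows.

import numpy as np
from scipy.optimize import linprog

def strictly_dominated(payoff, i):
    """True iff row i of `payoff` (row player's payoffs) is strictly dominated
    by some mixed strategy over the other rows."""
    A = np.asarray(payoff, dtype=float)
    others = [j for j in range(A.shape[0]) if j != i]
    n_cols = A.shape[1]

    # variables: mixture weights p_j (one per other row) and a margin eps;
    # maximise eps subject to  sum_j p_j * A[j, k] >= A[i, k] + eps  for all columns k.
    c = np.zeros(len(others) + 1)
    c[-1] = -1.0                                    # minimise -eps
    A_ub = np.hstack([-A[others].T, np.ones((n_cols, 1))])
    b_ub = -A[i]
    A_eq = np.array([[1.0] * len(others) + [0.0]])  # weights form a distribution
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * len(others) + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.success and -res.fun > 1e-9          # dominated iff optimal eps > 0

# Example: the middle row of this 3x2 game is strictly dominated by
# mixing the other two rows with equal weight.
print(strictly_dominated([[3, 0], [1, 1], [0, 3]], 1))  # -> True
```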


Entropy ◽  
2018 ◽  
Vol 20 (4) ◽  
pp. 274 ◽  
Author(s):  

Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm for learning optimal DBNs consistent with a breadth-first search (BFS) order, named bcDBN. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach increases the search space of the state-of-the-art tDBN algorithm exponentially in the number of variables. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well across different experiments.
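As a rough illustration of the restricted search space (not the bcDBN implementation itself), the sketch below enumerates, for a single variable of a transition network, the candidate parent sets with at most p inter-slice and k intra-slice parents, the intra-slice ones constrained by a BFS order that is taken here as a given input.

```python
# Sketch: the bounded-in-degree, BFS-consistent parent sets described in the abstract.
from itertools import combinations

def candidate_parent_sets(variables, bfs_order, child, p, k):
    """Yield (inter_slice_parents, intra_slice_parents) pairs for `child`."""
    # intra-slice parents must precede the child in the given BFS order
    allowed_intra = bfs_order[:bfs_order.index(child)]
    for n_inter in range(p + 1):
        for inter in combinations(variables, n_inter):              # parents at slice t-1
            for n_intra in range(k + 1):
                for intra in combinations(allowed_intra, n_intra):  # parents at slice t
                    yield inter, intra

# Example: three variables, BFS order (X0, X1, X2), p = 1, k = 1;
# candidate parent sets for X2 in the transition network.
variables = ["X0", "X1", "X2"]
for inter, intra in candidate_parent_sets(variables, variables, "X2", p=1, k=1):
    print("t-1 parents:", inter, "  t parents:", intra)
```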

