Llull and Copeland Voting Computationally Resist Bribery and Constructive Control

2009, Vol. 35, pp. 275-341
Author(s): P. Faliszewski, E. Hemaspaandra, L. A. Hemaspaandra, J. Rothe

Control and bribery are settings in which an external agent seeks to influence the outcome of an election. Constructive control of elections refers to attempts by an agent to, via such actions as addition/deletion/partition of candidates or voters, ensure that a given candidate wins. Destructive control refers to attempts by an agent to, via the same actions, preclude a given candidate's victory. An election system in which an agent can sometimes affect the result and it can be determined in polynomial time on which inputs the agent can succeed is said to be vulnerable to the given type of control. An election system in which an agent can sometimes affect the result, yet in which it is NP-hard to recognize the inputs on which the agent can succeed, is said to be resistant to the given type of control. Aside from election systems with an NP-hard winner problem, the only systems previously known to be resistant to all the standard control types were highly artificial election systems created by hybridization. This paper studies a parameterized version of Copeland voting, denoted by Copeland^\alpha, where the parameter \alpha is a rational number between 0 and 1 that specifies how ties are valued in the pairwise comparisons of candidates. In every previously studied constructive or destructive control scenario, we determine which of resistance or vulnerability holds for Copeland^\alpha for each rational \alpha, 0 \leq \alpha \leq 1. In particular, we prove that Copeland^{0.5}, the system commonly referred to as "Copeland voting," provides full resistance to constructive control, and we prove the same for Copeland^\alpha for all rational \alpha, 0 < \alpha < 1. Among systems with a polynomial-time winner problem, Copeland voting is the first natural election system proven to have full resistance to constructive control. In addition, we prove that both Copeland^0 and Copeland^1 (interestingly, Copeland^1 is an election system developed by the thirteenth-century mystic Ramon Llull) are resistant to all standard types of constructive control other than one variant of addition of candidates. Moreover, we show that for each rational \alpha, 0 \leq \alpha \leq 1, Copeland^\alpha voting is fully resistant to bribery attacks, and we establish fixed-parameter tractability of bounded-case control for Copeland^\alpha. We also study Copeland^\alpha elections under more flexible models such as microbribery and extended control, we integrate the potential irrationality of voter preferences into many of our results, and we prove our results in both the unique-winner model and the nonunique-winner model. Our vulnerability results for microbribery are proven via a novel technique involving min-cost network flow.
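To make the scoring rule concrete, the following minimal Python sketch (illustrative only, not the authors' code) computes Copeland^\alpha scores from a profile of strict rankings; the function name and the tiny example profile are assumptions for exposition.

```python
from itertools import combinations

def copeland_alpha_scores(candidates, votes, alpha=0.5):
    """Compute Copeland^alpha scores.

    candidates: list of candidate names.
    votes: list of rankings, each a list of candidates from most to
           least preferred (complete, strict orders).
    alpha: rational in [0, 1]; the value awarded to both candidates
           in a tied pairwise comparison.
    """
    scores = {c: 0.0 for c in candidates}
    for a, b in combinations(candidates, 2):
        # Count voters preferring a to b (earlier position = more preferred).
        a_wins = sum(1 for v in votes if v.index(a) < v.index(b))
        b_wins = len(votes) - a_wins
        if a_wins > b_wins:
            scores[a] += 1
        elif b_wins > a_wins:
            scores[b] += 1
        else:  # pairwise tie: both sides receive alpha points
            scores[a] += alpha
            scores[b] += alpha
    return scores

# Tiny example: alpha = 1 is Llull's system, alpha = 0.5 is classic Copeland.
votes = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"], ["a", "c", "b"]]
print(copeland_alpha_scores(["a", "b", "c"], votes, alpha=0.5))
```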

2013, Vol. 47, pp. 475-519
Author(s): N. Betzler, A. Slinko, J. Uhlmann

We investigate two systems of fully proportional representation suggested by Chamberlin and Courant and by Monroe. Both systems assign a representative to each voter so that the "sum of misrepresentations" is minimized. The winner determination problem for both systems is known to be NP-hard, hence this work aims at investigating whether there are variants of the proposed rules and/or specific electorates for which these problems can be solved efficiently. As a variation of these rules, instead of minimizing the sum of misrepresentations we consider minimizing the maximal misrepresentation, effectively introducing two new rules. In the general case these "minimax" versions of the classical rules remain NP-hard. We investigate the parameterized complexity of winner determination for the two classical and the two new rules with respect to several parameters. Here we have a mixture of positive and negative results: e.g., we prove fixed-parameter tractability with respect to the number of candidates but fixed-parameter intractability with respect to the number of winners. For single-peaked electorates our results are overwhelmingly positive: we provide polynomial-time algorithms for most of the considered problems. The only rule that remains NP-hard for single-peaked electorates is the classical Monroe rule.
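As an illustration of the two objectives, here is a hedged Python sketch of brute-force winner determination for the Chamberlin-Courant rule and its minimax variant (Monroe's additional balanced-assignment constraint is omitted); enumerating all committees is exponential only in the number of candidates, which mirrors the fixed-parameter tractability result for that parameter. All names and the example profile are illustrative.

```python
from itertools import combinations

def cc_winners(candidates, votes, k, minimax=False):
    """Brute-force Chamberlin-Courant winner determination.

    Each voter is represented by her most-preferred committee member;
    her (Borda) misrepresentation is that member's position in her
    ranking (0 = top choice).  With minimax=False the sum of
    misrepresentations is minimized (classical CC); with minimax=True
    the maximum is minimized (the "minimax" variant).
    """
    best_committee, best_value = None, None
    for committee in combinations(candidates, k):
        misreps = [min(vote.index(c) for c in committee) for vote in votes]
        value = max(misreps) if minimax else sum(misreps)
        if best_value is None or value < best_value:
            best_committee, best_value = committee, value
    return best_committee, best_value

votes = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["d", "c", "b", "a"]]
print(cc_winners(["a", "b", "c", "d"], votes, k=2))                # sum version
print(cc_winners(["a", "b", "c", "d"], votes, k=2, minimax=True))  # minimax version
```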


Author(s): Sushmita Gupta, Saket Saurabh, Ramanujan Sridharan, Meirav Zehavi

Single-elimination tournaments are a popular format in competitive environments. The Tournament Fixing Problem (TFP), the problem of finding a seeding of the players such that a certain player wins the resulting tournament, is known to be NP-hard in general and fixed-parameter tractable when parameterized by the feedback arc set number of the input tournament (an oriented complete graph encoding the expected winner of each potential match). However, the existence of polynomial kernelizations (efficient preprocessing) for TFP has remained open. In this paper, we present the first polynomial kernelization for TFP parameterized by the feedback arc set number of the input tournament. We achieve this by providing a polynomial-time routine that computes a SAT encoding in which the number of clauses is bounded polynomially in the feedback arc set number.
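For intuition about what a seeding decides, the sketch below (illustrative, not from the paper) plays out a single-elimination bracket given a seeding and an oriented complete graph of expected outcomes, then solves TFP by factorial-time brute force over seedings; this naive baseline is exactly what kernelization and FPT algorithms improve upon.

```python
from itertools import permutations

def bracket_winner(seeding, beats):
    """Play out a single-elimination bracket.

    seeding: list of players in bracket order; its length must be a
             power of two.  Adjacent pairs meet in round one, and so on.
    beats:   beats[(x, y)] is True iff x is expected to beat y; this is
             the tournament (oriented complete graph) from the abstract.
    """
    rnd = list(seeding)
    while len(rnd) > 1:
        rnd = [x if beats[(x, y)] else y
               for x, y in zip(rnd[::2], rnd[1::2])]
    return rnd[0]

def tfp_brute_force(players, beats, favorite):
    """TFP asks whether SOME seeding lets the favorite win."""
    return any(bracket_winner(s, beats) == favorite
               for s in permutations(players))

players = ["a", "b", "c", "d"]
order = {("a", "b"), ("b", "c"), ("c", "a"), ("a", "d"), ("b", "d"), ("c", "d")}
beats = {(x, y): (x, y) in order for x in players for y in players if x != y}
print(tfp_brute_force(players, beats, favorite="b"))  # True: seed b away from a
```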


2015, Vol. 53, pp. 439-496
Author(s): Felix Brandt, Markus Brill, Edith Hemaspaandra, Lane A. Hemaspaandra

For many election systems, bribery (and related) attacks have been shown to be NP-hard using constructions on combinatorially rich structures such as partitions and covers. This paper shows that for voters who follow the most central political-science model of electorates---single-peaked preferences---those hardness protections vanish. By using single-peaked preferences to simplify combinatorial covering challenges, we show for the first time that NP-hard bribery problems---including those for Kemeny and Llull elections---fall to polynomial time for single-peaked electorates. By using single-peaked preferences to simplify combinatorial partition challenges, we show for the first time that NP-hard partition-of-voters problems fall to polynomial time for single-peaked electorates. We show that for single-peaked electorates, the winner problems for Dodgson and Kemeny elections, though \Theta_2^p-complete in the general case, fall to polynomial time. And we completely classify the complexity of weighted coalition manipulation for scoring protocols in single-peaked electorates.
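For readers unfamiliar with the model, the following sketch (an assumption-labeled illustration, not the paper's algorithms) checks whether a ranking is single-peaked with respect to a given left-to-right axis, using the standard characterization that every prefix of the ranking must occupy a contiguous interval on the axis.

```python
def is_single_peaked(axis, vote):
    """Check one ranking against a fixed left-right axis.

    A ranking is single-peaked w.r.t. the axis iff, for every i, its i
    most-preferred candidates occupy consecutive positions on the axis.
    """
    pos = {c: i for i, c in enumerate(axis)}
    taken = set()
    for c in vote:
        taken.add(pos[c])
        if max(taken) - min(taken) + 1 != len(taken):
            return False
    return True

def profile_single_peaked(axis, votes):
    return all(is_single_peaked(axis, v) for v in votes)

axis = ["left", "center-left", "center", "center-right", "right"]
vote = ["center", "center-right", "center-left", "right", "left"]
print(is_single_peaked(axis, vote))  # True: every prefix set is an interval
```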


2020
Author(s): Uéverton Souza, Fábio Protti, Maise Da Silva, Dieter Rautenbach

In this thesis we present a multivariate investigation of the complexity of some NP-hard problems: we first develop a systematic complexity analysis of these problems, defining their subproblems and mapping which of them belong on each side of an "imaginary boundary" between polynomial-time solvability and intractability. After that, we analyze which sets of aspects of these problems are sources of their intractability, that is, subsets of aspects for which there exists an algorithm to solve the associated problem whose non-polynomial time complexity is purely a function of those sets. Thus, we use classical and parameterized complexity in an alternating and complementary approach: first to show which subproblems of the given problems are NP-hard, and later to diagnose for which sets of parameters the problems are fixed-parameter tractable, i.e., in FPT. This thesis exhibits a classical and parameterized complexity analysis of different groups of NP-hard problems. The addressed problems are divided into four groups of distinct nature, in the context of data structures, combinatorial games, and graph theory: (I) and/or graph solution and its variants; (II) flooding-filling games; (III) problems on P3-convexity; (IV) problems on induced matchings.


2018, DOI: 10.29007/v68w
Author(s): Ying Zhu, Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (again under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
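The sketch below (illustrative names, not the authors' code) shows how a weighted profile is aggregated by a positional scoring rule; the learning problems in the paper search for weights, or a ranking of preferences, under which such aggregation decides all given examples correctly.

```python
def positional_scoring_winner(candidates, profile, score_vector):
    """Aggregate a weighted profile with a positional scoring rule.

    profile: list of (weight, ranking) pairs; score_vector[i] is the
             score a candidate earns for appearing in position i
             (e.g., Borda for m candidates is [m-1, m-2, ..., 0]).
    """
    totals = {c: 0.0 for c in candidates}
    for weight, ranking in profile:
        for i, c in enumerate(ranking):
            totals[c] += weight * score_vector[i]
    return max(totals, key=totals.get), totals

profile = [(2.0, ["a", "b", "c"]), (1.0, ["c", "b", "a"])]
print(positional_scoring_winner(["a", "b", "c"], profile, [2, 1, 0]))
```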


1986, Vol. 9 (3), pp. 323-342
Author(s): Joseph Y.-T. Leung, Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are "small" relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are "large" relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is "small" relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is "small" relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is "large", the problem becomes NP-hard even when the total cost of deadlocked jobs is "small" relative to the number of deadlocked jobs. The results in the paper, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.
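As a concrete, heavily simplified reading of the problem (this formalization is an assumption, not the paper's exact model), the sketch below finds a minimum-cost set of deadlocked jobs to abort for a single resource type, testing feasibility with a banker's-style greedy that serves surviving jobs in order of increasing request; the exhaustive search over abort sets is exponential and serves only to pin down the optimization being classified.

```python
from itertools import combinations

def min_cost_recovery(jobs, free_units):
    """Brute-force optimal deadlock recovery for one resource type.

    jobs: list of (cost, held, request) triples for the deadlocked jobs.
    free_units: resource units currently available.
    Aborting a job frees its held units; a recovery is feasible if the
    surviving jobs can then finish one at a time, each returning its
    units when done.
    """
    def feasible(survivors, available):
        # For one resource type, serving jobs by increasing request is optimal.
        for cost, held, request in sorted(survivors, key=lambda j: j[2]):
            if request > available:
                return False
            available += held  # job finishes, returns everything it holds
        return True

    best, n = None, len(jobs)
    for r in range(n + 1):
        for aborted in combinations(range(n), r):
            freed = sum(jobs[i][1] for i in aborted)
            survivors = [jobs[i] for i in range(n) if i not in aborted]
            if feasible(survivors, free_units + freed):
                cost = sum(jobs[i][0] for i in aborted)
                if best is None or cost < best:
                    best = cost
    return best

jobs = [(4, 2, 3), (1, 1, 2), (5, 3, 1)]  # (cost, held, request)
print(min_cost_recovery(jobs, free_units=0))  # 1: abort the cheap middle job
```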


Author(s): Robert Ganian, Andre Schidler, Manuel Sorge, Stefan Szeider

Treewidth and hypertree width have proven to be highly successful structural parameters in the context of the Constraint Satisfaction Problem (CSP). When either of these parameters is bounded by a constant, then CSP becomes solvable in polynomial time. However, here the order of the polynomial in the running time depends on the width, and this is known to be unavoidable; therefore, the problem is not fixed-parameter tractable parameterized by either of these width measures. Here we introduce an enhancement of tree and hypertree width through a novel notion of thresholds, allowing the associated decompositions to take into account information about the computational costs associated with solving the given CSP instance. Aside from introducing these notions, we obtain efficient theoretical as well as empirical algorithms for computing threshold treewidth and hypertree width and show that these parameters give rise to fixed-parameter algorithms for CSP as well as other, more general problems. We complement our theoretical results with experimental evaluations in terms of heuristics as well as exact methods based on SAT/SMT encodings.
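The structural parameter at play can be made concrete with standard tools: the sketch below (classic treewidth via a networkx heuristic, not the paper's threshold variants) builds the primal graph of a CSP instance from its constraint scopes and computes a heuristic upper bound on its treewidth.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def primal_graph(constraint_scopes):
    """Primal graph of a CSP: one vertex per variable, with an edge
    between every pair of variables that share a constraint scope."""
    G = nx.Graph()
    for scope in constraint_scopes:
        scope = list(scope)
        G.add_nodes_from(scope)
        G.add_edges_from((x, y) for i, x in enumerate(scope)
                         for y in scope[i + 1:])
    return G

# Scopes of a small CSP instance; the constraint relations themselves
# are not needed to measure structure.
scopes = [("x1", "x2"), ("x2", "x3"), ("x3", "x4"), ("x4", "x1"),
          ("x1", "x3")]
width, decomposition = treewidth_min_degree(primal_graph(scopes))
print("approximate treewidth:", width)  # 2 for this 4-cycle plus a chord
```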


Author(s): Marko Samer, Stefan Szeider

Parameterized complexity is a new theoretical framework that considers, in addition to the overall input size, the effects on computational complexity of a secondary measurement, the parameter. This two-dimensional viewpoint allows a fine-grained complexity analysis that takes structural properties of problem instances into account. The central notion is "fixed-parameter tractability", which refers to solvability in polynomial time for each fixed value of the parameter such that the order of the polynomial time bound is independent of the parameter. This chapter presents the main concepts and recent results on the parameterized complexity of the satisfiability problem, and it outlines fundamental algorithmic ideas that arise in this context. Among the parameters considered are the size of backdoor sets with respect to various tractable base classes and the treewidth of graph representations of satisfiability instances.
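The backdoor-set parameter can be illustrated as follows: the sketch below checks whether a candidate variable set is a strong backdoor into the Horn class, i.e., whether every assignment to the set reduces the CNF to a Horn formula; the cost is 2^|B| reduced-formula scans, the characteristic fixed-parameter running-time shape. The DIMACS-style encoding and helper names are assumptions for exposition.

```python
from itertools import product

def reduce_clause(clause, assignment):
    """Apply a partial assignment; return None if the clause is satisfied,
    otherwise the clause with assigned literals removed.
    Literals are nonzero ints, negative = negated (DIMACS style)."""
    out = []
    for lit in clause:
        var = abs(lit)
        if var in assignment:
            if (lit > 0) == assignment[var]:
                return None  # clause satisfied, drops out
        else:
            out.append(lit)
    return out

def is_horn(clauses):
    """A CNF is Horn if every clause has at most one positive literal."""
    return all(sum(1 for lit in c if lit > 0) <= 1 for c in clauses)

def is_strong_horn_backdoor(clauses, backdoor):
    """Does every assignment to `backdoor` reduce the formula to Horn?
    Cost: 2^|backdoor| reductions, each a linear scan."""
    for bits in product([False, True], repeat=len(backdoor)):
        assignment = dict(zip(backdoor, bits))
        reduced = [r for c in clauses
                   if (r := reduce_clause(c, assignment)) is not None]
        if not is_horn(reduced):
            return False
    return True

# (x1 v x2 v -x3) and (-x1 v x2 v x3): not Horn, but {x2} is a strong
# Horn backdoor -- after assigning x2 either way, what remains is Horn.
clauses = [[1, 2, -3], [-1, 2, 3]]
print(is_strong_horn_backdoor(clauses, backdoor=[2]))  # True
```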


2020, Vol. 34 (06), pp. 10110-10117
Author(s): Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik

Deception is a fundamental issue across a diverse array of settings, from cybersecurity, where decoys (e.g., honeypots) are an important tool, to politics, which can feature politically motivated "leaks" and fake news about candidates. Typical considerations of deception view it as providing false information. However, just as important but less frequently studied is a more tacit form in which information is strategically hidden or leaked. We consider the problem of how much an adversary can affect a principal's decision by "half-truths", that is, by masking or hiding bits of information, when the principal is oblivious to the presence of the adversary. The principal's problem can be modeled as one of predicting future states of variables in a dynamic Bayes network, and we show that, while theoretically the principal's decisions can be made arbitrarily bad, the optimal attack is NP-hard to approximate, even under strong assumptions favoring the attacker. However, we also describe an important special case where the dependency of future states on past states is additive, in which we can efficiently compute an approximately optimal attack. Moreover, in networks with a linear transition function we can solve the problem optimally in polynomial time.
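The efficiently attackable special case can be illustrated under a strong simplifying assumption (ours, not the paper's model): the principal predicts with a known linear function and imputes any masked feature by its prior mean. Masking then shifts the prediction by independent additive amounts, so a greedy choice of the most helpful masks is optimal in this toy setting.

```python
def optimal_mask(weights, x, means, k):
    """Greedy half-truth attack under a linear-prediction assumption.

    The principal predicts w . x but imputes any masked feature i by
    its prior mean.  Masking feature i therefore shifts the prediction
    by weights[i] * (means[i] - x[i]); the shifts add independently, so
    choosing the k largest positive shifts maximizes the prediction.
    (Illustrative model only; the general DBN attack in the paper is
    NP-hard even to approximate.)
    """
    shifts = [(w * (m - xi), i)
              for i, (w, xi, m) in enumerate(zip(weights, x, means))]
    shifts.sort(reverse=True)
    return [i for shift, i in shifts[:k] if shift > 0]

weights = [0.5, -1.0, 2.0, 0.3]
x       = [1.0,  0.0, 0.0, 1.0]   # true observations
means   = [0.2,  0.5, 0.7, 0.5]   # principal's priors for masked features
print(optimal_mask(weights, x, means, k=2))  # [2]: hiding feature 2 helps most
```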

