Mechanisms for Fair Allocation Problems: No-Punishment Payment Rules in Verifiable Settings

2014 ◽  
Vol 49 ◽  
pp. 403-449 ◽  
Author(s):  
G. Greco ◽  
F. Scarcello

Mechanism design is considered in the context of fair allocations of indivisible goods with monetary compensation, focusing on problems where agents' declarations on allocated goods can be verified before payments are made. A setting is considered where verification might be subject to errors, so that payments have to be awarded under a presumption of innocence: incorrect declared values do not necessarily indicate manipulation attempts by the agents. Within this setting, a mechanism is designed that is shown to be truthful, efficient, and budget-balanced. Moreover, agents' utilities are fairly determined by the Shapley value of suitable coalitional games, and enjoy highly desirable properties such as equal treatment of equals, envy-freeness, and a stronger one called individual optimality. In particular, the latter property guarantees that every agent's utility is the maximum possible over any alternative optimal allocation. The computational complexity of the proposed mechanism is also studied. It turns out to be #P-complete, so, to deal with applications involving many agents, two polynomial-time randomized variants are also proposed: one that is still truthful and efficient and is approximately budget-balanced with high probability, and another that is truthful in expectation while remaining budget-balanced and efficient.
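
The abstract above determines agents' utilities via the Shapley value of suitable coalitional games. As a concrete reference point, here is a minimal sketch of computing exact Shapley values for a small coalitional game by enumerating player orderings; the `surplus` characteristic function is a hypothetical illustration, not the game constructed in the paper.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values via average marginal contributions over all
    player orderings; exponential in len(players), so small games only."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)  # marginal contribution of p
            coalition = with_p
    return {p: totals[p] / factorial(len(players)) for p in players}

# Hypothetical 3-agent game: surplus[S] is the value coalition S can realize.
surplus = {frozenset(): 0, frozenset('a'): 1, frozenset('b'): 1, frozenset('c'): 2,
           frozenset('ab'): 3, frozenset('ac'): 4, frozenset('bc'): 4, frozenset('abc'): 6}
print(shapley_values(['a', 'b', 'c'], lambda S: surplus[S]))
```

By construction the resulting values sum to the grand coalition's worth and treat symmetric players identically, two of the properties the abstract highlights.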

2021 ◽  
pp. 103633
Author(s):  
Mohammad Ghodsi ◽  
MohammadTaghi HajiAghayi ◽  
Masoud Seddighin ◽  
Saeed Seddighin ◽  
Hadi Yami

2018 ◽  
Vol 2018 ◽  
pp. 1-15
Author(s):  
Math J. J. M. Candel

If there are no carryover effects, AB/BA crossover designs are more efficient than parallel (A/B) and extended parallel (AA/BB) group designs. This study extends these results in that (a) optimal instead of equal treatment allocation is examined, (b) allowance for treatment-dependent outcome variances is made, and (c) in addition to treatment effects, treatment by period interaction effects are examined. Starting from a linear mixed model analysis, the optimal allocation requires knowledge of the intraclass correlations in A and B, which is typically rather vague. To solve this, maximin versions of the designs are derived, which guarantee a power level across plausible ranges of the intraclass correlations at the lowest research costs. For the treatment effect, an extensive numerical evaluation shows that if the treatment costs of A and B are equal, or if the sum of the costs of one treatment and measurement per person is less than the remaining subject-specific costs (e.g., recruitment costs), the maximin crossover design is most efficient for ranges of intraclass correlations starting at 0.15 or higher. For other cost scenarios, the maximin parallel or extended parallel design can also become most efficient. For the treatment by period interaction, the maximin AA/BB design can be proven to be the most efficient. A simulation study supports these asymptotic results for small samples.
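
To make the maximin idea concrete, the sketch below selects, among candidate designs, the one with the best worst-case efficiency over an assumed range of intraclass correlations; the efficiency curves are hypothetical placeholders, not the variance expressions derived from the paper's linear mixed model, and costs are assumed equal across designs.

```python
def maximin_design(designs, icc_grid):
    """Maximin criterion: pick the design whose worst-case efficiency over
    the assumed range of intraclass correlations (ICC) is largest."""
    def worst_case(efficiency):
        return min(efficiency(rho) for rho in icc_grid)
    best = max(designs, key=lambda name: worst_case(designs[name]))
    return best, worst_case(designs[best])

# Purely illustrative efficiency shapes (NOT the paper's derived formulas).
designs = {
    "crossover_AB_BA":   lambda rho: 2.0 / (1.0 - rho),
    "parallel_A_B":      lambda rho: 1.0,
    "extended_parallel": lambda rho: 1.5,
}
icc_grid = [0.15 + 0.01 * k for k in range(36)]  # plausible ICC range 0.15-0.50
print(maximin_design(designs, icc_grid))
```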


2020 ◽  
Vol 34 (02) ◽  
pp. 2260-2267
Author(s):  
Haibin Wang ◽  
Sujoy Sikdar ◽  
Xiaoxi Guo ◽  
Lirong Xia ◽  
Yongzhi Cao ◽  
...  

We propose multi-type probabilistic serial (MPS) and multi-type random priority (MRP) as extensions of the well-known PS and RP mechanisms to multi-type resource allocation problems (MTRAs) with partial preferences. In our setting, there are multiple types of divisible items and a group of agents who have partial order preferences over bundles consisting of one item of each type. We show that for the unrestricted domain of partial order preferences, no mechanism satisfies both sd-efficiency and sd-envy-freeness. Notwithstanding this impossibility result, our main message is positive: when agents' preferences are represented by acyclic CP-nets, MPS satisfies sd-efficiency, sd-envy-freeness, ordinal fairness, and upper invariance, while MRP satisfies ex-post efficiency, sd-strategyproofness, and upper invariance, recovering the properties of PS and RP. In addition, we propose a hybrid mechanism, multi-type general dictatorship (MGD), combining the ideas of MPS and MRP, which satisfies sd-efficiency, equal treatment of equals, and decomposability under the unrestricted domain of partial order preferences.
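
For readers unfamiliar with the single-type mechanism that MPS generalizes, here is a minimal sketch of the classic probabilistic serial (simultaneous eating) algorithm for strict preferences over one type of divisible items; it does not implement the multi-type, CP-net-based extension proposed in the paper, and the preference profile at the end is hypothetical.

```python
def probabilistic_serial(preferences):
    """Classic single-type simultaneous eating: every agent eats her most
    preferred remaining item at unit speed until no acceptable item is left.
    preferences: dict agent -> list of items, most preferred first."""
    items = {i for pref in preferences.values() for i in pref}
    remaining = {i: 1.0 for i in items}
    shares = {a: {i: 0.0 for i in items} for a in preferences}

    def favorite(agent):
        return next((i for i in preferences[agent] if remaining[i] > 1e-12), None)

    while True:
        targets = {a: favorite(a) for a in preferences if favorite(a) is not None}
        if not targets:
            return shares
        # Each targeted item is consumed at a speed equal to its number of eaters.
        speed = {i: sum(1 for t in targets.values() if t == i) for i in set(targets.values())}
        dt = min(remaining[i] / speed[i] for i in speed)  # time until some item runs out
        for agent, item in targets.items():
            shares[agent][item] += dt
            remaining[item] -= dt

# Two agents with strict preferences over three divisible items.
print(probabilistic_serial({"a1": ["x", "y", "z"], "a2": ["x", "z", "y"]}))
```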


2017 ◽  
Vol 242 ◽  
pp. 1-22 ◽  
Author(s):  
Yann Chevaleyre ◽  
Ulle Endriss ◽  
Nicolas Maudet

1998 ◽  
Vol 3 (3) ◽  
pp. 195-213 ◽  
Author(s):  
Carmen Beviá

Author(s):  
Arpita Biswas ◽  
Siddharth Barman

We consider the problem of fairly allocating indivisible goods among agents under cardinality constraints and additive valuations. In this setting, we are given a partition of the entire set of goods (i.e., the goods are categorized), and a limit is specified on the number of goods that can be allocated from each category to any agent. The objective is to find a fair allocation in which the subset of goods assigned to any agent satisfies the given cardinality constraints. This problem naturally captures a number of resource-allocation applications and is a generalization of the well-studied unconstrained fair division problem. The two central notions of fairness in the context of fair division of indivisible goods are envy-freeness up to one good (EF1) and the (approximate) maximin share guarantee (MMS). We show that the existence and algorithmic guarantees established for these solution concepts in the unconstrained setting can essentially be achieved under cardinality constraints. Furthermore, focusing on the case wherein all the agents have the same additive valuation, we establish that EF1 allocations exist even under matroid constraints.
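
As background for the EF1 guarantee discussed above, here is a minimal sketch of the standard round-robin draft, which yields an EF1 allocation for additive valuations in the unconstrained setting; the paper's contribution is extending such guarantees to cardinality and matroid constraints, which this sketch does not handle. The valuations below are hypothetical.

```python
def round_robin(agents, goods, value):
    """Round-robin draft: in a fixed agent order, each agent repeatedly
    picks her highest-value remaining good.  With additive valuations this
    gives envy-freeness up to one good (EF1) in the unconstrained setting."""
    remaining = set(goods)
    bundles = {a: [] for a in agents}
    turn = 0
    while remaining:
        agent = agents[turn % len(agents)]
        pick = max(remaining, key=lambda g: value(agent, g))
        bundles[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

# Tiny illustrative instance.
vals = {("alice", "g1"): 5, ("alice", "g2"): 3, ("alice", "g3"): 2, ("alice", "g4"): 1,
        ("bob",   "g1"): 4, ("bob",   "g2"): 4, ("bob",   "g3"): 1, ("bob",   "g4"): 2}
print(round_robin(["alice", "bob"], ["g1", "g2", "g3", "g4"], lambda a, g: vals[(a, g)]))
```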


Author(s):  
Thomas Bläsius ◽  
Philipp Fischbeck ◽  
Tobias Friedrich ◽  
Maximilian Katzmann

The computational complexity of the VertexCover problem has been studied extensively. Most notably, it is NP-complete to find an optimal solution and typically NP-hard to find an approximation with reasonable factors. In contrast, recent experiments suggest that on many real-world networks the run time to solve VertexCover is far smaller than even the best known FPT approaches can explain. We link these observations to two properties that are observed in many real-world networks, namely a heterogeneous degree distribution and high clustering. To formalize these properties and explain the observed behavior, we analyze how a branch-and-reduce algorithm performs on hyperbolic random graphs, which have become increasingly popular for modeling real-world networks. In fact, we are able to show that the VertexCover problem on hyperbolic random graphs can be solved in polynomial time with high probability. The proof relies on interesting structural properties of hyperbolic random graphs. Since these predictions of the model are interesting in their own right, we conducted experiments on real-world networks showing that these properties are also observed in practice.
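
The branch-and-reduce paradigm referenced above can be illustrated with a small exact solver that applies two classic reductions (isolated and degree-one vertices) and branches on a maximum-degree vertex. This is a generic, exponential-worst-case sketch, not the specific algorithm or reduction set analyzed on hyperbolic random graphs in the paper.

```python
def min_vertex_cover(graph):
    """Minimal branch-and-reduce sketch for VertexCover.
    graph: dict mapping each vertex to the set of its neighbors."""
    adj = {v: set(ns) for v, ns in graph.items()}  # local mutable copy

    def remove(v):
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]

    cover = set()
    changed = True
    while changed:  # apply reductions exhaustively
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if not adj[v]:               # isolated vertex: never needed in a cover
                remove(v)
                changed = True
            elif len(adj[v]) == 1:       # degree-1 vertex: take its single neighbor
                u = next(iter(adj[v]))
                cover.add(u)
                remove(u)
                remove(v)
                changed = True
    if not adj:                          # no edges left after reductions
        return cover
    # Branch on a maximum-degree vertex v: either v is in the cover,
    # or v is not, in which case all of its neighbors must be.
    v = max(adj, key=lambda x: len(adj[x]))
    neighbors = set(adj[v])
    take_v = {u: ns - {v} for u, ns in adj.items() if u != v}
    take_neighbors = {u: ns - neighbors - {v} for u, ns in adj.items()
                      if u != v and u not in neighbors}
    left = cover | {v} | min_vertex_cover(take_v)
    right = cover | neighbors | min_vertex_cover(take_neighbors)
    return left if len(left) <= len(right) else right

# Small example: a path attached to a triangle.
print(sorted(min_vertex_cover({1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}})))
```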


2020 ◽  
Vol 68 ◽  
pp. 225-245
Author(s):  
Peter McGlaughlin ◽  
Jugal Garg

We consider the problem of fairly allocating a set of indivisible goods among n agents. Various fairness notions have been proposed within the rapidly growing field of fair division, but the Nash social welfare (NSW) serves as a focal point. In part, this follows from the 'unreasonable' fairness guarantees it provides, in the sense that a max NSW allocation meets multiple other fairness metrics simultaneously, all while satisfying a standard economic notion of efficiency, Pareto optimality. However, existing approximation algorithms fail to satisfy all of the remarkable fairness guarantees offered by a max NSW allocation, instead targeting only the NSW objective itself. We address this issue by presenting an allocation, computable in strongly polynomial time, that approximates the maximum NSW within a factor of 2 while also being Prop-1, 1/(2n)-MMS, and Pareto optimal. Our techniques are based on a market interpretation of a fractional max NSW allocation. We present novel definitions of fairness concepts in terms of market prices, and design a new scheme to round a market equilibrium into an integral allocation in a way that provides most of the fairness properties of an integral max NSW allocation.
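
To make the objective concrete, the sketch below computes the Nash social welfare of an integral allocation (the geometric mean of additive utilities) and finds an exact maximizer by brute force on a tiny hypothetical instance; it illustrates only the NSW objective, not the market-based rounding scheme of the paper.

```python
from itertools import product
from math import prod

def nsw(allocation, value, agents):
    """Nash social welfare: geometric mean of the agents' additive utilities."""
    utilities = [sum(value(a, g) for g in allocation[a]) for a in agents]
    return prod(utilities) ** (1.0 / len(agents))

def max_nsw_bruteforce(agents, goods, value):
    """Exhaustive search over all assignments of goods to agents; exponential,
    shown only to illustrate the objective that the paper approximates."""
    best_welfare, best_alloc = -1.0, None
    for assignment in product(range(len(agents)), repeat=len(goods)):
        alloc = {a: [] for a in agents}
        for good, owner in zip(goods, assignment):
            alloc[agents[owner]].append(good)
        welfare = nsw(alloc, value, agents)
        if welfare > best_welfare:
            best_welfare, best_alloc = welfare, alloc
    return best_welfare, best_alloc

# Hypothetical additive valuations for two agents and two goods.
vals = {("a1", "g1"): 3, ("a1", "g2"): 1, ("a2", "g1"): 1, ("a2", "g2"): 2}
print(max_nsw_bruteforce(["a1", "a2"], ["g1", "g2"], lambda a, g: vals[(a, g)]))
```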


Author(s):  
Mohammad Ghodsi ◽  
Mohammad Taghi Hajiaghayi ◽  
Masoud Seddighin ◽  
Saeed Seddighin ◽  
Hadi Yami

We study the problem of fair allocation for indivisible goods. We use the maximin share paradigm introduced by Budish [Budish E (2011) The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. J. Political Econom. 119(6):1061–1103.] as a measure of fairness. Kurokawa et al. [Kurokawa D, Procaccia AD, Wang J (2018) Fair enough: Guaranteeing approximate maximin shares. J. ACM 65(2):8.] were the first to investigate this fundamental problem in the additive setting. They showed that in delicately constructed examples, not everyone can obtain a utility of at least her maximin value. They mitigated this impossibility result with a beautiful observation: no matter how the utility functions are chosen, we can always allocate the items so that every agent's utility is at least 2/3 of her maximin value. They left open whether this bound can be improved. Our main contribution answers this question in the affirmative: we improve their approximation guarantee from 2/3 to 3/4.
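
For concreteness, the sketch below computes an agent's maximin share value by brute force; this is the benchmark of which the paper guarantees every agent a 3/4 fraction. The valuation numbers are hypothetical and the enumeration is exponential, for illustration only.

```python
from itertools import product

def maximin_share(values, n):
    """Maximin share of one agent with additive values: the best, over all
    partitions of the goods into n bundles, of the value of the worst bundle."""
    goods = list(values)
    best = 0.0
    for assignment in product(range(n), repeat=len(goods)):
        bundle_totals = [0.0] * n
        for good, bundle in zip(goods, assignment):
            bundle_totals[bundle] += values[good]
        best = max(best, min(bundle_totals))
    return best

# Hypothetical agent valuing five goods, to be divided as if among three agents.
values = {"g1": 6, "g2": 5, "g3": 4, "g4": 3, "g5": 3}
print(maximin_share(values, 3))  # the agent's maximin share threshold
```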

