PROPm Allocations of Indivisible Goods to Multiple Agents

Author(s): Artem Baklanov, Pranav Garimidi, Vasilis Gkatzelis, Daniel Schoepflin

We study the classic problem of fairly allocating a set of indivisible goods among a group of agents, and focus on the notion of approximate proportionality known as PROPm. Prior work showed that there exists an allocation that satisfies this notion of fairness for instances involving up to five agents, but fell short of proving that this is true in general. We extend this result to show that a PROPm allocation is guaranteed to exist for all instances, independent of the number of agents or goods. Our proof is constructive, providing an algorithm that computes such an allocation and, unlike prior work, the running time of this algorithm is polynomial in both the number of agents and the number of goods.
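As we read the PROPm notion, it asks that each agent's value for her own bundle, plus the value of her "maximin good" (the largest, over the other agents' bundles, of the least-valued good in that bundle from her perspective), be at least her proportional share. The following is a minimal sketch of checking that condition under additive valuations; the valuation matrix and allocation are illustrative, not from the paper.

```python
# Sketch: verify the PROPm condition for a fixed allocation under additive valuations.
# v[i][g] is agent i's value for good g; alloc[i] is the set of goods given to agent i.

def is_propm(v, alloc):
    n = len(alloc)
    goods = [g for bundle in alloc for g in bundle]
    for i in range(n):
        share = sum(v[i][g] for g in goods) / n          # agent i's proportional share
        own = sum(v[i][g] for g in alloc[i])             # value of agent i's own bundle
        # Maximin good: over the other bundles, the largest "least-valued good" from i's view.
        d_i = max((min(v[i][g] for g in alloc[j])
                   for j in range(n) if j != i and alloc[j]), default=0.0)
        if own + d_i < share - 1e-9:
            return False
    return True

# Toy instance (hypothetical numbers): 2 agents, 3 goods.
v = [[6, 1, 1],
     [2, 3, 3]]
alloc = [{0}, {1, 2}]
print(is_propm(v, alloc))  # True for this toy instance
```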

2017, Vol 13 (2), pp. 203-260
Author(s): Danielle Barth, Vsevolod Kapatsinski

Abstract: This paper presents a multimodel inference approach to linguistic variation, expanding on prior work by Kuperman and Bresnan (2012). We argue that corpus data often present the analyst with high model selection uncertainty. This uncertainty is inevitable given that language is highly redundant: every feature is predictable from multiple other features. However, the uncertainty involved in model selection is ignored by the standard method of selecting the single best model and inferring the effects of the predictors under the assumption that the best model is true. Multimodel inference avoids committing to a single model: rather, we make predictions based on the entire set of plausible models, with each model's contribution weighted by its predictive value. We argue that multimodel inference is superior to model selection both for the I-Language goal of inferring the mental grammars that generated the corpus and for the E-Language goal of predicting characteristics of future speech samples from the community represented by the corpus. Applying multimodel inference to the classic problem of English auxiliary contraction, we show that the choice between multimodel inference and model selection matters in practice: the best model may contain predictors that are not significant when the full set of plausible models is considered, and may omit predictors that are significant over the full set of models. We also contribute to the study of English auxiliary contraction. We document the effects of priming, contextual predictability, and specific syntactic constructions, and provide evidence against effects of phonological context.
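To make the weighting concrete, the sketch below uses Akaike (AIC) weights, one standard instantiation of multimodel averaging; the actual candidate models, predictors, and weighting scheme used by the authors are described in the paper, and all numbers here are illustrative placeholders.

```python
import numpy as np

# Sketch: model averaging with Akaike (AIC) weights, one standard form of multimodel inference.
# aics[k] is the AIC of candidate model k; preds[k] holds that model's predicted probability
# of contraction for each token. All values below are hypothetical.

def akaike_weights(aics):
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()            # AIC difference from the best candidate model
    w = np.exp(-0.5 * delta)
    return w / w.sum()                   # weights sum to one across the model set

def model_averaged_prediction(aics, preds):
    w = akaike_weights(aics)
    return w @ np.asarray(preds, dtype=float)   # weighted average prediction per token

aics = [1002.3, 1003.1, 1010.8]
preds = [[0.71, 0.10],
         [0.65, 0.14],
         [0.40, 0.30]]
print(akaike_weights(aics))
print(model_averaged_prediction(aics, preds))
```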


Author(s): Rupert Freeman, Sujoy Sikdar, Rohit Vaish, Lirong Xia

In fair division, equitability dictates that each participant receives the same level of utility. In this work, we study equitable allocations of indivisible goods among agents with additive valuations. While prior work has studied (approximate) equitability in isolation, we consider equitability in conjunction with other well-studied notions of fairness and economic efficiency. We show that the Leximin algorithm produces an allocation that satisfies equitability up to any good and Pareto optimality. We also give a novel algorithm that guarantees Pareto optimality and equitability up to one good in pseudopolynomial time. Our experiments on real-world preference data reveal that approximate envy-freeness, approximate equitability, and Pareto optimality can often be achieved simultaneously.
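For concreteness, "equitability up to any good" (EQX) as we read it requires that, after removing any single positively valued good from another agent's bundle, that agent's utility for her own bundle is no larger than ours. A minimal checker under additive valuations, with illustrative inputs only:

```python
# Sketch: check equitability up to any good (EQX) under additive valuations.
# u[i][g] is agent i's utility for good g; alloc[i] is agent i's bundle.

def is_eqx(u, alloc):
    n = len(alloc)
    own = [sum(u[i][g] for g in alloc[i]) for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for g in alloc[j]:
                # Removing any positively valued good from j's bundle must leave j
                # with utility no larger than i's.
                if u[j][g] > 0 and own[i] < own[j] - u[j][g] - 1e-9:
                    return False
    return True
```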


Author(s): Siddharth Barman, Sanath Kumar Krishnamurthy

We study Fisher markets that admit equilibria wherein each good is integrally assigned to some agent. While strong existence and computational guarantees are known for equilibria of Fisher markets with additive valuations (Eisenberg and Gale 1959; Orlin 2010), such equilibria, in general, assign goods fractionally to agents. Hence, Fisher markets are not directly applicable in the context of indivisible goods. In this work we show that one can always bypass this hurdle and, up to a bounded change in agents' budgets, obtain markets that admit an integral equilibrium. We refer to such markets as pure markets and show that, for any given Fisher market (with additive valuations), one can efficiently compute a "nearby" pure market with an accompanying integral equilibrium. Our work on pure markets leads to novel algorithmic results for fair division of indivisible goods. Prior work in discrete fair division has shown that, under additive valuations, there always exist allocations that simultaneously achieve the seemingly incompatible properties of fairness and efficiency (Caragiannis et al. 2016); here fairness refers to envy-freeness up to one good (EF1) and efficiency corresponds to Pareto efficiency. However, polynomial-time algorithms are not known for finding such allocations. Considering relaxations of proportionality and EF1, respectively, as our notions of fairness, we show that fair and Pareto efficient allocations can be computed in strongly polynomial time.
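To make "integral equilibrium" concrete: under additive valuations, an integral assignment together with prices is an equilibrium when every agent spends exactly her budget and every good she receives maximizes her value-per-unit-price ("bang per buck"). The sketch below only checks a given assignment and assumes positive prices; it does not compute the nearby pure market described in the paper.

```python
# Sketch: check whether an integral assignment together with prices is a Fisher market
# equilibrium under additive valuations (positive prices assumed).
# v[i][g]: agent i's value for good g; price[g]: price of good g;
# budget[i]: agent i's budget; owner[g]: the agent that good g is assigned to.

def is_integral_equilibrium(v, price, budget, owner, tol=1e-9):
    n, m = len(budget), len(price)
    # Market clearing: every agent's spending equals her budget.
    spend = [0.0] * n
    for g in range(m):
        spend[owner[g]] += price[g]
    if any(abs(spend[i] - budget[i]) > tol for i in range(n)):
        return False
    # Optimality: every good is held by an agent for whom it maximizes bang per buck.
    for i in range(n):
        mbb = max(v[i][g] / price[g] for g in range(m))
        if any(owner[g] == i and v[i][g] / price[g] < mbb - tol for g in range(m)):
            return False
    return True
```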


Author(s): Anton Andreychuk, Konstantin Yakovlev, Dor Atzmon, Roni Stern

Multi-Agent Pathfinding (MAPF) is the problem of finding paths for multiple agents such that every agent reaches its goal and the agents do not collide. Most prior work on MAPF was restricted to grids, assumed that agents' actions have uniform duration, and discretized time into timesteps. In this work, we propose a MAPF algorithm that makes none of these assumptions, is complete, and provides provably optimal solutions. The algorithm is based on a novel combination of Safe Interval Path Planning (SIPP), a continuous-time single-agent planning algorithm, and Conflict-Based Search (CBS). We analyze this algorithm, discuss its pros and cons, and evaluate it experimentally on several standard benchmarks.
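The continuous-time ingredient can be illustrated by the bookkeeping SIPP relies on: instead of discrete timesteps, each location keeps a set of safe intervals, the maximal time windows during which it is not blocked by another agent. A minimal sketch of computing safe intervals from a list of blocked intervals (the intervals below are illustrative):

```python
import math

# Sketch: turn a location's blocked time intervals into SIPP-style safe intervals,
# i.e., maximal windows of continuous time during which the location is free.

def safe_intervals(blocked):
    intervals, t = [], 0.0
    for start, end in sorted(blocked):
        if start > t:
            intervals.append((t, start))   # the gap before this blocked window is safe
        t = max(t, end)
    intervals.append((t, math.inf))        # safe forever after the last blocked window
    return intervals

print(safe_intervals([(2.5, 4.0), (6.0, 7.5)]))
# [(0.0, 2.5), (4.0, 6.0), (7.5, inf)]
```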


2009, Vol 35, pp. 161-191
Author(s): V. Conitzer

Voting is a general method for aggregating the preferences of multiple agents. Each agent ranks all the possible alternatives, and based on this, an aggregate ranking of the alternatives (or at least a winning alternative) is produced. However, when there are many alternatives, it is impractical to simply ask agents to report their complete preferences. Rather, the agents' preferences, or at least the relevant parts thereof, need to be elicited. This is done by asking the agents a (hopefully small) number of simple queries about their preferences, such as comparison queries, which ask an agent to compare two of the alternatives. Prior work on preference elicitation in voting has focused on the case of unrestricted preferences. It has been shown that in this setting, it is sometimes necessary to ask each agent (almost) as many queries as would be required to determine an arbitrary ranking of the alternatives. In contrast, in this paper, we focus on single-peaked preferences. We show that such preferences can be elicited using only a linear number of comparison queries, if either the order with respect to which preferences are single-peaked is known, or at least one other agent's complete preferences are known. We show that using a sublinear number of queries does not suffice. We also consider the case of cardinally single-peaked preferences. For this case, we show that if the alternatives' cardinal positions are known, then an agent's preferences can be elicited using only a logarithmic number of queries; however, we also show that if the cardinal positions are not known, then a sublinear number of queries does not suffice. We present experimental results for all elicitation algorithms. We also consider the problem of only eliciting enough information to determine the aggregate ranking, and show that even for this more modest objective, a sublinear number of queries per agent does not suffice for known ordinal or unknown cardinal positions. Finally, we discuss whether and how these techniques can be applied when preferences are almost single-peaked.
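One way to see why a logarithmic number of comparison queries can suffice when the alternatives' cardinal positions are known: if an agent ranks alternatives by distance to a private peak, a comparison between two alternatives only reveals on which side of their midpoint the peak lies, so binary search over the sorted pairwise midpoints pins down the entire ranking. The sketch below illustrates this style of argument under that distance-to-peak assumption; it is not the paper's algorithm verbatim, and `ask` is a hypothetical comparison-query oracle.

```python
from itertools import combinations

# Sketch: elicit a full ranking with O(log m) comparison queries when the alternatives'
# cardinal positions are known, assuming the agent ranks alternatives by distance to a
# private peak. ask(a, b) is a comparison-query oracle: True iff a is preferred to b.
# Ties at midpoints are ignored in this sketch.

def elicit_ranking(positions, ask):
    m = len(positions)
    # The distance ranking changes only when the peak crosses the midpoint of some pair,
    # so locating the peak between consecutive midpoints determines the whole ranking.
    pairs = [tuple(sorted(p, key=lambda x: positions[x])) for p in combinations(range(m), 2)]
    mids = sorted(((positions[a] + positions[b]) / 2, a, b) for a, b in pairs)
    lo, hi, queries = float("-inf"), float("inf"), 0
    while mids:                                    # binary search over the midpoints
        mid, left, right = mids[len(mids) // 2]
        queries += 1
        if ask(left, right):                       # left alternative closer: peak < mid
            hi, mids = mid, [t for t in mids if t[0] < mid]
        else:                                      # otherwise the peak is at or above mid
            lo, mids = mid, [t for t in mids if t[0] > mid]
    # Any point strictly between lo and hi induces the same ranking by distance.
    peak = lo + 1.0 if hi == float("inf") else hi - 1.0 if lo == float("-inf") else (lo + hi) / 2
    return sorted(range(m), key=lambda x: abs(positions[x] - peak)), queries

positions = [0.0, 1.0, 3.0, 6.0]                   # hypothetical alternative positions
hidden_peak = 2.2
ranking, q = elicit_ranking(
    positions, lambda a, b: abs(positions[a] - hidden_peak) < abs(positions[b] - hidden_peak))
print(ranking, q)                                  # [2, 1, 0, 3] using 3 queries
```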


Author(s): Georgios Amanatidis, Georgios Birmpas, Aris Filos-Ratsikas, Alexandros Hollender, Alexandros A. Voudouris

We consider the classic problem of fairly allocating indivisible goods among agents with additive valuation functions and explore the connection between two prominent fairness notions: maximum Nash welfare (MNW) and envy-freeness up to any good (EFX). We establish that an MNW allocation is always EFX as long as there are at most two possible values for the goods, whereas this implication is no longer true for three or more distinct values. As a notable consequence, this proves the existence of EFX allocations for these restricted valuation functions. While the efficient computation of an MNW allocation for two possible values remains an open problem, we present a novel algorithm for directly constructing EFX allocations in this setting. Finally, we study the question of whether an MNW allocation implies any EFX guarantee for general additive valuation functions under a natural new interpretation of approximate EFX allocations.
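For concreteness, EFX under additive (nonnegative) valuations says that no agent envies another agent's bundle once any single good is removed from it. A minimal checker with illustrative inputs:

```python
# Sketch: check envy-freeness up to any good (EFX) under additive (nonnegative) valuations.
# v[i][g] is agent i's value for good g; alloc[i] is agent i's bundle.

def is_efx(v, alloc):
    n = len(alloc)
    for i in range(n):
        own = sum(v[i][g] for g in alloc[i])
        for j in range(n):
            if i == j or not alloc[j]:
                continue
            other = sum(v[i][g] for g in alloc[j])
            # It suffices to remove the good in A_j that agent i values least.
            if own < other - min(v[i][g] for g in alloc[j]) - 1e-9:
                return False
    return True
```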


2018
Author(s): ◽ Bo Song

The enormous size and fast-evolving nature of the World Wide Web demand ever more efficient PageRank updating algorithms. Web evolution involves two kinds of changes: (1) link structure modification; (2) page insertion/deletion. When the evolution is restricted to link insertion/deletion, we demonstrate, both theoretically and experimentally, the benefit of using the previous PageRank to initialize the current PageRank computation. When page insertion/deletion occurs, effectively using the previous PageRank information to facilitate the current PageRank computation has long been a challenge. To tackle the general case, we introduce a so-called "virtual web" by adding the inserted nodes to the previous web along with a specific "in-home" link structure, in which in-links from the previous web and out-links to the previous web are excluded. Through the virtual web, we obtain a virtual initialization that can be used to compute the current PageRank efficiently. This virtual initialization is "unbiased" in that it assumes the least under the available knowledge. The virtual web is then integrated with the power-iteration and Gauss-Southwell methods to solve the node insertion/deletion problem, yielding the Virtual Web Power-Iteration (VWPI) and Virtual Web Gauss-Southwell (VWGS) methods, respectively. Further, we propose an optimized approach based on the VWGS method for handling node insertions. Experimental results show that the VWGS algorithm significantly outperforms conventional PageRank computation based on the original model. On the Twitter-2010 dataset, with 42M nodes and 1.5B edges, for a perturbation of 400k node and 14 million link insertions plus deletions applied at once, our algorithm is about 20 times faster in the number of iterations and 3 times faster in running time than the Gauss-Southwell method starting from scratch. On the soc-LiveJournal dataset with up to a 20% node insertion, the optimized VWGS method achieves another 28% gain over the original VWGS method. Compared with the prior work of Ohsaka et al. [32], our method is 1800x faster per link insertion/deletion on the Twitter-2010 dataset under a similar experimental environment.
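The warm-start benefit described above can be illustrated with plain power iteration: restart from the previous PageRank vector instead of the uniform distribution. The sketch below is generic (dense adjacency matrix, uniform dangling-node handling) and only illustrates the warm-start idea, not the VWPI/VWGS constructions.

```python
import numpy as np

# Sketch: power iteration for PageRank with an optional warm start from a previous vector.
# A[i, j] = 1 if there is a link from node i to node j (dense adjacency, for illustration only).

def pagerank(A, alpha=0.85, x0=None, tol=1e-10, max_iter=1000):
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    x = np.full(n, 1.0 / n) if x0 is None else x0 / x0.sum()   # warm start if provided
    for it in range(max_iter):
        # Push rank along out-links; dangling nodes spread their rank uniformly.
        spread = np.where(out_deg > 0, x / np.maximum(out_deg, 1), 0.0)
        x_new = alpha * (A.T @ spread + x[out_deg == 0].sum() / n) + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new, it + 1
        x = x_new
    return x, max_iter

# After a small perturbation of the graph, calling pagerank(A_new, x0=previous_pagerank)
# typically converges in far fewer iterations than the uniform cold start.
```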


2020, Vol 2020 (10), pp. 181-1-181-7
Author(s): Takahiro Kudo, Takanori Fujisawa, Takuro Yamaguchi, Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic problem of image deblurring, which assumes that the PSF is known and spatially uniform. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Although CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs that arise in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing in a deblurred image. Our method has three key points. The first is that our network architecture is able to preserve both large and small features in the image. The second is that the training dataset is created to preserve the details. The third is that we extend the images to minimize the effects of large ringing on the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method, both quantitatively and qualitatively.
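The third point (extending the image so that boundary ringing lands outside the frame) can be illustrated with simple reflection padding; the abstract does not specify the paper's extension scheme, so the padding choice and the `deconvolve` placeholder below are assumptions, with `deconvolve` standing in for any non-blind routine (a CNN, a Wiener filter, etc.).

```python
import numpy as np

# Sketch: extend the blurred image before non-blind deconvolution and crop the result,
# so that ringing caused by the image borders falls mostly in the discarded margin.
# deconvolve(img, psf) is a placeholder for any non-blind deconvolution routine.

def deconvolve_with_extension(blurred, psf, deconvolve):
    pad_h, pad_w = psf.shape                       # margin proportional to the PSF size
    extended = np.pad(blurred, ((pad_h, pad_h), (pad_w, pad_w)), mode="reflect")
    restored = deconvolve(extended, psf)
    return restored[pad_h:-pad_h, pad_w:-pad_w]    # crop back to the original frame
```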


2017
Author(s): Belinda Slakman, Richard West

This article reviews prior work studying reaction kinetics in solution, with the goal of using this information to improve detailed kinetic modeling in the solvent phase. Both experimental and computational methods for calculating reaction rates in liquids are reviewed. Previous studies that used such methods to determine solvent effects are then analyzed by reaction family. Many of these studies correlate the kinetic solvent effect with one or more solvent parameters or properties of the reacting species, but such correlations are not always possible, and investigations are usually done on too few reactions and solvents to truly generalize. From these studies, we present suggestions on how best to use data to generalize solvent effects for many different reaction types in a high-throughput manner.


Author(s): Jangbae Jeon

Abstract: This work presents a novel method of continuous improvement for faster, better, and cheaper TEM sample preparation using Cut Look and Measure (CLM). The process is improved through operational monitoring of daily beam conditions, end products, bulk thickness control, recipe usage, and tool running time. This produces a consequent decrease in rework rate and process time. In addition, it increases throughput with better-quality TEM samples.

