A Simple and Approximately Optimal Mechanism for a Buyer with Complements

2020
Author(s): Alon Eden, Michal Feldman, Ophir Friedler, Inbal Talgam-Cohen, S. Matthew Weinberg

Recent literature on approximately optimal revenue maximization has shown that in settings where agent valuations for items are complement-free, the better of selling the items separately and bundling them together guarantees a constant fraction of the optimal revenue. However, most real-world settings involve some degree of complementarity among items. The role that complementarity plays in the trade-off of simplicity versus optimality has been an obvious missing piece of the puzzle. In “A Simple and Approximately Optimal Mechanism for a Buyer with Complements,” the authors show that the same simple selling mechanism—the better of selling separately and as a grand bundle—guarantees a $\Theta(d)$-approximation to the optimal revenue, where $d$ is a measure of the degree of complementarity. One key modeling contribution is a tractable notion of “degree of complementarity” that admits meaningful results and insights—they demonstrate that previous definitions fall short in this regard.
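The benchmark mechanism itself is easy to simulate. The sketch below is a minimal Monte-Carlo illustration, not the paper's analysis: it draws a toy valuation with a synergy bonus (an assumed, purely illustrative form of complementarity), grid-searches per-item posted prices and a grand-bundle price, and reports the better of the two revenues. All parameters and the valuation form are hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS = 3
ITEMS = tuple(range(N_ITEMS))

def sample_valuation():
    """Toy set valuation with complements: additive base values plus a
    synergy bonus for every extra item in the purchased set."""
    base = rng.uniform(0.0, 1.0, N_ITEMS)
    synergy = rng.uniform(0.0, 0.5)
    return lambda S: base[list(S)].sum() + synergy * max(len(S) - 1, 0)

def separate_revenue(vals, prices):
    """Average revenue when the buyer faces per-item prices and picks the
    utility-maximising subset (buying nothing is allowed)."""
    total = 0.0
    for v in vals:
        best_u, best_pay = 0.0, 0.0
        for r in range(1, N_ITEMS + 1):
            for S in itertools.combinations(ITEMS, r):
                pay = sum(prices[i] for i in S)
                if v(S) - pay > best_u:
                    best_u, best_pay = v(S) - pay, pay
        total += best_pay
    return total / len(vals)

def bundle_revenue(vals, q):
    """Average revenue when only the grand bundle is offered, at price q."""
    return float(np.mean([q if v(ITEMS) >= q else 0.0 for v in vals]))

vals = [sample_valuation() for _ in range(500)]
grid = np.linspace(0.05, 3.0, 20)
srev = max(separate_revenue(vals, (p,) * N_ITEMS) for p in grid)  # symmetric prices, for brevity
brev = max(bundle_revenue(vals, q) for q in grid)
print(f"SRev ~ {srev:.3f}  BRev ~ {brev:.3f}  better-of ~ {max(srev, brev):.3f}")
```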

2015
Vol 31, pp. 23
Author(s): Evelyn Sample, Marije Michel

Task repetition for adult and young learners of English as a foreign language (EFL) has received growing interest in the recent task-based literature (Bygate, 2009; Hawkes, 2012; Mackey, Kanganas, & Oliver, 2007; Pinter, 2007b). Earlier work suggests that second language (L2) learners benefit from repeating the same or a slightly different task. Task repetition has been shown to enhance fluency and may also add to the complexity or accuracy of production. However, few investigations have taken a closer look at the underlying relationships between the three dimensions of task performance: complexity, accuracy, and fluency (CAF). Using Skehan’s (2009) trade-off hypothesis as an explanatory framework, our study aims to fill this gap by investigating interactions among CAF measures. We report on the repeated performance of an oral spot-the-difference task by six 9-year-old EFL learners. Mirroring earlier work, our data reveal significant increases in fluency through task repetition. Correlational analyses show that in initial performances, gains in one dimension come at the expense of another; by the third performance, however, these trade-off effects disappear. Further qualitative analyses support our interpretation that, with growing task familiarity, students are able to focus their attention on all three CAF dimensions simultaneously.


Author(s): Julian Berk, Sunil Gupta, Santu Rana, Svetha Venkatesh

In order to improve the performance of Bayesian optimisation, we develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function. This is done by sampling the exploration-exploitation trade-off parameter from a distribution. We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret. We also provide results showing that our method achieves better performance than GP-UCB in a range of real-world and synthetic problems.
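Read literally, the modification is small: replace the fixed or scheduled trade-off parameter in the GP-UCB acquisition with a fresh draw from a distribution at every iteration. The sketch below illustrates this on a toy one-dimensional problem; the objective, kernel, and exponential sampling distribution are illustrative assumptions rather than the authors' choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x          # toy objective to maximise
X_grid = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)     # candidate points

X = rng.uniform(-2, 2, (3, 1))                          # small initial design
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)

for t in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(X_grid, return_std=True)
    # Sample the exploration-exploitation parameter instead of using a
    # deterministic schedule; the exponential distribution is an assumed choice.
    beta_t = rng.exponential(scale=2.0)
    ucb = mu + np.sqrt(beta_t) * sigma
    x_next = X_grid[np.argmax(ucb)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best observed value:", y.max())
```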


2010
Vol 3 (2), pp. 1-8
Author(s): David Antoni, Freddy Leal

Regulations are often imposed in order to correct failures in the market, whether the failure is a result of the functioning of a market or the behaviour of a government. However, every regulatory intervention brings up a question: how ethical is the regulation? Even if a regulatory intervention could achieve more efficiency or more equity, that does not necessarily mean it is ethical. The concept of ethics is necessarily subjective; it is based on the morals and standards of a society. Yet even though a society may be concerned about ethics, the issues of equity and altruism matter, as does the way in which firms produce and seek to rationally and efficiently maximize profit. Defining ethics is difficult, and defining ethical regulation is even more difficult. Any form of regulation is a tool for intervention used to balance the trade-off between efficiency and equity and to create harmony between a market or economy and the society within which it functions. In an ideal world, any government intervention would be implemented for the greater benefit of all. However, this does not always happen amid the vicissitudes of the real world, when governments regulate and intervene in markets that are, in turn, based on the principle of rational self-interest and efficiency. In this paper we discuss the role of society in market regulation. The discussion focuses on how society shapes ethics and, therefore, what constitutes ethical regulation. In fact, we argue that equity, efficiency, or even failures are not the main factors to consider when regulating. It is society that defines ethics, and how society understands ethics influences the regulatory environment.


Author(s): Artur Gorokh, Siddhartha Banerjee, Krishnamurthy Iyer

Nonmonetary mechanisms for repeated allocation and decision making are gaining widespread use in many real-world settings. Our aim in this work is to study the performance and incentive properties of simple mechanisms based on artificial currencies in such settings. To this end, we make the following contributions: For a general allocation setting, we provide two black-box approaches to convert any one-shot monetary mechanism to a dynamic nonmonetary mechanism using an artificial currency that simultaneously guarantees vanishing gains from nontruthful reporting over time and vanishing losses in performance. The two mechanisms trade off between their applicability and their computational and informational requirements. Furthermore, for settings with two agents, we show that a particular artificial currency mechanism also results in a vanishing price of anarchy.
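As a rough illustration of how an artificial-currency mechanism operates (one common flavour, not the paper's black-box construction), the sketch below runs a repeated two-agent allocation in which agents bid scrip for an indivisible item each round and the winner's payment is transferred to the loser; the bidding heuristic and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
budgets = np.array([T / 2.0, T / 2.0])        # initial scrip endowments
welfare = 0.0

for t in range(T):
    values = rng.uniform(0, 1, 2)             # per-round private values
    # Truthful-ish bidding heuristic: bid your value, capped by remaining scrip.
    bids = np.minimum(values, budgets)
    winner = int(np.argmax(bids))
    loser = 1 - winner
    pay = bids[winner]
    budgets[winner] -= pay                    # scrip flows from winner to loser
    budgets[loser] += pay
    welfare += values[winner]

# First-best per-round welfare E[max(U,U)] = 2/3 for uniform values.
print(f"avg per-round welfare: {welfare / T:.3f}  (first-best ~ 0.667)")
```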


2020
Vol 110 (4), pp. 1206-1230
Author(s): Abhijit V. Banerjee, Sylvain Chassang, Sergio Montero, Erik Snowberg

This paper studies the problem of experiment design by an ambiguity-averse decision-maker who trades off subjective expected performance against robust performance guarantees. This framework accounts for real-world experimenters’ preference for randomization. It also clarifies the circumstances in which randomization is optimal: when the available sample size is large and robustness is an important concern. We apply our model to shed light on the practice of rerandomization, used to improve balance across treatment and control groups. We show that rerandomization creates a trade-off between subjective performance and robust performance guarantees. However, robust performance guarantees diminish very slowly with the number of rerandomizations. This suggests that moderate levels of rerandomization usefully expand the set of acceptable compromises between subjective performance and robustness. Targeting a fixed quantile of balance is safer than targeting an absolute balance objective. (JEL C90, D81)
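A minimal sketch of rerandomization targeting a fixed quantile of a balance statistic is given below; the Mahalanobis balance measure, the 10% quantile, and the toy covariates are illustrative assumptions, not the paper's experimental design.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 4
X = rng.normal(size=(n, k))                        # covariates
Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))

def balance(assign):
    """Mahalanobis distance between treated and control covariate means."""
    diff = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
    return float(diff @ Sigma_inv @ diff)

def random_assignment():
    a = np.zeros(n, dtype=int)
    a[rng.choice(n, n // 2, replace=False)] = 1
    return a

# Calibrate the acceptance threshold as a quantile of the balance statistic ...
draws = [balance(random_assignment()) for _ in range(2000)]
threshold = np.quantile(draws, 0.10)               # accept the best 10% of draws

# ... then rerandomize until the drawn assignment clears that quantile.
assign, tries = random_assignment(), 1
while balance(assign) > threshold:
    assign, tries = random_assignment(), tries + 1
print(f"accepted after {tries} draws, balance = {balance(assign):.4f}")
```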


2016
Vol 56, pp. 119-152
Author(s): Javad Azimi, Xiaoli Fern, Alan Fern

Motivated by a real-world problem, we study a novel budgeted optimization problem where the goal is to optimize an unknown function f(.) given a budget by requesting a sequence of samples from the function. In our setting, however, evaluating the function at precisely specified points is not practically possible due to prohibitive costs. Instead, we can only request constrained experiments. A constrained experiment, denoted by Q, specifies a subset of the input space for the experimenter to sample the function from. The outcome of Q includes a sampled experiment x, and its function output f(x). Importantly, as the constraints of Q become looser, the cost of fulfilling the request decreases, but the uncertainty about the location x increases. Our goal is to manage this trade-off by selecting a set of constrained experiments that best optimize f(.) within the budget. We study this problem in two different settings, the non-sequential (or batch) setting where a set of constrained experiments is selected at once, and the sequential setting where experiments are selected one at a time. We evaluate our proposed methods for both settings using synthetic and real functions. The experimental results demonstrate the efficacy of the proposed methods.
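To make the setting concrete, the sketch below treats each constrained experiment as an interval of a one-dimensional input space with an attached cost, and greedily selects the interval with the best (crudely estimated) improvement per unit cost under a Gaussian-process surrogate. The acquisition rule, intervals, and costs are illustrative assumptions, not the methods proposed in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
f = lambda x: np.exp(-(x - 0.3) ** 2 / 0.05) + 0.5 * np.exp(-(x - 0.8) ** 2 / 0.01)

# Candidate constrained experiments: (lo, hi, cost); a looser interval costs
# less, but where the sample will land inside it is uncertain.
queries = [(0.0, 1.0, 1.0), (0.0, 0.5, 2.0), (0.5, 1.0, 2.0),
           (0.2, 0.4, 4.0), (0.7, 0.9, 4.0)]
budget = 20.0

X = np.array([[0.05], [0.95]])                     # two free initial observations
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)

while True:
    gp.fit(X, y)
    best = y.max()
    scores = []
    for lo, hi, cost in queries:
        if cost > budget:
            scores.append(-np.inf)
            continue
        xs = rng.uniform(lo, hi, (200, 1))          # possible sample locations
        mu, sd = gp.predict(xs, return_std=True)
        gain = np.maximum(mu + sd * rng.standard_normal(200) - best, 0.0).mean()
        scores.append(gain / cost)                  # crude improvement per unit cost
    if np.max(scores) == -np.inf:
        break                                       # nothing affordable is left
    lo, hi, cost = queries[int(np.argmax(scores))]
    x_obs = rng.uniform(lo, hi)                     # the experimenter picks the point
    X = np.vstack([X, [[x_obs]]])
    y = np.append(y, f(x_obs))
    budget -= cost

print(f"best f found: {y.max():.3f}, leftover budget: {budget:.1f}")
```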


2020
Vol 15 (3), pp. 1221-1278
Author(s): Mariagiovanna Baccara, SangMok Lee, Leeat Yariv

We study a dynamic matching environment where individuals arrive sequentially. There is a trade‐off between waiting for a thicker market, allowing for higher‐quality matches, and minimizing agents' waiting costs. The optimal mechanism cumulates a stock of incongruent pairs up to a threshold and matches all others in an assortative fashion instantaneously. In discretionary settings, a similar protocol ensues in equilibrium, but expected queues are inefficiently long. We quantify the welfare gain from centralization, which can be substantial, even for low waiting costs. We also evaluate welfare improvements generated by alternative priority protocols.
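The threshold policy can be mimicked in a few lines: congruent pairs are matched on arrival, incongruent pairs accumulate in a pool until it reaches a threshold, and pooling lets some of them be re-matched assortatively at the cost of waiting. The sketch below is an illustrative simulation under assumed payoffs and an assumed arrival process, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 10_000
MATCH_SAME, MATCH_CROSS, WAIT_COST = 1.0, 0.4, 0.01

def run(threshold):
    pool, welfare = [], 0.0                 # pool holds (kind, arrival_time)
    for t in range(T):
        a, b = rng.integers(0, 2, 2)        # types on sides A and B
        if a == b:
            welfare += MATCH_SAME           # congruent pair: match on arrival
            continue
        pool.append((a, t))                 # incongruent: kind identified by side-A type
        if len(pool) >= threshold:
            kinds = [p[0] for p in pool]
            swaps = min(kinds.count(0), kinds.count(1))   # opposite kinds re-match assortatively
            welfare += 2 * swaps * MATCH_SAME
            welfare += (len(pool) - 2 * swaps) * MATCH_CROSS
            welfare -= WAIT_COST * sum(t - s for _, s in pool)
            pool.clear()
    return welfare / T

for thr in (1, 2, 4, 8, 16):
    print(f"threshold {thr:2d}: avg welfare per arrival {run(thr):.3f}")
```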


Author(s): Oluwaseyi Feyisetan, Abhinav Aggarwal, Zekun Xu, Nathanael Teissier

Accurately learning from user data while ensuring quantifiable privacy guarantees provides an opportunity to build better ML models while maintaining user trust. Recent literature has demonstrated the applicability of a generalized form of Differential Privacy to provide guarantees over text queries. Such mechanisms add privacy-preserving noise to high-dimensional vector representations of text and return a text-based projection of the noisy vectors. However, these mechanisms are sub-optimal in their trade-off between privacy and utility. In this proposal paper, we describe some challenges in balancing this trade-off. At a high level, we make two proposals: (1) a framework called LAC, which defers some of the noise to a privacy amplification step, and (2) an additional suite of three techniques for calibrating the noise based on the local region around a word. Our objective in this paper is not to evaluate a single solution but to further the conversation on these challenges and chart pathways toward building better mechanisms.
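The underlying release mechanism is straightforward to sketch: perturb a word's embedding with calibrated noise and publish the vocabulary word nearest to the noisy vector. The toy embeddings and the noise distribution below are illustrative assumptions; this is the generic metric-DP-style baseline, not the proposed LAC framework.

```python
import numpy as np

rng = np.random.default_rng(6)
emb = {                                            # tiny stand-in vocabulary
    "good":  np.array([0.9, 0.1, 0.0]),
    "great": np.array([0.8, 0.2, 0.1]),
    "bad":   np.array([-0.9, 0.1, 0.0]),
    "movie": np.array([0.0, 0.9, 0.3]),
}
words = list(emb)
vecs = np.stack([emb[w] for w in words])

def privatize(word, epsilon):
    """Noisy-embedding release: perturb, then return the nearest vocabulary word."""
    d = vecs.shape[1]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=1.0 / epsilon)   # heavier noise for small epsilon
    noisy = emb[word] + radius * direction
    return words[int(np.argmin(np.linalg.norm(vecs - noisy, axis=1)))]

for eps in (0.5, 2.0, 10.0):
    out = [privatize("good", eps) for _ in range(1000)]
    same = sum(w == "good" for w in out) / len(out)
    print(f"epsilon={eps:4.1f}: released 'good' unchanged {same:.0%} of the time")
```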


Biosensors
2020
Vol 10 (8), pp. 100
Author(s): Ekaterina Martynko, Dmitry Kirsanov

The field of biosensing is rapidly developing, and the number of novel sensor architectures and different sensing elements is growing fast. One of the most important features of all biosensors is their very high selectivity, stemming from the use of bioreceptor recognition elements. The typical calibration of a biosensor requires simple univariate regression to relate a response value to an analyte concentration. Nevertheless, dealing with complex real-world sample matrices may sometimes lead to undesired interference effects from various components. This is where chemometric tools can do a good job of extracting relevant information, improving selectivity, and circumventing non-linearity in the response. This brief review aims to discuss the motivation for applying chemometric tools in biosensing and to provide some examples of such applications from the recent literature.
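As a concrete illustration of the point about interference, the sketch below simulates a small sensor array whose channels respond to both an analyte and an interferent, then compares a single-channel univariate calibration with a multivariate PLS model; the simulated responses and all parameters are assumptions, not data from the review.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n, channels = 200, 12
analyte = rng.uniform(0, 10, n)
interferent = rng.uniform(0, 5, n)

# Each channel responds to a mix of the analyte and the interferent plus noise.
w_a = rng.uniform(0.2, 1.0, channels)
w_i = rng.uniform(0.1, 0.8, channels)
X = np.outer(analyte, w_a) + np.outer(interferent, w_i) + rng.normal(0, 0.3, (n, channels))

X_tr, X_te, y_tr, y_te = train_test_split(X, analyte, random_state=0)

uni = LinearRegression().fit(X_tr[:, [0]], y_tr)        # univariate: one channel only
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)     # multivariate calibration

print("univariate R^2:", round(uni.score(X_te[:, [0]], y_te), 3))
print("PLS        R^2:", round(pls.score(X_te, y_te), 3))
```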

