Preserving Bidder Privacy in Assignment Auctions: Design and Measurement

2020 ◽  
Vol 66 (7) ◽  
pp. 3162-3182
Author(s):  
De Liu ◽  
Adib Bagh

Motivated by bidders’ interests in concealing their private information in auctions, we propose an ascending clock auction for unit-demand assignment problems that economizes on bidder information revelation, together with a new general-purpose measure of information revelation. Our auction uses an iterative partial reporting design such that for a given set of prices, not all bidders are required to report their demands, and when they are, they reveal a single preferred item at a time instead of all. Our design can better preserve bidder privacy while maintaining several good properties: sincere bidding is an ex post Nash equilibrium, ending prices are path independent, and efficiency is achieved if the auction starts with the auctioneer’s reservation values. Our measurement of information revelation is based on Shannon’s entropy and can be used to compare a wide variety of auction and nonauction mechanisms. We propose a hybrid quasi–Monte Carlo procedure for computing this measure. Our numerical simulations show that our auction consistently outperforms a full-reporting benchmark with up to 18% less entropy reduction and scales to problems of over 100,000 variables. This paper was accepted by Chris Forman, information systems.
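The entropy-based notion of information revelation can be illustrated with a minimal sketch. The prior and the two report types below are hypothetical examples, not the paper's auction model: revelation is measured as the reduction in Shannon entropy of an observer's beliefs about a bidder's private valuation, so a partial report (which only rules out some valuations) reveals less than a full report.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical prior: a bidder's valuation is one of four equally likely levels.
prior = [0.25, 0.25, 0.25, 0.25]

# Full reporting reveals the exact valuation: posterior entropy is 0.
full_report_posterior = [1.0, 0.0, 0.0, 0.0]

# Partial reporting (e.g., "my valuation is at least the current price")
# only rules out the lowest level, leaving three equally likely values.
partial_report_posterior = [0.0, 1/3, 1/3, 1/3]

reduction_full = entropy(prior) - entropy(full_report_posterior)       # 2.0 bits
reduction_partial = entropy(prior) - entropy(partial_report_posterior)  # ~0.415 bits

print(reduction_full, reduction_partial)
```

The gap between the two reductions is the privacy saving a partial-reporting design can deliver for a single query; the paper's measure aggregates such reductions over the whole mechanism.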

Author(s):  
Garrett Wood

Belligerents could in principle avoid the ex post costs of conflict by revealing all private information about their violent capabilities and then calculating the odds of success ex ante. Incentives to misrepresent private information for strategic gain, however, can cause miscalculations that lead to war. I argue that some private information can lead to miscalculation not because it is purposefully misrepresented for strategic gain but because it is too decentralized to be easily revealed. The decentralized private information that produces improvised weapons requires a process of discovering suitable local resources and battlefield testing, driven by local military entrepreneurs, that frustrates information revelation. Decentralized private information used to improvise new weapons and capabilities, like those that emerged in Afghanistan and Iraq, shows that fighting can take years, decades, or even an indeterminate amount of time to reveal relevant information about violent capabilities.


2021 ◽  
pp. 107962
Author(s):  
Julio Almansa ◽  
Francesc Salvat-Pujol ◽  
Gloria Díaz-Londoño ◽  
Artur Carnicer ◽  
Antonio M. Lallena ◽  
...  

Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2328
Author(s):  
Mohammed Alzubaidi ◽  
Kazi N. Hasan ◽  
Lasantha Meegahapola ◽  
Mir Toufikur Rahman

This paper presents a comparative analysis of six sampling techniques to identify an efficient and accurate sampling technique for probabilistic voltage stability assessment in large-scale power systems. The six techniques, compared in terms of accuracy and efficiency, are Monte Carlo (MC); three versions of Quasi-Monte Carlo (QMC), i.e., Sobol, Halton, and Latin Hypercube; Markov Chain MC (MCMC); and importance sampling (IS). Each is evaluated for its suitability for probabilistic voltage stability analysis in large-scale uncertain power systems. The coefficient of determination (R²) and root mean square error (RMSE) are calculated to measure the accuracy and efficiency of the sampling techniques relative to each other. All six sampling techniques provide more than 99% accuracy when a large number of wind speed random samples (8760 samples) is produced. In terms of efficiency, on the other hand, the three versions of QMC are the most efficient sampling techniques, providing more than 96% accuracy with only a small number of generated samples (150 samples).
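The sample-efficiency gap the abstract reports can be sketched with a toy comparison. The integrand, the Halton construction, and the sample size below are illustrative assumptions, not the paper's power-system models: QMC points cover the domain more evenly than pseudo-random points, so a QMC estimate typically reaches a given accuracy with fewer samples than crude MC.

```python
import numpy as np

def halton(n, base):
    """First n points of the 1-D van der Corput/Halton sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def estimate(xs, ys):
    """MC/QMC estimate of the integral of f(x, y) = x*y over [0,1]^2 (exact value 0.25)."""
    return float(np.mean(xs * ys))

n = 512
rng = np.random.default_rng(0)

# Crude Monte Carlo: pseudo-random uniform points.
mc_est = estimate(rng.random(n), rng.random(n))

# Quasi-Monte Carlo: 2-D Halton points (coprime bases 2 and 3).
qmc_est = estimate(halton(n, 2), halton(n, 3))

print(abs(mc_est - 0.25), abs(qmc_est - 0.25))
```

Using coprime bases per dimension keeps the 2-D Halton points from aligning on a lattice; the paper's Sobol and Latin Hypercube variants pursue the same low-discrepancy goal by different constructions.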


2020 ◽  
Vol 26 (3) ◽  
pp. 171-176
Author(s):  
Ilya M. Sobol ◽  
Boris V. Shukhman

Abstract: A crude Monte Carlo (MC) method allows one to calculate integrals over a d-dimensional cube. As the number N of integration nodes becomes large, the probable error of the MC method decreases as O(1/√N). The use of quasi-random points instead of random points in the MC algorithm converts it to the quasi-Monte Carlo (QMC) method. The asymptotic error estimate of QMC integration of d-dimensional functions contains a multiplier 1/N. However, the multiplier (ln N)^d is also part of the error estimate, which makes it virtually useless. We have proved that, in the general case, the QMC error estimate cannot be reduced to the factor 1/N alone. However, our numerical experiments show that using quasi-random points of Sobol sequences with N = 2^m for natural m makes the integration error approximately proportional to 1/N. In our numerical experiments d ≤ 15, and we used N ≤ 2^40 points generated by the SOBOLSEQ16384 code published in 2011. In this code, d ≤ 2^14 and N ≤ 2^63.
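The observation that Sobol points with N = 2^m make the error roughly proportional to 1/N can be sketched in one dimension, where the Sobol sequence coincides with the base-2 van der Corput sequence. The integrand exp(x) and the values of m below are illustrative choices, not taken from the paper:

```python
import math

def van_der_corput(n):
    """First n points of the base-2 van der Corput sequence
    (the one-dimensional Sobol sequence): bit-reverse i about the binary point."""
    pts = []
    for i in range(1, n + 1):
        f, x, k = 1.0, 0.0, i
        while k:
            f /= 2.0
            x += f * (k & 1)
            k >>= 1
        pts.append(x)
    return pts

# Integrate f(x) = exp(x) over [0, 1]; the exact value is e - 1.
exact = math.e - 1.0
errors = []
for m in (8, 10, 12):          # N = 2^m, as the abstract recommends
    n = 2 ** m
    est = sum(math.exp(x) for x in van_der_corput(n)) / n
    errors.append(abs(est - exact))

# If error ~ 1/N, quadrupling N should shrink the error roughly fourfold.
print(errors)
```

With N restricted to powers of two the first N points fill the unit interval almost uniformly, which is what produces the near-1/N decay; for N not of the form 2^m the error behaves far less regularly.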

