quantity of interest
Recently Published Documents

TOTAL DOCUMENTS: 43 (FIVE YEARS: 16)
H-INDEX: 7 (FIVE YEARS: 3)
2021 ◽  
pp. 1-15
Author(s):  
Flavien Ganter

Abstract: Forced-choice conjoint experiments have become a standard component of the experimental toolbox in political science and sociology. Yet the literature has largely overlooked the fact that conjoint experiments can be used for two distinct purposes: to uncover respondents' multidimensional preferences, and to estimate the causal effects of some attributes on a profile's selection probability in a multidimensional choice setting. This paper argues that this distinction is both analytically and practically relevant, because the quantity of interest is contingent on the purpose of the study. The vast majority of social scientists relying on conjoint analyses, including most scholars interested in studying preferences, have adopted the average marginal component effect (AMCE) as their main quantity of interest. The paper shows that the AMCE is neither conceptually nor practically suited to exploring respondents' preferences. Not only is it essentially a causal quantity conceptually at odds with the goal of describing patterns of preferences, but it also generally does not identify preferences, mixing them with compositional effects unrelated to preferences. This paper proposes a novel estimand, the average component preference, designed to explore patterns of preferences, and presents a method for estimating it.
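As a point of reference for the distinction drawn above, the following is a minimal sketch of the standard AMCE logic the paper critiques: in a forced-choice conjoint, the AMCE of one attribute level can be estimated as a difference in mean selection probabilities between levels. The data, column names, and effect sizes below are simulated placeholders, not material from the paper, and the sketch does not implement the proposed average component preference.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated forced-choice conjoint data: each row is one profile shown to a
# respondent; "chosen" records whether that profile was selected.
n = 5000
attribute = rng.choice(["level_a", "level_b"], size=n)
# In this toy data, profiles with level_b are chosen slightly more often.
chosen = rng.binomial(1, np.where(attribute == "level_b", 0.55, 0.45))
df = pd.DataFrame({"attribute": attribute, "chosen": chosen})

# AMCE of level_b relative to the baseline level_a: the difference in the
# probability that a profile is selected.
p_b = df.loc[df["attribute"] == "level_b", "chosen"].mean()
p_a = df.loc[df["attribute"] == "level_a", "chosen"].mean()
print(f"Estimated AMCE of level_b vs level_a: {p_b - p_a:.3f}")
```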


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 961
Author(s):  
Mijung Park ◽  
Margarita Vinaroz ◽  
Wittawat Jitkrittum

We develop a novel approximate Bayesian computation (ABC) framework, ABCDP, which produces differentially private (DP) approximate posterior samples. Our framework takes advantage of the sparse vector technique (SVT), widely studied in the differential privacy literature. SVT incurs a privacy cost only when a condition (whether a quantity of interest is above/below a threshold) is met. If the condition is met only sparsely during the repeated queries, SVT can drastically reduce the cumulative privacy loss, unlike the usual case in which every query incurs a privacy loss. In ABC, the quantity of interest is the distance between observed and simulated data, and only when the distance is below a threshold can we take the corresponding prior sample as a posterior sample. Hence, applying SVT to ABC is an organic way to transform an ABC algorithm into a privacy-preserving variant with minimal modification, while yielding posterior samples with a high privacy level. We theoretically analyze the interplay between the noise added for privacy and the accuracy of the posterior samples. We apply ABCDP to several data simulators and show the efficacy of the proposed framework.
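The following is a minimal, illustrative sketch of the general idea described in the abstract: rejection ABC in which the accept/reject comparison is made against a Laplace-noised threshold, in the spirit of SVT. It is not the authors' ABCDP algorithm; the simulator, distance, noise scales, and privacy budget are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=100):
    # Toy data simulator: Gaussian observations with unknown mean theta.
    return rng.normal(theta, 1.0, size=n)

observed = rng.normal(2.0, 1.0, size=100)

epsilon = 1.0                       # assumed privacy budget (placeholder)
threshold = 0.1                     # ABC acceptance threshold on the distance
noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)

posterior_samples = []
for _ in range(2000):
    theta = rng.uniform(-5.0, 5.0)  # draw a candidate from the prior
    distance = abs(simulator(theta).mean() - observed.mean())
    # Each comparison is perturbed, SVT-style; in the abstract's framing only
    # the (sparse) below-threshold acceptances incur a privacy cost.
    if distance + rng.laplace(scale=4.0 / epsilon) < noisy_threshold:
        posterior_samples.append(theta)

print(len(posterior_samples), "approximate posterior samples retained")
```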


2021 ◽  
Vol 47 (2) ◽  
pp. 1-33
Author(s):  
Devan Sohier ◽  
Pablo De Oliveira Castro ◽  
François Févotte ◽  
Bruno Lathuilière ◽  
Eric Petit ◽  
...  

Quantifying errors and losses due to the use of Floating-point (FP) calculations in industrial scientific computing codes is an important part of the Verification, Validation, and Uncertainty Quantification process. Stochastic Arithmetic is one way to model and estimate FP losses of accuracy, and it scales well to large, industrial codes. It exists in different flavors, such as CESTAC or MCA, implemented in various tools such as CADNA, Verificarlo, or Verrou. These methodologies and tools are based on the idea that FP losses of accuracy can be modeled via randomness. Therefore, they share the same need to perform a statistical analysis of program results to estimate the significance of those results. In this article, we propose a framework for performing a solid statistical analysis of Stochastic Arithmetic. This framework unifies all existing definitions of the number of significant digits (CESTAC and MCA), and it also proposes a new quantity of interest: the number of digits contributing to the accuracy of the results. Sound confidence intervals are provided for all estimators, both in the case of normally distributed results and in the general case. The use of this framework is demonstrated by two case studies of industrial codes: Europlexus and code_aster.
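To make the notion of significant digits concrete, here is a toy sketch that estimates the number of significant decimal digits from repeated stochastic-arithmetic samples of a single result, using the common sigma over |mean| magnitude estimator. The samples are simulated here, and the article's actual estimators and confidence intervals are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend these are repeated results of the same FP computation under random
# rounding (e.g., as produced by a tool such as Verificarlo or Verrou);
# here they are simply simulated with a small perturbation.
samples = 1.0 + rng.normal(0.0, 1e-7, size=30)

mu = samples.mean()
sigma = samples.std(ddof=1)

# Significant digits: roughly how many decimal digits of the mean are
# unaffected by the observed spread of the samples.
significant_digits = -np.log10(sigma / abs(mu))
print(f"Estimated significant digits: {significant_digits:.1f}")
```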


2021 ◽  
Vol 18 (1) ◽  
pp. 78-99
Author(s):  
Sulian Wang ◽  
Chen Wang

The present study investigates the quality of quantile judgments for a quantity of interest that follows a lognormal distribution, which is skewed and bounded from below with a long right tail. We conduct controlled experiments in which subjects predict the losses from a future typhoon based on losses from past typhoons. Our experiments find underconfidence in the 50% prediction intervals, primarily driven by overestimation of the 75th percentiles. We further perform exploratory analyses to disentangle sampling errors and judgmental biases in the overall miscalibration. Finally, we show that the correlations of log-transformed judgments between subjects are smaller than is justified by the overlapping information structure, which leads to overconfident aggregate predictions under Bayes' rule if the low correlations are treated as an indicator of independent information.
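As an illustration of the kind of calibration check described above, the sketch below computes the empirical coverage of subjective 50% prediction intervals (25th and 75th percentile judgments) against realized lognormal losses; coverage well above 50% indicates underconfident, overly wide intervals. All numbers are simulated assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects = 200

# Realized losses follow a lognormal distribution.
realized = rng.lognormal(mean=3.0, sigma=1.0, size=n_subjects)

# Simulated judgments: 25th percentiles are roughly right, while 75th
# percentiles are inflated, widening the intervals (underconfidence).
q25 = np.exp(3.0 - 0.674) * rng.lognormal(0.0, 0.2, size=n_subjects)
q75 = np.exp(3.0 + 0.674) * 1.8 * rng.lognormal(0.0, 0.2, size=n_subjects)

coverage = np.mean((realized >= q25) & (realized <= q75))
print(f"Empirical coverage of the 50% intervals: {coverage:.2f}")
# Coverage above 0.50 means the stated 50% intervals are too wide.
```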


2021 ◽  
Author(s):  
Jennifer Carpenter ◽  
Fangzhou Lu ◽  
Robert Whitelaw

2021 ◽  
Author(s):  
Jennifer N. Carpenter ◽  
Fangzhou Lu ◽  
Robert F. Whitelaw

2021 ◽  
pp. 1-1
Author(s):  
Zuqi Tang ◽  
Suyang Lou ◽  
Abdelkader Benabou ◽  
Emmanuel Creuse ◽  
Serge Nicaise ◽  
...  
