Power Analysis for Conjoint Experiments

2020 ◽  
Author(s):  
Julian Schuessler ◽  
Markus Freitag

Conjoint experiments aiming to estimate average marginal component effects and related quantities have become a standard tool for social scientists. However, existing solutions for power analyses to find appropriate sample sizes for such studies have various shortcomings, and accordingly explicit sample size planning is rare. Based on recent advances in statistical inference for factorial experiments, we derive simple yet generally applicable formulae to calculate power and minimum required sample sizes for testing average marginal component effects (AMCEs), conditional AMCEs, and interaction effects in forced-choice conjoint experiments. The only inputs needed are expected effect sizes. Our approach only assumes random sampling of individuals or randomization of profiles and avoids any parametric assumption. Furthermore, we show that clustering standard errors on individuals is not necessary and does not affect power. Our results caution against designing conjoint experiments with small sample sizes, especially for detecting heterogeneity and interactions. We provide an R package that implements our approach.
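The paper's closed-form expressions are not reproduced in this listing, but the normal-approximation logic behind such a calculation can be sketched. The function below is an illustration, not the authors' exact formula: it bounds the Bernoulli variance of the forced-choice outcome by 0.25 (the worst case, at a choice probability of 0.5) and applies the standard two-group difference-in-proportions sample size formula to an AMCE of a given size.

```python
import math
from statistics import NormalDist

def conjoint_n_per_level(amce, alpha=0.05, power=0.80):
    """Conservative number of profiles per attribute level needed to
    detect an AMCE of the given size.

    Treats the forced-choice outcome as Bernoulli with variance bounded
    by 0.25 in each group; mirrors the normal-approximation logic of
    power formulae for proportions, not the paper's exact derivation.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    # two-group difference in proportions, worst-case variance 0.25 each
    return math.ceil((z_alpha + z_beta) ** 2 * (0.25 + 0.25) / amce ** 2)

print(conjoint_n_per_level(0.10))  # 393 profiles per level
print(conjoint_n_per_level(0.03))  # 4361: small effects are expensive
```

Halving the expected AMCE roughly quadruples the required number of profiles, which is why small conjoint studies are underpowered for the subgroup and interaction effects the abstract warns about.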

2009 ◽  
Vol 31 (4) ◽  
pp. 500-506 ◽  
Author(s):  
Robert Slavin ◽  
Dewi Smith

Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of the Best Evidence Encyclopedia. As predicted, there was a significant negative correlation between sample size and effect size. The differences in effect sizes between small and large experiments were much greater than those between randomized and matched experiments. Explanations for the effects of sample size on effect size are discussed.


2014 ◽  
Vol 115 (1) ◽  
pp. 276-278 ◽  
Author(s):  
Derrick C. McLean ◽  
Benjamin R. Thomas

A wide literature on the unsuccessful treatment of writer's block has emerged since the early 1970s. Findings within this literature seem to support the generalizability of this procedure; however, small sample sizes may limit this interpretation. This meta-analysis independently analyzed effect sizes for “self-treatments” and “group-treatments,” using the number of words in the body of each publication as an indication of a failure to treat writer's block. Results suggest that group-treatments tend to be slightly more unsuccessful than self-treatments.


2002 ◽  
Vol 95 (3) ◽  
pp. 837-842 ◽  
Author(s):  
M. T. Bradley ◽  
D. Smith ◽  
G. Stoica

A Monte Carlo study was conducted with true effect sizes in deviation units ranging from 0 to 2 and a variety of sample sizes. The purpose was to assess the amount of bias created by considering only effect sizes that passed a statistical cut-off criterion of α = .05. The deviation values obtained at the .05 level, jointly determined by the chosen effect sizes and sample sizes, are presented. This table is useful when summarizing sets of studies to judge whether published results reflect an accurate appraisal of an underlying effect or the distorted estimate expected when significant studies are published and nonsignificant results are not. The table shows that the magnitudes of error are substantial with small sample sizes and inherently small effect sizes. Thus, reviews based on published literature can be misleading, especially if true effect sizes are close to zero. A researcher should be particularly cautious of small samples showing large effect sizes when larger samples indicate diminishing, smaller effects.
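The bias mechanism the table quantifies is easy to simulate. The sketch below is a simplified stand-in for the article's Monte Carlo design (using a normal approximation to the two-sample test rather than the exact sampling distribution): it retains only "significant" studies and averages their observed effect sizes.

```python
import math
import random

def mean_significant_effect(true_d, n_per_group, n_studies=20000, seed=1):
    """Average observed effect size among studies that pass p < .05.

    Each simulated study estimates a standardized mean difference with
    sampling error sqrt(2/n); only estimates exceeding the two-sided
    normal cutoff are 'published'.
    """
    rng = random.Random(seed)
    se = math.sqrt(2 / n_per_group)
    cutoff = 1.959964 * se  # two-sided z criterion at alpha = .05
    kept = [abs(d) for d in (rng.gauss(true_d, se) for _ in range(n_studies))
            if abs(d) > cutoff]
    return sum(kept) / len(kept)

# True effect 0.1: the significant-only literature overstates it badly
# at n = 20 per group, far less so at n = 500 per group.
print(round(mean_significant_effect(0.1, 20), 2))   # about 0.75
print(round(mean_significant_effect(0.1, 500), 2))  # about 0.17
```

With 20 per group, only estimates several times larger than the true effect can clear the significance bar, so the surviving studies are grossly inflated, exactly the pattern the article's table documents.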


2018 ◽  
Author(s):  
Christopher Chabris ◽  
Patrick Ryan Heck ◽  
Jaclyn Mandart ◽  
Daniel Jacob Benjamin ◽  
Daniel J. Simons

Williams and Bargh (2008) reported that holding a hot cup of coffee caused participants to judge a person’s personality as warmer, and that holding a therapeutic heat pad caused participants to choose rewards for other people rather than for themselves. These experiments featured large effects (r = .28 and .31), small sample sizes (41 and 53 participants), and barely statistically significant results. We attempted to replicate both experiments in field settings with more than triple the sample sizes (128 and 177) and double-blind procedures, but found near-zero effects (r = –.03 and .02). In both cases, Bayesian analyses suggest there is substantially more evidence for the null hypothesis of no effect than for the original physical warmth priming hypothesis.
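The Bayesian evidence claim can be illustrated with the BIC approximation to the Bayes factor, a common shortcut and not necessarily the authors' exact analysis. Plugging in the near-zero replication correlations and sample sizes reported above yields Bayes factors around 10 in favor of the null:

```python
import math

def bf01_correlation(r, n):
    """Approximate Bayes factor in favor of the null (zero correlation).

    Uses the BIC approximation: compare an intercept-only regression
    with one adding the predictor, whose residual variance shrinks by
    the factor (1 - r^2). An illustration only, not the authors' exact
    Bayesian analysis.
    """
    delta_bic = n * math.log(1 - r ** 2) + math.log(n)
    return math.exp(delta_bic / 2)

print(round(bf01_correlation(-0.03, 128), 1))  # 10.7: data favor the null
print(round(bf01_correlation(0.02, 177), 1))   # 12.8
```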


2014 ◽  
Vol 17 (4) ◽  
Author(s):  
Raymond K. Walters ◽  
Charles Laurin ◽  
Gitta H. Lubke

Epistasis is a growing area of research in genome-wide studies, but the differences between alternative definitions of epistasis remain a source of confusion for many researchers. One problem is that models for epistasis are presented in a number of formats, some of which have difficult-to-interpret parameters. In addition, the relation between the different models is rarely explained. Existing software for testing epistatic interactions between single-nucleotide polymorphisms (SNPs) does not provide the flexibility to compare the available model parameterizations. For that reason we have developed an R package for investigating epistatic and penetrance models, EpiPen, to aid users who wish to easily compare, interpret, and utilize models for two-locus epistatic interactions. EpiPen facilitates research on SNP-SNP interactions by allowing the R user to easily convert between common parametric forms for two-locus interactions, generate data for simulation studies, and perform power analyses for the selected model with a continuous or dichotomous phenotype. The usefulness of the package for model interpretation and power analysis is illustrated using data on rheumatoid arthritis.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

In clinical research, there is growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that is able to deal with small samples. We evaluated the performance of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in the two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
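G-computation itself is compact enough to sketch. The toy implementation below substitutes a plain logistic regression for the super learner used in the paper (an illustrative assumption), fits the outcome model, and averages predictions over the two counterfactual worlds to obtain a marginal risk difference:

```python
import math
import random

def fit_logistic(X, y, steps=1500, lr=1.0):
    """Plain gradient-ascent logistic regression (intercept first)."""
    n = len(X)
    beta = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        grad = [0.0] * len(beta)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            resid = yi - 1 / (1 + math.exp(-z))
            grad[0] += resid
            for j, x in enumerate(xi):
                grad[j + 1] += resid * x
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

def g_computation(X, y, exposure_col=0):
    """Marginal risk difference via G-computation.

    Fit an outcome model, predict every subject's outcome probability
    in the two counterfactual worlds (everyone exposed vs. no one
    exposed), and average the difference. The paper plugs a super
    learner in as the outcome model; logistic regression stands in here.
    """
    beta = fit_logistic(X, y)
    def prob(xi, a):
        row = list(xi)
        row[exposure_col] = a
        z = beta[0] + sum(b * x for b, x in zip(beta[1:], row))
        return 1 / (1 + math.exp(-z))
    return sum(prob(xi, 1) - prob(xi, 0) for xi in X) / len(X)

# Simulated data: a confounder c drives both exposure a and outcome y.
rng = random.Random(0)
X, y = [], []
for _ in range(1000):
    c = rng.gauss(0, 1)
    a = 1 if rng.random() < 1 / (1 + math.exp(-c)) else 0
    p_y = 1 / (1 + math.exp(-(-1 + a + c)))
    X.append([a, c])
    y.append(1 if rng.random() < p_y else 0)
print(round(g_computation(X, y), 2))  # positive marginal risk difference
```

Because the predictions condition on the confounder before being averaged, the resulting contrast is a causal risk difference under the usual identification assumptions, which is what the paper's super learner variant estimates with less model-specification risk.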


2013 ◽  
Vol 113 (1) ◽  
pp. 221-224 ◽  
Author(s):  
David R. Johnson ◽  
Lauren K. Bachan

In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
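The reply's point can be made concrete with a standard Fisher z power calculation (an illustration, not the authors' own computation): with n = 58, only rather large correlations are detectable at conventional thresholds.

```python
import math
from statistics import NormalDist

def min_detectable_r(n, alpha=0.05, power=0.80):
    """Smallest correlation reliably detectable with n observations.

    Uses the Fisher z-transformation, whose standard error is
    1 / sqrt(n - 3); a textbook approximation, not the original
    authors' calculation.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.tanh(z / math.sqrt(n - 3))

print(round(min_detectable_r(58), 2))   # 0.36: only large effects
print(round(min_detectable_r(400), 2))  # 0.14
```

Any true group difference smaller than r ≈ .36 would most likely have gone undetected in the criticized study, so a null result there says little about whether arranged and love-based marriages actually differ.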

