Much ado about something: a response to “COVID-19: underpowered randomised trials, or no randomised trials?”

Trials ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Noah A. Haber ◽  
Sarah E. Wieten ◽  
Emily R. Smith ◽  
David Nunan

Abstract. Non-pharmaceutical interventions (NPI) for infectious diseases such as COVID-19 are particularly challenging given the complexities of what is both practical and ethical to randomize. We are often faced with the difficult decision between having weak trials and not having a trial at all. In a recent article, Dr. Atle Fretheim argues, in the context of the DANMASK-19 trial, that statistically underpowered studies are still valuable, particularly in conjunction with other similar studies in meta-analysis, asking “Surely, some trial evidence must be better than no trial evidence?” However, informative trials are not always feasible, and feasible trials are not always informative. In some cases, even a well-conducted but weakly designed and/or underpowered trial such as DANMASK-19 may be uninformative or worse, both individually and in a body of literature. Meta-analysis, for example, can only resolve issues of statistical power if there is a reasonable expectation of compatible, well-designed trials. Uninformative designs may also invite misinformation. Here, we make the case that, when considering informativeness, ethics, and opportunity costs in addition to statistical power, “nothing” is often the better choice.
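
The power arithmetic behind this argument can be made concrete. Below is a minimal Python sketch of a two-sided power calculation for a two-arm trial with a binary outcome; the event rates and sample sizes are hypothetical placeholders chosen for illustration, not the DANMASK-19 design parameters.

```python
# Minimal sketch: approximate power of a two-sided z-test comparing two
# proportions. All numbers are hypothetical, not DANMASK-19 values.
from scipy.stats import norm

def power_two_proportions(p1: float, p2: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided z-test for a difference in proportions."""
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    z_effect = abs(p1 - p2) / se
    # Probability of rejecting H0 given the assumed true difference.
    return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

# Hypothetical: 2% control-arm risk, a 25% relative reduction, 2,500 per arm.
print(round(power_two_proportions(0.02, 0.015, 2_500), 2))  # ~0.27: badly underpowered
```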

Author(s):  
Luke I. Rowe ◽  
John Hattie ◽  
Robert Hester

Abstract. Collective intelligence (CI) is said to manifest in a group’s domain-general mental ability. It can be measured across a battery of group IQ tests and statistically reduced to a latent factor called the “c-factor.” Advocates have found that the c-factor predicts group performance better than individual IQ. We test this claim by meta-analyzing correlations between the c-factor and nine group performance criterion tasks generated by eight independent samples (N = 857 groups). Results indicated a moderate correlation of r = .26 (95% CI [.10, .40]). All but four studies, comprising five independent samples (N = 366 groups), failed to control for the intelligence of individual members using individual IQ scores or their statistically reduced equivalent (i.e., the g-factor). A meta-analysis of this subset of studies found that the average IQ of the groups’ members had little to no correlation with group performance (r = .06, 95% CI [−.08, .20]). Around 80% of studies did not have enough statistical power to reliably detect correlations between the primary predictor variables and the criterion tasks. Though some of our findings are consistent with claims that a general factor of group performance may exist and relate positively to group performance, limitations suggest alternative explanations cannot be dismissed. We caution against prematurely embracing notions of the c-factor unless it can be independently and robustly replicated and demonstrated to be incrementally valid beyond the g-factor in group performance contexts.
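
For readers unfamiliar with how such correlations are pooled, the following is a minimal Python sketch of a random-effects meta-analysis of correlations using the Fisher z transform and the DerSimonian-Laird estimator, a standard approach for this kind of synthesis. The r and n inputs are invented for illustration and are not the paper's data.

```python
import numpy as np

def pool_correlations(rs, ns, z_crit: float = 1.96):
    """Random-effects (DerSimonian-Laird) pooling of correlations via Fisher z."""
    rs, ns = np.asarray(rs, dtype=float), np.asarray(ns, dtype=float)
    zs = np.arctanh(rs)                 # Fisher z transform of each r
    v = 1.0 / (ns - 3.0)                # sampling variance of each z
    w = 1.0 / v                         # fixed-effect weights
    z_fixed = np.sum(w * zs) / np.sum(w)
    q = np.sum(w * (zs - z_fixed) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)          # between-study variance
    w_re = 1.0 / (v + tau2)                           # random-effects weights
    z_re = np.sum(w_re * zs) / np.sum(w_re)
    se = (1.0 / np.sum(w_re)) ** 0.5
    # Back-transform the pooled z and its 95% CI to the r metric.
    return np.tanh([z_re, z_re - z_crit * se, z_re + z_crit * se])

# Invented inputs: four studies' correlations and group sample sizes.
print(pool_correlations([0.15, 0.30, 0.40, 0.10], [60, 120, 80, 100]).round(3))
```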


2014 ◽  
Vol 13 (3) ◽  
pp. 123-133 ◽  
Author(s):  
Wiebke Goertz ◽  
Ute R. Hülsheger ◽  
Günter W. Maier

General mental ability (GMA) has long been considered one of the best predictors of training success and considerably better than specific cognitive abilities (SCAs). Recently, however, researchers have provided evidence that SCAs may be of similar importance for training success, a finding supporting personnel selection based on job-related requirements. The present meta-analysis therefore seeks to assess validities of SCAs for training success in various occupations in a sample of German primary studies. Our meta-analysis (k = 72) revealed operational validities between ρ = .18 and ρ = .26 for different SCAs. Furthermore, results varied by occupational category, supporting a job-specific benefit of SCAs.
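
A note on terminology: “operational validity” conventionally denotes an observed validity coefficient corrected for criterion unreliability (the predictor is used as-is in selection, so it is not corrected). The sketch below illustrates that correction under this conventional Hunter-Schmidt-style definition, with hypothetical numbers rather than values from the meta-analysis above.

```python
# Minimal sketch, assuming the conventional Hunter-Schmidt-style definition:
# operational validity = observed validity corrected for criterion unreliability
# (range restriction, often also corrected, is omitted here for simplicity).
def operational_validity(r_observed: float, criterion_reliability: float) -> float:
    """Disattenuate an observed validity coefficient for criterion unreliability."""
    return r_observed / criterion_reliability ** 0.5

# Hypothetical: observed r = .20, criterion reliability = .80.
print(round(operational_validity(0.20, 0.80), 3))  # 0.224
```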


2020 ◽  
Vol 228 (1) ◽  
pp. 43-49 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this relevant information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display, using color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant, but low-powered, studies might be seen as less credible and as more likely to be affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display for critically examining and presenting study-level power in the context of meta-analysis.
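
The paper provides its implementation as a tailored R function; purely as an illustration of the computation underlying the display, here is a Python sketch that derives each study's two-sided power to detect an assumed true effect from its standard error, together with the inverse mapping that would place the color-coded power-region boundaries on the funnel's precision axis. The effect sizes and standard errors are invented.

```python
import numpy as np
from scipy.stats import norm

def study_power(se, delta: float, alpha: float = 0.05):
    """Two-sided z-test power of each study to detect an assumed true effect delta."""
    z_crit = norm.ppf(1 - alpha / 2)
    se = np.asarray(se, dtype=float)
    return norm.cdf(delta / se - z_crit) + norm.cdf(-delta / se - z_crit)

def se_for_power(power: float, delta: float, alpha: float = 0.05) -> float:
    """Standard error at which power reaches a target level (ignoring the
    negligible far tail); these SEs mark the power-region boundaries."""
    z_crit = norm.ppf(1 - alpha / 2)
    return delta / (z_crit + norm.ppf(power))

# Invented illustrative data: five studies' standard errors.
ses = np.array([0.20, 0.15, 0.25, 0.08, 0.30])
print(study_power(ses, delta=0.30).round(2))
print(round(se_for_power(0.80, delta=0.30), 3))  # SE boundary of the 80%-power region
```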


2014 ◽  
Vol 45 (3) ◽  
pp. 239-245 ◽  
Author(s):  
Robert J. Calin-Jageman ◽  
Tracy L. Caldwell

A recent series of experiments suggests that fostering superstitions can substantially improve performance on a variety of motor and cognitive tasks (Damisch, Stoberock, & Mussweiler, 2010). We conducted two high-powered and precise replications of one of these experiments, examining whether telling participants they had a lucky golf ball could improve their performance on a 10-shot golf task relative to controls. We found that the effect of superstition on performance is elusive: participants told they had a lucky ball performed almost identically to controls. Our failure to replicate the target study was not due to lack of impact, lack of statistical power, differences in task difficulty, or differences in participant belief in luck. A meta-analysis indicates significant heterogeneity in the effect of superstition on performance. This could be due to an unknown moderator, but no effect was observed among the studies with the strongest research designs (e.g., high power, a priori sampling plan).
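
For context on the heterogeneity claim, the following is a minimal Python sketch of Cochran's Q and the I² statistic, the usual measures of between-study heterogeneity in such a meta-analysis. The inputs are invented illustrative numbers, not the replication data.

```python
import numpy as np
from scipy.stats import chi2

def heterogeneity(effects, ses):
    """Cochran's Q, I^2 (in %), and the chi-square p-value for H0: homogeneity."""
    effects, ses = np.asarray(effects, dtype=float), np.asarray(ses, dtype=float)
    w = 1.0 / ses ** 2                               # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)         # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)          # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2, chi2.sf(q, df)

# Invented inputs: four studies' effect sizes and standard errors.
q, i2, p = heterogeneity([0.83, 0.05, -0.10, 0.60], [0.25, 0.20, 0.22, 0.30])
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%, p = {p:.3f}")
```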


2006 ◽  
Author(s):  
Guy Cafri ◽  
Michael T. Brannick ◽  
Jeffrey Kromrey
