Conceptual and Practical Implications for Rehabilitation Research: Effect Size Estimates, Confidence Intervals, and Power

2007 ◽  
Vol 21 (2) ◽  
pp. 87-100 ◽  
Author(s):  
James M. Ferrin ◽  
Malachy Bishop ◽  
Timothy N. Tansey ◽  
Michael Frain ◽  
Elizabeth A. Swett ◽  
...  
2005 ◽  
Vol 35 (1) ◽  
pp. 1-20 ◽  
Author(s):  
G. K. Huysamen

Criticisms of traditional null hypothesis significance testing (NHST) became more pronounced during the 1960s and reached a climax during the past decade. Among other shortcomings, NHST says nothing about the size of the population parameter of interest, and its result is influenced by sample size. Estimation of confidence intervals around point estimates of the relevant parameters, model fitting and Bayesian statistics represent some major departures from conventional NHST. Testing non-nil null hypotheses, determining the optimal sample size to uncover only substantively meaningful effect sizes, and reporting effect-size estimates may be regarded as minor extensions of NHST. Although there seems to be growing support for the estimation of confidence intervals around point estimates of the relevant parameters, it is unlikely that NHST-based procedures will disappear in the near future. In the meantime, it is widely accepted that effect-size estimates should be reported as a mandatory adjunct to conventional NHST results.
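The abstract's point that NHST says nothing about the size of the parameter, while its result is driven by sample size, can be illustrated with a minimal sketch (all summary statistics below are hypothetical): the same Cohen's d is estimated far more precisely at a larger sample size, which the effect size and its confidence interval make explicit.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / sp

def d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for d via the large-sample standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Hypothetical summary statistics: identical group means and SDs,
# observed at two different sample sizes.
d_small = cohens_d(10.5, 10.0, 1.0, 1.0, 20, 20)
d_large = cohens_d(10.5, 10.0, 1.0, 1.0, 200, 200)
print(d_small, d_ci(d_small, 20, 20))    # same d, wide interval
print(d_large, d_ci(d_large, 200, 200))  # same d, much tighter interval
```

The point estimate is identical in both cases; only the precision differs, which a bare p-value would conflate.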


2017 ◽  
Author(s):  
Gjalt-Jorn Ygram Peters ◽  
Rik Crutzen

Although basing conclusions on confidence intervals for effect size estimates is preferable to relying on null hypothesis significance testing alone, confidence intervals in psychology are typically very wide. One reason may be a lack of easily applicable methods for planning studies to achieve sufficiently tight confidence intervals. This paper presents tables and freely accessible tools to facilitate planning studies for the desired accuracy in parameter estimation for a common effect size (Cohen’s d). In addition, the importance of such accuracy is demonstrated using data from the Reproducibility Project: Psychology (RPP). It is shown that the sampling distribution of Cohen’s d is very wide unless sample sizes are considerably larger than what is common in psychology studies. This means that effect size estimates can vary substantially from sample to sample, even with perfect replications. The RPP replications’ confidence intervals for Cohen’s d have widths of around 1 standard deviation (95% confidence interval from 1.05 to 1.39). Therefore, point estimates obtained in replications are likely to vary substantially from the estimates from earlier studies. The implication is that researchers in psychology (and funders) will have to get used to conducting considerably larger studies if they are to build a strong evidence base.
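The kind of planning for accuracy the abstract describes can be sketched with the large-sample standard error of Cohen's d: search for the smallest per-group n whose expected 95% CI width meets a target. The approximation and the target width below are illustrative assumptions, not the paper's published tables.

```python
import math

def ci_width(d, n_per_group, z=1.96):
    """Approximate expected 95% CI width for Cohen's d with two groups of size n."""
    se = math.sqrt(2 / n_per_group + d**2 / (4 * n_per_group))
    return 2 * z * se

def n_for_width(d, target_width):
    """Smallest per-group n whose approximate CI width is at most the target."""
    n = 2
    while ci_width(d, n) > target_width:
        n += 1
    return n

# Hypothetical target: a CI no wider than 0.4 around an assumed d of 0.5
print(n_for_width(0.5, 0.4))  # → 199 per group
```

Roughly 200 participants per group are needed for a CI of width 0.4, consistent with the abstract's claim that common sample sizes leave the sampling distribution of d very wide.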


2021 ◽  
pp. 152483802110216
Author(s):  
Brooke N. Lombardi ◽  
Todd M. Jensen ◽  
Anna B. Parisi ◽  
Melissa Jenkins ◽  
Sarah E. Bledsoe

Background: The association between a lifetime history of sexual victimization and the well-being of women during the perinatal period has received increasing attention. However, research investigating this relationship has yet to be systematically reviewed or quantitatively synthesized. Aim: This systematic review and meta-analysis aims to calculate the pooled effect size estimate of the statistical association between a lifetime history of sexual victimization and perinatal depression (PND). Method: Four bibliographic databases were systematically searched, and reference harvesting was conducted to identify peer-reviewed articles that empirically examined associations between a lifetime history of sexual victimization and PND. A random effects model was used to ascertain an overall pooled effect size estimate in the form of an odds ratio and corresponding 95% confidence interval (CI). Subgroup analyses were also conducted to assess whether particular study features and sample characteristics (e.g., race and ethnicity) influenced the magnitude of effect size estimates. Results: This review included 36 studies, with 45 effect size estimates available for meta-analysis. Women with a lifetime history of sexual victimization had 51% greater odds of experiencing PND relative to women with no history of sexual victimization (OR = 1.51, 95% CI [1.35, 1.67]). Effect size estimates varied considerably according to the PND instrument used in each study and the racial/ethnic composition of each sample. Conclusion: Findings provide compelling evidence for an association between a lifetime history of sexual victimization and PND. Future research should focus on screening practices and interventions that identify and support survivors of sexual victimization during the perinatal period.
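The random effects pooling described here can be sketched with the DerSimonian–Laird method on the log odds ratio scale. The study-level inputs below are hypothetical, not the review's data.

```python
import math

def random_effects_pool(log_ors, variances):
    """DerSimonian-Laird random-effects pooled odds ratio with 95% CI."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    # Q statistic and method-of-moments estimate of between-study variance
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with between-study variance added to each study's variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_ors)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical study-level log odds ratios and sampling variances
or_, lo, hi = random_effects_pool([0.35, 0.50, 0.30, 0.55],
                                  [0.02, 0.04, 0.03, 0.05])
print(or_, lo, hi)
```

A pooled OR above 1 with a CI excluding 1, as in the review's OR = 1.51 [1.35, 1.67], indicates elevated odds in the exposed group.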


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592199203
Author(s):  
Don van den Bergh ◽  
Julia M. Haaf ◽  
Alexander Ly ◽  
Jeffrey N. Rouder ◽  
Eric-Jan Wagenmakers

An increasingly popular approach to statistical inference is to focus on the estimation of effect size. Yet this approach is implicitly based on the assumption that there is an effect while ignoring the null hypothesis that the effect is absent. We demonstrate how this common null-hypothesis neglect may result in effect size estimates that are overly optimistic. As an alternative to the current approach, a spike-and-slab model explicitly incorporates the plausibility of the null hypothesis into the estimation process. We illustrate the implications of this approach and provide an empirical example.
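The spike-and-slab idea can be sketched in a few lines for a single normally distributed effect size estimate: the reported value is the model-averaged posterior mean, which shrinks toward zero as the null hypothesis gains posterior probability. The prior settings and observed values below are illustrative assumptions, not the paper's model.

```python
import math

def shrunken_estimate(d_obs, se, prior_sd=1.0, prior_h1=0.5):
    """Model-averaged effect estimate under a spike-and-slab prior:
    a point mass at zero (spike) mixed with a Normal(0, prior_sd^2) slab."""
    # Marginal likelihood under H0 (effect exactly zero)
    m0 = math.exp(-d_obs**2 / (2 * se**2)) / math.sqrt(2 * math.pi * se**2)
    # Marginal likelihood under H1 (effect drawn from the slab)
    v1 = se**2 + prior_sd**2
    m1 = math.exp(-d_obs**2 / (2 * v1)) / math.sqrt(2 * math.pi * v1)
    p_h1 = prior_h1 * m1 / (prior_h1 * m1 + (1 - prior_h1) * m0)
    # Conjugate posterior mean under H1, then average over H0 (mean 0) and H1
    post_mean_h1 = d_obs * prior_sd**2 / v1
    return p_h1 * post_mean_h1

# A noisy observed effect is pulled well below its raw value
est = shrunken_estimate(d_obs=0.4, se=0.3)
print(est)
```

Estimation that conditions on H1 alone would report something close to the raw 0.4; weighting by the plausibility of the null yields a markedly smaller, less optimistic estimate.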


2012 ◽  
Vol 41 (5) ◽  
pp. 1376-1382 ◽  
Author(s):  
Gisela Orozco ◽  
John PA Ioannidis ◽  
Andrew Morris ◽  
Eleftheria Zeggini

2013 ◽  
Vol 82 (3) ◽  
pp. 358-374 ◽  
Author(s):  
Maaike Ugille ◽  
Mariola Moeyaert ◽  
S. Natasha Beretvas ◽  
John M. Ferron ◽  
Wim Van den Noortgate

Circulation ◽  
2007 ◽  
Vol 116 (suppl_16) ◽  
Author(s):  
George A Diamond ◽  
Sanjay Kaul

Background A highly publicized meta-analysis of 42 clinical trials comprising 27,844 diabetics ignited a firestorm of controversy by charging that treatment with rosiglitazone was associated with a “…worrisome…” 43% greater risk of myocardial infarction (p = 0.03) and a 64% greater risk of cardiovascular death (p = 0.06). Objective The investigators excluded 4 trials from the infarction analysis and 19 trials from the mortality analysis in which no events were observed. We sought to determine whether these exclusions biased the results. Methods We compared the index study to a Bayesian meta-analysis of the entire 42 trials (using the odds ratio as the measure of effect size) and to fixed-effects and random-effects analyses with and without a continuity correction that adjusts for values of zero. Results The odds ratios and confidence intervals for the analyses are summarized in the Table. Odds ratios for infarction ranged from 1.43 to 1.22 and for death from 1.64 to 1.13. Corrected models resulted in substantially smaller odds ratios and narrower confidence intervals than did uncorrected models. Although corrected risks remain elevated, none are statistically significant (p < 0.05). Conclusions Given the fragility of the effect sizes and confidence intervals, the charge that rosiglitazone increases the risk of adverse events is not supported by these additional analyses. The exaggerated values observed in the index study are likely the result of excluding the zero-event trials from analysis. Continuity adjustments mitigate this error and provide more consistent and reliable assessments of true effect size. Transparent sensitivity analyses should therefore be performed over a realistic range of the operative assumptions to verify the stability of such assessments, especially when outcome events are rare. Given the relatively wide confidence intervals, additional data will be required to adjudicate these inconclusive results.
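The continuity correction at issue can be sketched for a single 2×2 table (the trial counts below are hypothetical): without a correction, a zero-event trial has an undefined odds ratio and is silently dropped; with the standard 0.5 added to each cell, the trial still contributes information.

```python
def odds_ratio(a, b, c, d, correction=0.5):
    """Odds ratio for a 2x2 table (events/non-events in treatment and control).
    If any cell is zero, add `correction` to every cell before computing,
    so zero-event trials need not be excluded from a meta-analysis."""
    if 0 in (a, b, c, d):
        a, b, c, d = (a + correction, b + correction,
                      c + correction, d + correction)
    return (a * d) / (b * c)

# Hypothetical zero-event trial: 0/100 events on treatment, 2/100 on control.
# Uncorrected, this trial would be dropped; corrected, it pulls the pooled
# estimate toward benefit rather than being ignored.
print(odds_ratio(0, 100, 2, 98))
```

Excluding such trials removes exactly the studies in which the treatment looked safest, which is the bias the abstract attributes to the index study.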


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, yet they may yield different or sometimes discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results of multiple meta-analyses, especially when their results conflict. In this paper, we first introduce a method to synthesize meta-analytic results when multiple meta-analyses use the same type of summary effect estimate. When meta-analyses use different types of effect sizes, their results cannot be directly combined. We propose a two-step frequentist procedure that first converts the effect size estimates to the same metric and then summarizes them with a weighted mean estimate. Our proposed method offers several advantages over the existing methods of Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all individual studies from the multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
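The two-step procedure can be sketched for one common case: converting a Cohen's d to a log odds ratio via the logistic-distribution factor π/√3 before inverse-variance weighting. The input estimates below are hypothetical, and this conversion is a standard textbook formula rather than the paper's full method.

```python
import math

def d_to_log_or(d, var_d):
    """Convert Cohen's d (and its variance) to the log odds ratio metric
    using the logistic-distribution conversion factor pi / sqrt(3)."""
    factor = math.pi / math.sqrt(3)
    return d * factor, var_d * factor**2

def weighted_mean(estimates, variances):
    """Inverse-variance weighted mean of estimates on a common metric."""
    w = [1 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, estimates)) / sum(w)

# Hypothetical inputs: one meta-analysis reports log OR = 0.55 (var 0.04),
# another reports d = 0.30 (var 0.01). Step 1: convert; step 2: pool.
log_or2, var2 = d_to_log_or(0.30, 0.01)
pooled = weighted_mean([0.55, log_or2], [0.04, var2])
print(pooled)
```

The pooled value lands between the two converted estimates, weighted toward the more precise one.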


2005 ◽  
Vol 62 (12) ◽  
pp. 2716-2726 ◽  
Author(s):  
Michael J Bradford ◽  
Josh Korman ◽  
Paul S Higgins

There is considerable uncertainty about the effectiveness of fish habitat restoration programs, and reliable monitoring programs are needed to evaluate them. Statistical power analysis based on traditional hypothesis tests is usually used for monitoring program design, but here we argue that effect size estimates and their associated confidence intervals are more informative because results can be compared with both the null hypothesis of no effect and effect sizes of interest, such as restoration goals. We used a stochastic simulation model to compare alternative monitoring strategies for a habitat alteration that would change the productivity and capacity of a coho salmon (Oncorhynchus kisutch) producing stream. Estimates of the effect size using a freshwater stock–recruit model were more precise than those from monitoring the abundance of either spawners or smolts. Less-than-ideal monitoring programs can produce ambiguous results: cases in which the confidence interval includes both the null hypothesis and the effect size of interest. Our model is a useful planning tool because it allows the evaluation of the utility of different types of monitoring data, which should stimulate discussion on how the results will ultimately inform decision-making.
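The notion of an ambiguous result can be sketched with a toy simulation (the effect size, variability, and monitoring effort below are illustrative assumptions, not the paper's stock–recruit model): a result is ambiguous when the 95% CI contains both the null value and the restoration target.

```python
import math
import random
import statistics

random.seed(1)

def ambiguous(effect, sd, n, target):
    """Simulate n effect measurements; report whether the 95% CI for the
    mean is ambiguous, i.e. contains both 0 (no effect) and the target."""
    obs = [random.gauss(effect, sd) for _ in range(n)]
    m = statistics.mean(obs)
    se = statistics.stdev(obs) / math.sqrt(n)
    lo, hi = m - 1.96 * se, m + 1.96 * se
    return lo < 0 < hi and lo < target < hi

# Fraction of ambiguous outcomes under low vs. high monitoring effort
trials = 2000
ambiguous_small = sum(ambiguous(0.2, 0.5, 10, 0.2) for _ in range(trials)) / trials
ambiguous_large = sum(ambiguous(0.2, 0.5, 200, 0.2) for _ in range(trials)) / trials
print(ambiguous_small, ambiguous_large)
```

With little monitoring effort, most simulated intervals span both 0 and the target, so the data cannot distinguish "no effect" from "goal achieved"; with more effort the ambiguous fraction collapses.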

