Accuracy and Precision of Fixed and Random Effects in Meta-Analyses of Randomized Control Trials for Continuous Outcomes

2022 ◽  
Author(s):  
Timo Gnambs ◽  
Ulrich Schroeders

Meta-analyses of treatment effects in randomized control trials are often faced with the problem of missing information required to calculate effect sizes and their sampling variances. In particular, correlations between pre- and posttest scores are frequently not available. As an ad hoc solution, researchers impute a constant value for the missing correlation. As an alternative, we propose adopting a multivariate meta-regression approach that models independent group effect sizes and accounts for the dependency structure using robust variance estimation or three-level modeling. A comprehensive simulation study mimicking realistic conditions of meta-analyses in clinical and educational psychology suggested that the prevalent imputation approach works well for estimating the pooled effect but severely distorts the between-study heterogeneity. In contrast, the robust meta-regression approach resulted in largely unbiased fixed and random effects. Based on these results, recommendations for meta-analytic practice and future meta-analytic developments are provided.
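To make concrete how the imputed correlation enters the calculations, here is a minimal Python sketch (not the authors' code) of a standardized mean change and its sampling variance using a commonly cited Becker-type approximation; the function name, the default correlation of 0.7, and the example numbers are illustrative assumptions.

```python
def smc_effect_size(m_pre, m_post, sd_pre, n, r_imputed=0.7):
    """Standardized mean change for one group and its sampling variance.

    r_imputed is the (often unreported) pre-post correlation; imputing a
    constant such as 0.7 is the ad hoc practice described in the abstract.
    The variance formula is a commonly used Becker-type approximation.
    """
    d = (m_post - m_pre) / sd_pre                      # change in pretest SD units
    var_d = 2.0 * (1.0 - r_imputed) / n + d ** 2 / (2.0 * n)
    return d, var_d

# Illustrative numbers only
d, v = smc_effect_size(m_pre=10.0, m_post=12.5, sd_pre=5.0, n=40)
print(f"d = {d:.3f}, sampling variance = {v:.4f}")
```

Lowering the imputed correlation inflates the sampling variance, which is one route by which a constant imputation can distort heterogeneity estimates across studies.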

2012 ◽  
Vol 9 (5) ◽  
pp. 610-620 ◽  
Author(s):  
Thomas A Trikalinos ◽  
Ingram Olkin

Background: Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points.

Purpose: To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects.

Methods: We present both fixed and random effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, where in each trial survival is assessed at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article.

Results: We discuss the estimation of fixed and random effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although results of univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses where one compares treatment effects at later time points given observed data at earlier time points.

Limitations: Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions.

Conclusions: Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
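As an illustration of the joint pooling the abstract describes, the Python sketch below performs fixed-effect generalized-least-squares pooling of per-study effect vectors whose within-study covariance matrices encode the correlation between time points. It is not the authors' software: it assumes the covariance matrices have already been computed (for example, from the formulas in the article), and the numbers are invented.

```python
import numpy as np

def multivariate_fixed_effect(effects, covariances):
    """Fixed-effect multivariate pooling of per-study effect vectors.

    effects:     list of length-k arrays (one effect per time point per study)
    covariances: list of k x k within-study covariance matrices whose
                 off-diagonal elements reflect the correlation between
                 time points induced by shared patients
    Returns the pooled effect vector and its covariance matrix.
    """
    k = len(effects[0])
    weight_sum = np.zeros((k, k))
    weighted_effects = np.zeros(k)
    for y, S in zip(effects, covariances):
        W = np.linalg.inv(S)              # precision matrix of this study
        weight_sum += W
        weighted_effects += W @ y
    pooled_cov = np.linalg.inv(weight_sum)
    pooled = pooled_cov @ weighted_effects
    return pooled, pooled_cov

# Two fictitious studies, effects at two time points
effects = [np.array([0.30, 0.25]), np.array([0.40, 0.35])]
covs = [np.array([[0.040, 0.020], [0.020, 0.050]]),
        np.array([[0.030, 0.015], [0.015, 0.040]])]
mu, V = multivariate_fixed_effect(effects, covs)
print("pooled effects:", mu)
print("standard errors:", np.sqrt(np.diag(V)))
```

Setting the off-diagonal covariances to zero recovers separate univariate fixed-effect pooling at each time point, which is the comparison the abstract draws.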


2021 ◽  
Vol 103 (3) ◽  
pp. 43-47
Author(s):  
David Steiner

Education leaders know that they should use research when choosing interventions for their schools, but they don’t always know how to read the research that is available. David Steiner explains some of the reasons that reading research is a low priority for educators on the front lines and offers some guidance for determining whether research results are meaningful without an extensive background in statistics. Ideally, education decision makers should look for randomized control trials with high effect sizes and low p-values.
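For readers unfamiliar with the two quantities Steiner highlights, a small simulated example can make "effect size" and "p-value" concrete; the data below are fabricated purely for illustration and do not come from any study discussed in the article.

```python
import numpy as np
from scipy import stats

def cohens_d(treatment, control):
    """Cohen's d: standardized difference between two group means."""
    n1, n2 = len(treatment), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(treatment, ddof=1) +
                         (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(treatment) - np.mean(control)) / pooled_sd

rng = np.random.default_rng(0)
treatment = rng.normal(0.5, 1.0, size=60)   # simulated treatment-group scores
control = rng.normal(0.0, 1.0, size=60)     # simulated control-group scores

d = cohens_d(treatment, control)
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"effect size d = {d:.2f}, p-value = {p_value:.4f}")
```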


1993 ◽  
Vol 5 (3) ◽  
pp. 182-193 ◽  
Author(s):  
Jane C. Ballantyne ◽  
Daniel B. Carr ◽  
Thomas C. Chalmers ◽  
Keith B.G. Dear ◽  
Italo F. Angelillo ◽  
...  

2020 ◽  
pp. 027112142093557 ◽  
Author(s):  
Li Luo ◽  
Brian Reichow ◽  
Patricia Snyder ◽  
Jennifer Harrington ◽  
Joy Polignano

Background: All children benefit from intentional interactions and instruction to become socially and emotionally competent. Over the past 30 years, evidence-based intervention tactics and strategies have been integrated to establish comprehensive, multitiered, or hierarchical systems of support frameworks to guide social–emotional interventions for young children.

Objectives: To review systematically the efficacy of classroom-wide social–emotional interventions for improving the social, emotional, and behavioral outcomes of preschool children and to use meta-analytic techniques to identify critical study characteristics associated with obtained effect sizes.

Method: Four electronic databases (i.e., Academic Search Premier, Educational Resource Information Center, PsycINFO, and Education Full Text) were systematically searched in December 2015 and updated in January 2018. “Snowball methods” were used to locate additional relevant studies. Effect size estimates were pooled using random-effects meta-analyses for three child outcomes, and moderator analyses were conducted.

Results: Thirty-nine studies involving 10,646 child participants met the inclusion criteria and were included in this systematic review, with 33 studies included in the meta-analyses. Random-effects meta-analyses showed improvements in social competence (g = 0.42, 95% confidence interval [CI] = [0.28, 0.56]) and emotional competence (g = 0.33, 95% CI = [0.10, 0.56]), and decreases in challenging behavior (g = −0.31, 95% CI = [−0.43, −0.19]). For social competence and challenging behavior, moderator analyses suggested interventions with a family component had statistically significant and larger effect sizes than those without a family component. For challenging behavior, studies in which classroom teachers served as the intervention agent produced statistically significant but smaller effect sizes than studies in which researchers or others implemented the intervention.

Conclusion: This systematic review and meta-analysis support using comprehensive social–emotional interventions for all children in a preschool classroom to improve their social–emotional competence and reduce challenging behavior.
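The pooled g values and confidence intervals reported above come from random-effects models. A minimal Python sketch of one standard estimator (DerSimonian-Laird), with invented study data, is shown below; it is not the authors' analysis code.

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Random-effects pooling of effect sizes (DerSimonian-Laird).

    yi: per-study effect sizes (e.g., Hedges' g)
    vi: per-study sampling variances
    Returns the pooled effect, its 95% CI, and the tau^2 estimate.
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                                    # fixed-effect weights
    mu_fe = np.sum(w * yi) / np.sum(w)
    Q = np.sum(w * (yi - mu_fe) ** 2)               # Cochran's Q
    df = len(yi) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                   # between-study variance
    w_re = 1.0 / (vi + tau2)                        # random-effects weights
    mu_re = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, (mu_re - 1.96 * se, mu_re + 1.96 * se), tau2

g = [0.55, 0.20, 0.48, 0.31, 0.62]    # fictitious Hedges' g values
v = [0.02, 0.03, 0.015, 0.025, 0.04]  # fictitious sampling variances
pooled, ci, tau2 = dersimonian_laird(g, v)
print(f"g = {pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}], tau^2 = {tau2:.3f}")
```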


Author(s):  
Nancy P. Kropf ◽  
Sherry M. Cummings

Chapter 6, “Problem-Solving Therapy: Evidence-Based Practice,” details the research evidence concerning the effectiveness of problem-solving therapy (PST) for use with older adults. Only meta-analyses or randomized control trials (RCTs) were included in this review. One meta-analysis and fifteen randomized control trials were identified that investigated PST outcomes on older adult depression, health-related quality of life, and coping. These studies indicate that PST is effective in reducing anxiety and depression and in increasing problem-solving abilities in both community-based and in-home settings. Additionally, consistent support was found for the efficacy of telephone and video-phone PST, suggesting that these alternate means of administration may help overcome barriers to the receipt of mental health services experienced by homebound elders.


2020 ◽  
Vol 52 (6) ◽  
pp. 2657-2673
Author(s):  
Xinru Li ◽  
Elise Dusseldorp ◽  
Xiaogang Su ◽  
Jacqueline J. Meulman

In meta-analysis, heterogeneity often exists between studies. Knowledge about study features (i.e., moderators) that can explain the heterogeneity in effect sizes can be useful for researchers to assess the effectiveness of existing interventions and to design new, potentially effective interventions. When there are multiple moderators, they may amplify or attenuate each other's effect on treatment effectiveness. However, in most meta-analysis studies, interaction effects are neglected due to the lack of appropriate methods. The method meta-CART was recently proposed to identify interactions between multiple moderators. The analysis result is a tree model in which the studies are partitioned into more homogeneous subgroups by combinations of moderators. This paper describes the R package metacart, which provides user-friendly functions to conduct meta-CART analyses in R. The package can fit both fixed- and random-effects meta-CART and can handle dichotomous, categorical, ordinal, and continuous moderators. In addition, a new look-ahead procedure is presented. The application of the package is illustrated step by step using diverse examples.
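metacart itself is an R package; as a language-agnostic illustration of the partitioning idea behind meta-CART, the Python sketch below searches for the single moderator cut point that maximizes between-subgroup heterogeneity. This is a toy version only: the actual algorithm grows and prunes full trees over multiple moderators, and all names and numbers here are hypothetical.

```python
import numpy as np

def best_single_split(yi, vi, moderator):
    """Find the cut point on one continuous moderator that maximizes the
    between-subgroup heterogeneity statistic (Q-between) under fixed-effect
    weights. Only the core split search of a CART-style partition is shown.
    """
    yi, vi, moderator = (np.asarray(a, float) for a in (yi, vi, moderator))
    w = 1.0 / vi
    mu_all = np.sum(w * yi) / np.sum(w)            # overall pooled effect
    best_cut, best_q = None, -np.inf
    for cut in np.unique(moderator)[:-1]:
        q_between = 0.0
        for mask in (moderator <= cut, moderator > cut):
            mu_g = np.sum(w[mask] * yi[mask]) / np.sum(w[mask])
            q_between += np.sum(w[mask]) * (mu_g - mu_all) ** 2
        if q_between > best_q:
            best_cut, best_q = cut, q_between
    return best_cut, best_q

# Fictitious studies: effect sizes, variances, one continuous moderator (dose)
yi = [0.10, 0.20, 0.15, 0.60, 0.70, 0.55]
vi = [0.02, 0.03, 0.02, 0.03, 0.02, 0.04]
dose = [10, 12, 15, 30, 35, 40]
print(best_single_split(yi, vi, dose))   # splits the low-dose from the high-dose studies
```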


Author(s):  
Liana R. Taylor ◽  
Avinash Bhati ◽  
Faye S. Taxman

The Washington State Institute for Public Policy (WSIPP) uses meta-analyses to help program administrators identify effective programs that reduce recidivism. The results are displayed as summary effect sizes. Yet many programs are grouped within a single category (such as Intensive Supervision or Correctional Education), even though their features suggest they may be very different. The following research question was examined: Which program features are related to the effect size within a WSIPP program category? Researchers at ACE! at George Mason University reviewed the studies analyzed by WSIPP and their effect sizes. The global meta-regression models showed that recidivism decreased with certain program features, while other program features actually increased it. A multivariate meta-regression showed substantial variation across Cognitive-Behavioral Therapy programs. These preliminary findings suggest the need for further research on how differing program features contribute to client-level outcomes and for a scheme that better classifies programs.
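As a rough illustration of the kind of meta-regression described here, the following Python sketch regresses effect sizes on a binary program feature using inverse-variance weights. The feature, the data, and the function are hypothetical; this is not the WSIPP or ACE! analysis.

```python
import numpy as np

def meta_regression(yi, vi, X):
    """Weighted least-squares meta-regression of effect sizes on study
    features (fixed-effect weights 1/vi). X is a design matrix with an
    intercept column; returns coefficient estimates and standard errors.
    """
    yi, vi, X = np.asarray(yi, float), np.asarray(vi, float), np.asarray(X, float)
    W = np.diag(1.0 / vi)
    cov_beta = np.linalg.inv(X.T @ W @ X)          # covariance of the coefficients
    beta = cov_beta @ X.T @ W @ yi
    se = np.sqrt(np.diag(cov_beta))
    return beta, se

# Fictitious example: recidivism effect sizes regressed on one program feature
yi = [-0.10, -0.05, -0.20, -0.02, -0.15]
vi = [0.010, 0.020, 0.015, 0.020, 0.010]
has_cbt_component = [1, 0, 1, 0, 1]               # hypothetical moderator
X = np.column_stack([np.ones(5), has_cbt_component])
beta, se = meta_regression(yi, vi, X)
print("intercept, slope:", beta)
print("standard errors:", se)
```

The slope coefficient estimates how much the pooled effect shifts when the program feature is present, which is the question the research team posed about features within a WSIPP category.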

