A Statistical Method for Synthesizing Meta-Analyses

2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, but they may yield different or sometimes discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results from multiple meta-analyses, especially when their results are conflicting. In this paper, we first introduce a method to synthesize the meta-analytic results when multiple meta-analyses use the same type of summary effect estimates. When meta-analyses use different types of effect sizes, the meta-analysis results cannot be directly combined. We propose a two-step frequentist procedure to first convert the effect size estimates to the same metric and then summarize them with a weighted mean estimate. Our proposed method offers several advantages over existing methods by Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all individual studies from multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
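The two-step idea (convert to a common metric, then pool) can be sketched as follows. This is a minimal illustration with made-up numbers, using the standard Hasselblad–Hedges conversion from a log odds ratio to Cohen's d and inverse-variance weighting; it is not the authors' actual procedure.

```python
import math

def log_or_to_d(log_or, var_log_or):
    """Convert a log odds ratio (and its variance) to Cohen's d
    using the Hasselblad-Hedges logistic approximation."""
    d = log_or * math.sqrt(3) / math.pi
    var_d = var_log_or * 3 / math.pi ** 2
    return d, var_d

def weighted_mean(effects, variances):
    """Inverse-variance weighted mean effect size and its variance."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return mean, 1.0 / sum(weights)

# Hypothetical inputs: meta-analysis A reports Cohen's d directly,
# meta-analysis B reports a log odds ratio; convert B, then pool.
d_a, v_a = 0.40, 0.01
d_b, v_b = log_or_to_d(0.80, 0.04)
overall, overall_var = weighted_mean([d_a, d_b], [v_a, v_b])
```

Because the weights are inverse variances of the individual summary estimates, the pooled variance is smaller than either input variance.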

2020 ◽  
pp. 106907272098503
Author(s):  
Francis Milot-Lapointe ◽  
Yann Le Corff ◽  
Nicole Arifoulline

This article reports the results of the first meta-analysis of the association between working alliance and outcomes of individual career counseling. This random-effects meta-analysis included 18 published and unpublished studies that produced a weighted mean effect size of r = .42. This effect size was heterogeneous across studies. Separate meta-analyses were conducted for several types of outcomes: career outcomes, mental health outcomes, and client-perceived quality of the intervention. Average effect sizes for the association between working alliance and these outcome types were .28, .18, and .62, respectively. Moderator analyses indicated that the overall mean effect size (r = .42) varied substantially as a function of the type of outcome and the time at which working alliance was assessed (first session, mid-counseling, or at termination of the counseling service). Our results confirm that working alliance is associated with career counseling effectiveness and suggest that career counselors should emphasize the working alliance during the career counseling process. In conclusion, this article provides suggestions for practice in individual career counseling and avenues of research on working alliance in this context.


Author(s):  
David L. Streiner

Meta-analysis is a technique for combining the results of many studies in a rigorous and systematic manner, allowing us to better assess prevalence rates for different types of gambling and to determine which interventions have the best evidence regarding their effectiveness and efficacy. Meta-analysis consists of (a) a comprehensive search for all available evidence; (b) the application of explicit criteria for determining which articles to include; (c) determination of an effect size for each study; and (d) the pooling of effect sizes across studies to arrive at a global estimate of the prevalence or the effectiveness of a treatment. This paper begins with a discussion of why meta-analyses are useful, followed by a 12-step program for conducting a meta-analysis. This program can be used both by people planning to conduct such an analysis and by readers of a meta-analysis to evaluate how well it was carried out.
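Step (d), pooling effect sizes across studies, can be sketched with a simple fixed-effect (inverse-variance) model. The study values below are hypothetical, chosen only to show the mechanics:

```python
import math

def fixed_effect_pool(effects, variances):
    """Pool per-study effect sizes with inverse-variance weights
    and return the pooled estimate with a 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical effect sizes (Cohen's d) and variances from four studies
pooled, (lo, hi) = fixed_effect_pool([0.30, 0.45, 0.25, 0.50],
                                     [0.02, 0.03, 0.015, 0.04])
```

More precise studies (smaller variances) pull the pooled estimate toward their own values, which is why a simple unweighted average of the four effect sizes would differ from the pooled result.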


1990 ◽  
Vol 24 (3) ◽  
pp. 405-415 ◽  
Author(s):  
Nathaniel McConaghy

Meta-analysis replaced statistical significance with effect size in the hope of resolving controversy concerning the evaluation of treatment effects. Statistical significance measured the reliability of the effect of treatment, not its efficacy, and was strongly influenced by the number of subjects investigated. Effect size, as originally assessed, eliminated this influence, but by standardizing the size of the treatment effect it could distort it. Meta-analyses that combine the results of studies employing different subject types, outcome measures, treatment aims, no-treatment rather than placebo controls, or therapists with varying experience can be misleading. To ensure discussion of these variables, meta-analyses should be used as an aid to, rather than a substitute for, literature review. Because meta-analyses produce contradictory findings, it seems unwise to rely on the conclusions of an individual analysis. Their consistent finding that placebo treatments obtain markedly higher effect sizes than no treatment will, it is hoped, render the use of untreated control groups obsolete.


2019 ◽  
Vol 111 (10) ◽  
pp. 1009-1015
Author(s):  
Todd S Horowitz ◽  
Melissa Treviño ◽  
Ingrid M Gooch ◽  
Korrina A Duffy

A large body of evidence indicates that cancer survivors who have undergone chemotherapy have cognitive impairments. Substantial disagreement exists regarding which cognitive domains are impaired in this population. We suggest that this is in part due to inconsistency in how neuropsychological tests are assigned to cognitive domains. The purpose of this paper is to critically analyze the meta-analytic literature on cancer-related cognitive impairments (CRCI) to quantify this inconsistency. We identified all neuropsychological tests reported in seven meta-analyses of the CRCI literature. Although effect sizes were generally negative (indicating impairment), every domain was declared to be impaired in at least one meta-analysis and unimpaired in at least one other meta-analysis. We plotted summary effect sizes from all the meta-analyses and quantified disagreement by computing the observed and ideal distributions of the one-way χ2 statistic. The actual χ2 distributions were noticeably more peaked and shifted to the left than the ideal distributions, indicating substantial disagreement among the meta-analyses in how neuropsychological tests were assigned to domains. A better understanding of the profile of impairments in CRCI is essential for developing effective remediation methods. To accomplish this goal, the research field needs to promote better agreement on how to measure specific cognitive functions.


2020 ◽  
Vol 6 (2) ◽  
pp. 112-127
Author(s):  
Laurențiu Maricuțoiu

The present paper discusses the fundamental principles of meta-analysis as a statistical method for summarizing the results of correlational studies. We address fundamental issues such as the purpose of meta-analysis and the problems associated with study artefacts. The paper also provides recommendations for selecting studies for meta-analysis, identifying the relevant information within those studies, and computing mean effect sizes, confidence intervals, and heterogeneity indexes of the mean effect size. Finally, we present guidance for reporting meta-analysis results.
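The mean effect size, confidence interval, and heterogeneity indexes mentioned above can all be computed with a standard DerSimonian–Laird random-effects model. The sketch below uses hypothetical effect sizes already expressed in a common metric; it illustrates the general recipe rather than this paper's specific recommendations:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects mean with 95% CI, plus Cochran's Q and the
    I^2 heterogeneity index (DerSimonian-Laird estimator of tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Re-weight with between-study variance added to each study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    mean = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return mean, (mean - 1.96 * se, mean + 1.96 * se), q, i2

mean, ci, q, i2 = dersimonian_laird([0.20, 0.50, 0.30], [0.01, 0.02, 0.015])
```

When Q does not exceed its degrees of freedom, tau² is truncated to zero and the model collapses to the fixed-effect estimate.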


2012 ◽  
Vol 82 (3) ◽  
pp. 300-329 ◽  
Author(s):  
Erin Marie Furtak ◽  
Tina Seidel ◽  
Heidi Iverson ◽  
Derek C. Briggs

Although previous meta-analyses have indicated a connection between inquiry-based teaching and improved student learning, the type of instruction characterized as inquiry based has varied greatly, and few have focused on the extent to which activities are led by the teacher or student. This meta-analysis introduces a framework for inquiry-based teaching that distinguishes between cognitive features of the activity and degree of guidance given to students. This framework is used to code 37 experimental and quasi-experimental studies published between 1996 and 2006, a decade during which inquiry was the main focus of science education reform. The overall mean effect size is .50. Studies that contrasted epistemic activities or the combination of procedural, epistemic, and social activities had the highest mean effect sizes. Furthermore, studies involving teacher-led activities had mean effect sizes about .40 larger than those with student-led conditions. The importance of establishing the validity of the treatment construct in meta-analyses is also discussed.


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Yoo Jung Park ◽  
Sun Wook Park ◽  
Han Suk Lee

Objectives. The goals of this study were to assess the effectiveness of whole body vibration (WBV) training through an analysis of effect sizes, identify advantages of WBV training, and suggest other effective treatment methods. Methods. Four databases, namely, EMBASE, PubMed, EBSCO, and Web of Science, were used to collect articles on vibration. Keywords such as “vibration” and “stroke” were used in the search for published articles. Consequently, eleven studies were selected in the second screening using meta-analyses. Results. The total effect size for patients with stroke across the studies was 0.25, which was small. The effect size for spasticity was the greatest at 1.24 (high), followed by metabolism at 0.99 (high), and then balance, muscle strength, gait, and circulation in decreasing order of effect size. Conclusions. The effect sizes for muscle strength, balance, and gait function, all of which play an important role in the performance of daily activities, were small. In contrast, the effect sizes for bone metabolism and spasticity were moderate. This suggests that WBV training may provide a safe alternative treatment method for improving the symptoms of stroke in patients.


1994 ◽  
Vol 5 (6) ◽  
pp. 329-334 ◽  
Author(s):  
Robert Rosenthal ◽  
Donald B. Rubin

We introduce a new, readily computed statistic, the counternull value of an obtained effect size, which is the nonnull magnitude of effect size that is supported by exactly the same amount of evidence as supports the null value of the effect size. In other words, if the counternull value were taken as the null hypothesis, the resulting p value would be the same as the obtained p value for the actual null hypothesis. Reporting the counternull, in addition to the p value, virtually eliminates two common errors: (a) equating failure to reject the null with the estimation of the effect size as equal to zero and (b) taking the rejection of a null hypothesis on the basis of a significant p value to imply a scientifically important finding. In many common situations with a one-degree-of-freedom effect size, the value of the counternull is simply twice the magnitude of the obtained effect size, but the counternull is defined in general, even with multi-degree-of-freedom effect sizes, and therefore can be applied when a confidence interval cannot be. The use of the counternull can be especially useful in meta-analyses when evaluating the scientific importance of summary effect sizes.
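For the symmetric one-degree-of-freedom case described above, the counternull is trivial to compute: it is the obtained estimate reflected about itself away from the null. A minimal sketch, with a hypothetical obtained effect size:

```python
def counternull(effect_size, null_value=0.0):
    """Counternull of an obtained effect size under a symmetric (e.g., normal)
    sampling distribution: the nonnull value supported by exactly as much
    evidence as the null value. For a null of zero this is twice the estimate."""
    return 2 * effect_size - null_value

# A nonsignificant obtained d of 0.30 does not license "the effect is zero":
# the counternull 0.60 is as consistent with the data as the null value 0.0.
cn = counternull(0.30)
```

This makes the first error described above concrete: an obtained d of 0.30 that fails to reach significance supports d = 0.60 exactly as strongly as it supports d = 0.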


2020 ◽  
Author(s):  
Molly Lewis ◽  
Maya B Mathur ◽  
Tyler VanderWeele ◽  
Michael C. Frank

What is the best way to estimate the size of important effects? Should we aggregate across disparate findings using statistical meta-analysis, or instead run large, multi-lab replications (MLR)? A recent paper by Kvarven, Strømland, and Johannesson (2020) compared effect size estimates derived from these two different methods for 15 different psychological phenomena. The authors report that, for the same phenomenon, the meta-analytic estimate tends to be about three times larger than the MLR estimate. These results pose an important puzzle: What is the relationship between these two estimates? Kvarven et al. suggest that their results undermine the value of meta-analysis. In contrast, we argue that both meta-analysis and MLR are informative, and that the discrepancy between estimates obtained via the two methods is in fact still unexplained. Informed by re-analyses of Kvarven et al.’s data and by other empirical evidence, we discuss possible sources of this discrepancy and argue that understanding the relationship between estimates obtained from these two methods is an important puzzle for future meta-scientific research.


1998 ◽  
Vol 172 (3) ◽  
pp. 227-231 ◽  
Author(s):  
Joanna Moncrieff ◽  
Simon Wessely ◽  
Rebecca Hardy

Background: Unblinding effects may introduce bias into clinical trials. The use of active placebos to mimic side-effects of medication may therefore produce more rigorous evidence on the efficacy of antidepressants. Method: Trials comparing antidepressants with active placebos were located. A standard measure of effect was calculated for each trial and weighted pooled estimates obtained. Heterogeneity was examined and sensitivity analyses performed. A subgroup analysis of in-patient and out-patient trials was conducted. Results: Only two of the nine studies examined produced effect sizes which showed a consistent significant difference in favour of the active drug. Combining all studies produced pooled effect size estimates of between 0.41 (0.27–0.56) and 0.46 (0.31–0.60), with high heterogeneity due to one strongly positive trial. Sensitivity analyses excluding this and one other trial reduced the pooled effect to between 0.21 (0.03–0.38) and 0.27 (0.10–0.45). Conclusions: Meta-analysis is very sensitive to decisions about exclusions. Previous general meta-analyses have found combined effect sizes in the range 0.4–0.8. The more conservative estimates produced here suggest that unblinding effects may inflate the efficacy of antidepressants in trials using inert placebos.

