A CALL FOR CAUTIOUS INTERPRETATION OF META-ANALYTIC REVIEWS

Author(s): Frank Boers, Lara Bryfonski, Farahnaz Faez, Todd McKay

Meta-analytic reviews collect the available empirical studies in a specified domain and calculate the average effect of a factor. Educators, as well as researchers exploring a new domain of inquiry, may rely on the conclusions of meta-analytic reviews rather than reading multiple primary studies. This article calls for caution in this regard, because the outcome of a meta-analysis is determined by how effect sizes are calculated, how factors are defined, and how studies are selected for inclusion. Three recently published meta-analyses are reexamined to illustrate these issues. The first illustrates the risk of conflating effect sizes from studies with different design features; the second illustrates problems with delineating the variable of interest, with implications for cause-effect relations; and the third illustrates the challenge of determining the eligibility of candidate studies. Replication attempts yield outcomes that differ from those of the three original meta-analyses, further suggesting that conclusions drawn from meta-analyses need to be interpreted cautiously.

2015, Vol 24 (2), pp. 237-255
Author(s): Patricia L. Cleave, Stephanie D. Becker, Maura K. Curran, Amanda J. Owen Van Horne, Marc E. Fey

Purpose: This systematic review and meta-analysis critically evaluated the research evidence on the effectiveness of conversational recasts for grammatical development in children with language impairments. Method: Two different but complementary reviews were conducted and then integrated. Systematic searches of the literature yielded 35 articles for the systematic review. The included studies employed a wide variety of designs, but all examined interventions in which recasts were the key component. The meta-analysis included only studies that allowed the calculation of effect sizes, but it did include package interventions in which recasts were a major part. Fourteen studies were included, 7 of which were also in the systematic review. Studies were grouped according to research phase and were rated for quality. Results: Study quality, and thus strength of evidence, varied substantially. Nevertheless, across all phases, the vast majority of studies provided support for the use of recasts. Meta-analyses found average effect sizes of .96 for proximal measures and .76 for distal measures, reflecting a positive benefit of about 0.75 to 1.00 standard deviation. Conclusion: The available evidence is limited but supportive of the use of recasts in grammatical intervention. Critical features of recasts in grammatical interventions are discussed.


2021, pp. 109634802110669
Author(s): Pattamol Kanjanakan, Dan Zhu, Tin Doan, Peter B. Kim

Although a number of empirical studies on work engagement have been conducted in the context of hospitality and tourism, few efforts have been made to consolidate their findings. This article therefore examines the current state of work engagement research and meta-analyzes the relationships between work engagement and its antecedents and outcomes in the hospitality and tourism context. Through a systematic review, 134 empirical studies (N = 43,043) published from 2008 to September 2020 were identified. The findings cover trends within work engagement studies as well as the effect sizes and variabilities of the associated relationships, so the study contributes to the hospitality and tourism literature by providing a useful reference for future researchers. The findings are discussed in light of their theoretical and practical implications.
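The relationships pooled in such a study are correlations, which are usually meta-analyzed on Fisher's z scale and then back-transformed. As an illustrative sketch only (not the authors' actual analysis), the workflow with the R package metafor and made-up correlations might look like this:

    # Hypothetical correlations between work engagement and an outcome
    # (e.g., job satisfaction); ri = correlation, ni = sample size.
    library(metafor)
    dat <- data.frame(
      study = paste0("S", 1:5),
      ri    = c(0.42, 0.35, 0.51, 0.28, 0.46),
      ni    = c(210, 480, 150, 320, 260)
    )
    dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)  # Fisher's z + variances
    res <- rma(yi, vi, data = dat, method = "REML")                # random-effects model
    predict(res, transf = transf.ztor)                             # pooled r with its CI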


2018
Author(s): Michele B. Nuijten, Marcel A. L. M. van Assen, Hilde Augusteijn, Elise Anne Victoire Crompvoets, Jelte M. Wicherts

In this meta-study, we analyzed 2,442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of .26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We conclude that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
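As a rough illustration of what these power figures mean, one can compute the power of a two-sided correlation test at the reported median sample size of 60 for Cohen's small, medium, and large benchmarks (r = .10, .30, .50). The sketch below uses the R package pwr; it will not reproduce the paper's medians exactly, because those are medians taken across primary studies with varying sample sizes.

    library(pwr)
    # Power of a two-sided test of a Pearson correlation at n = 60
    sapply(c(small = 0.10, medium = 0.30, large = 0.50),
           function(r) pwr.r.test(n = 60, r = r, sig.level = 0.05)$power)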


2020, Vol 8 (4), pp. 36
Author(s): Michèle B. Nuijten, Marcel A. L. M. van Assen, Hilde E. M. Augusteijn, Elise A. V. Crompvoets, Jelte M. Wicherts

In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.


2016, Vol 42 (2), pp. 206-242
Author(s): Joshua R. Polanin, Emily A. Hennessy, Emily E. Tanner-Smith

Meta-analysis is a statistical technique that allows an analyst to synthesize effect sizes from multiple primary studies. To estimate meta-analysis models, the open-source statistical environment R is quickly becoming a popular choice. The meta-analytic community has contributed to this growth by developing numerous packages specific to meta-analysis. The purpose of this study is to locate all publicly available meta-analytic R packages. We located 63 packages via a comprehensive online search. To help elucidate these packages' functionalities for the field, we describe each package, recommend applications for researchers interested in using R for meta-analysis, provide a brief tutorial of two meta-analysis packages, and make suggestions for future meta-analytic R package creators.
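The article itself surveys the 63 packages and provides the tutorials; purely as a generic illustration of the kind of workflow such packages support, here is a minimal sketch with metafor (one widely used meta-analysis package), using invented summary data rather than any dataset from the article:

    library(metafor)
    # Hypothetical two-group summary data: means, SDs, and sample sizes
    dat <- data.frame(
      study = paste0("Study ", 1:4),
      m1i = c(5.2, 4.8, 6.1, 5.5), sd1i = c(1.1, 1.3, 1.0, 1.2), n1i = c(40, 55, 32, 60),
      m2i = c(4.6, 4.5, 5.2, 5.1), sd2i = c(1.2, 1.2, 1.1, 1.3), n2i = c(38, 50, 30, 62)
    )
    # Standardized mean differences and their sampling variances
    dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
                  m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
    res <- rma(yi, vi, data = dat)   # random-effects model
    forest(res, slab = dat$study)    # classic forest plot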


2019
Author(s): Shinichi Nakagawa, Malgorzata Lagisz, Rose E O'Dea, Joanna Rutkowska, Yefeng Yang, ...

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution and found that only 11% used the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data with forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes, regardless of the field of study.
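The distinguishing element of the orchard plot is the 95% prediction interval. For a fitted random-effects model, the PI widens the confidence interval by the between-study variance, and it can be obtained directly with metafor; the lines below are a sketch of that underlying computation only (not the orchard package itself), assuming res is a fitted rma object such as the one in the metafor sketch above:

    # The 95% PI is roughly: pooled estimate +/- 1.96 * sqrt(tau^2 + SE^2),
    # i.e., the range in which a single new study's effect is expected to fall.
    predict(res, level = 95)  # pi.lb / pi.ub give the prediction interval
                              # (labelled cr.lb / cr.ub in older metafor versions)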


2019
Author(s): Amanda Kvarven, Eirik Strømland, Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting meta-analytic effect sizes for publication bias. We use the Andrews-Kasy estimator to adjust the results of 15 meta-analyses and compare the adjusted results to 15 large-scale, multiple-labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes that do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2021, Vol 5 (1), pp. e100135
Author(s): Xue Ying Zhang, Jan Vollert, Emily S Sena, Andrew SC Rice, Nadia Soliman

Objective: Thigmotaxis is an innate predator-avoidance behaviour of rodents that is enhanced when animals are under stress. It is characterised by the preference of a rodent to seek shelter rather than expose itself to the aversive open area. The behaviour has been proposed as a measurable construct that can address the impact of pain on rodent behaviour. This systematic review will assess whether thigmotaxis can be influenced by experimental persistent pain and attenuated by pharmacological interventions in rodents. Search strategy: We will conduct searches of three electronic databases to identify studies in which thigmotaxis was used as an outcome measure in a rodent model associated with persistent pain. All studies published until the date of the search will be considered. Screening and annotation: Two independent reviewers will screen studies based on (1) titles and abstracts, and then (2) full texts. Data management and reporting: For the meta-analysis, we will extract thigmotactic behavioural data and calculate effect sizes. Effect sizes will be combined using a random-effects model. We will assess heterogeneity and identify its sources. A risk-of-bias assessment will be conducted to evaluate study quality, and publication bias will be assessed using funnel plots, Egger's regression, and trim-and-fill analysis. We will also extract stimulus-evoked limb-withdrawal data to assess its correlation with thigmotaxis in the same animals. The evidence obtained will provide a comprehensive understanding of the strengths and limitations of thigmotaxis as an outcome measure in animal pain research, so that future experimental designs can be optimised. We will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines and disseminate the review findings through publication and conference presentation.
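A hedged sketch of how the planned quantitative steps (random-effects pooling, heterogeneity assessment, and the publication-bias checks named above) might look in R with metafor; yi and vi stand for the extracted thigmotaxis effect sizes and their sampling variances, which are not reproduced here:

    library(metafor)
    # dat is assumed to hold one extracted effect size per comparison:
    # yi = effect size, vi = sampling variance (data extraction not shown)
    res <- rma(yi, vi, data = dat, method = "REML")
    summary(res)    # pooled effect; I^2 and tau^2 quantify heterogeneity
    funnel(res)     # funnel plot for visual inspection of asymmetry
    regtest(res)    # Egger-type regression test for funnel-plot asymmetry
    trimfill(res)   # trim-and-fill adjusted estimate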


2016, Vol 26 (4), pp. 364-368
Author(s): P. Cuijpers, E. Weitz, I. A. Cristea, J. Twisk

Aims: The standardised mean difference (SMD) is one of the most frequently used effect sizes for indicating the effects of treatments. It expresses the difference between a treatment and a comparison group after treatment has ended, in units of standard deviations. Some meta-analyses, including several highly cited and influential ones, instead use the pre-post SMD, which expresses the difference between baseline and post-test within one (treatment) group. Methods: In this paper, we argue that these pre-post SMDs should be avoided in meta-analyses, and we describe why pre-post SMDs can result in biased outcomes. Results: One important reason why pre-post SMDs should be avoided is that the scores at baseline and post-test are not independent of each other. The correlation between them should be used in the calculation of the SMD, but this value is typically not known. We used data from an ‘individual patient data’ meta-analysis of trials comparing cognitive behaviour therapy and antidepressant medication to show that this problem can lead to considerable errors in the estimation of the SMDs. An even more important reason why pre-post SMDs should be avoided in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these cannot be separated from the effects of the intervention. Between-group SMDs are much better because they control for such variables; these variables only affect the between-group SMD when they are related to the effects of the intervention. Conclusions: We conclude that pre-post SMDs should be avoided in meta-analyses, as using them probably results in biased outcomes.
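The dependence on the unknown pre-post correlation can be made concrete: when the change-score standard deviation is used to standardise, that SD is a function of the correlation r, so the same raw means and SDs yield different pre-post SMDs under different assumed correlations. A small illustration with hypothetical numbers (not the patient data used in the paper):

    # Pre-post SMD standardised by the SD of the change scores:
    #   SD_change = sqrt(sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post)
    prepost_smd <- function(m_pre, m_post, sd_pre, sd_post, r) {
      sd_change <- sqrt(sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post)
      (m_post - m_pre) / sd_change
    }
    # Same hypothetical trial arm, three assumed correlations
    for (r in c(0.3, 0.5, 0.7)) {
      cat("r =", r, "-> pre-post SMD =",
          round(prepost_smd(24, 18, 8, 9, r), 2), "\n")
    }
    # The assumed correlation alone moves the SMD from about -0.60 to -0.90.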


2012, Vol 9 (5), pp. 610-620
Author(s): Thomas A Trikalinos, Ingram Olkin

Background: Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but they are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points. Purpose: To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects. Methods: We present both fixed- and random-effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, in which each trial assessed survival at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article. Results: We discuss the estimation of fixed- and random-effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although the results of the univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses, in which one compares treatment effects at later time points given observed data at earlier time points. Limitations: Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions. Conclusions: Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
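A sketch of the general fitting strategy in R with metafor's rma.mv(), assuming the effect sizes at each time point have already been computed and, for illustration only, that a common within-study correlation of 0.6 links effect sizes from the same trial (the article instead derives these covariances from the data reported in the primary studies):

    library(metafor)
    # dat: one row per study x time point, with yi (e.g., a log odds ratio),
    # vi (its sampling variance), a study identifier, and time (6/12/18/24 months)
    dat <- dat[order(dat$study, dat$time), ]
    dat$time <- factor(dat$time)

    # Block-diagonal within-study covariance matrix under the assumed correlation
    rho <- 0.6
    V_blocks <- lapply(split(dat$vi, dat$study), function(v) {
      S <- rho * sqrt(v) %o% sqrt(v)  # off-diagonal covariances
      diag(S) <- v                    # sampling variances on the diagonal
      S
    })
    V <- bldiag(V_blocks)

    # Multivariate random-effects model: one pooled effect per time point,
    # unstructured random-effects covariance across time points
    res <- rma.mv(yi, V, mods = ~ time - 1,
                  random = ~ time | study, struct = "UN", data = dat)
    summary(res)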

