The Galaxy Plot: A New Visualization Tool for Bivariate Meta-Analysis Studies

2020 ◽  
Vol 189 (8) ◽  
pp. 861-869 ◽  
Author(s):  
Chuan Hong ◽  
Rui Duan ◽  
Lingzhen Zeng ◽  
Rebecca A Hubbard ◽  
Thomas Lumley ◽  
...  

Abstract Funnel plots have been widely used to detect small-study effects in the results of univariate meta-analyses. However, there is no existing visualization tool that is the counterpart of the funnel plot in the multivariate setting. We propose a new visualization method, the galaxy plot, which can simultaneously present the effect sizes of bivariate outcomes and their standard errors in a 2-dimensional space. We illustrate the use of the galaxy plot with 2 case studies, including a meta-analysis of hypertension trials with studies from 1979–1991 (Hypertension. 2005;45(5):907–913) and a meta-analysis of structured telephone support or noninvasive telemonitoring with studies from 1966–2015 (Heart. 2017;103(4):255–257). The galaxy plot is an intuitive visualization tool that can aid in interpreting results of multivariate meta-analysis. It preserves all of the information presented by separate funnel plots for each outcome while elucidating more complex features that may only be revealed by examining the joint distribution of the bivariate outcomes.
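As a rough illustration of the idea, the sketch below draws a galaxy-style display with matplotlib: each study becomes a cross centred at its pair of effect sizes, with arm lengths inversely proportional to the two standard errors, so more precise studies appear larger. The data and the scaling constant are invented for illustration and are not the authors' implementation.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Hypothetical bivariate meta-analytic data: effect sizes (y1, y2) and SEs.
y1 = np.array([0.30, 0.10, 0.55, 0.20])
se1 = np.array([0.10, 0.25, 0.08, 0.15])
y2 = np.array([0.45, 0.05, 0.60, 0.25])
se2 = np.array([0.12, 0.30, 0.09, 0.18])

def galaxy_glyph_widths(se, scale=0.05):
    """Cross arm half-lengths inversely proportional to SE:
    precise studies get long arms (scale constant is arbitrary)."""
    return scale / se

fig, ax = plt.subplots()
wx, wy = galaxy_glyph_widths(se1), galaxy_glyph_widths(se2)
for x, y, hx, hy in zip(y1, y2, wx, wy):
    ax.plot([x - hx, x + hx], [y, y], color="tab:blue")  # horizontal arm ~ 1/se1
    ax.plot([x, x], [y - hy, y + hy], color="tab:blue")  # vertical arm ~ 1/se2
ax.set_xlabel("Effect size, outcome 1")
ax.set_ylabel("Effect size, outcome 2")
fig.savefig("galaxy.png")
```

Unlike two side-by-side funnel plots, a display of this kind preserves which outcome-1 effect pairs with which outcome-2 effect within a study.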

Methodology ◽  
2020 ◽  
Vol 16 (4) ◽  
pp. 299-315
Author(s):  
Belén Fernández-Castilla ◽  
Lies Declercq ◽  
Laleh Jamshidi ◽  
Susan Natasha Beretvas ◽  
Patrick Onghena ◽  
...  

Meta-analytic datasets can be large, especially when primary studies report multiple effect sizes. Visualization of meta-analytic data is therefore useful for summarizing the data and understanding the information reported in primary studies. The gold-standard figures in meta-analysis are forest and funnel plots. However, neither of these plots can yet account for multiple effect sizes within primary studies. This manuscript describes extensions of the funnel plot, forest plot, and caterpillar plot that adapt them to three-level meta-analyses. For forest plots, we propose plotting the study-specific effects and their precision, and adding confidence intervals that reflect the sampling variance of individual effect sizes. For caterpillar plots and funnel plots, we recommend plotting individual effect sizes and averaged study effect sizes in two separate graphs. For the funnel plot, plotting separate graphs might improve the detection of publication bias and of selective outcome reporting bias.
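The separate-graphs recommendation can be sketched numerically. Below is a minimal example, with invented data, that computes the two datasets one would plot: individual effect sizes against their SEs, and study-averaged effects against their SEs. The inverse-variance weighting within studies is an assumption for illustration, not necessarily the authors' exact aggregation rule.

```python
import numpy as np

# Hypothetical three-level data: several effect sizes per study.
study = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3])
yi = np.array([0.2, 0.3, 0.1, 0.5, 0.4, 0.0, 0.1, -0.1, 0.2])
vi = np.array([0.04, 0.05, 0.03, 0.06, 0.04, 0.02, 0.03, 0.05, 0.04])

def study_averages(study, yi, vi):
    """Inverse-variance weighted mean and SE per study
    (fixed-effect pooling within each study)."""
    ids = np.unique(study)
    means, ses = [], []
    for s in ids:
        m = study == s
        w = 1.0 / vi[m]
        means.append(np.sum(w * yi[m]) / np.sum(w))
        ses.append(np.sqrt(1.0 / np.sum(w)))
    return ids, np.array(means), np.array(ses)

ids, means, ses = study_averages(study, yi, vi)

# Funnel 1 plots (yi, sqrt(vi)); funnel 2 plots (means, ses).
print("individual:", list(zip(yi.round(2), np.sqrt(vi).round(3))))
print("study-averaged:", list(zip(means.round(3), ses.round(3))))
```

Plotting the two sets in separate funnels keeps the dependent individual estimates from visually dominating the study-level pattern.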


2020 ◽  
Vol 73 (8) ◽  
pp. 1290-1299 ◽  
Author(s):  
Kenneth R Paap ◽  
Lauren Mason ◽  
Brandon Zimiga ◽  
Yocelyne Ayala-Silva ◽  
Matthew Frost

Five recent meta-analyses of the bilingual advantage in executive functioning hypothesis have converged on the outcome that the mean effect size is very small and that the incidence of statistically significant bilingual advantages is very low (about 15% of all comparisons). Those analyses that used the PET-PEESE method to correct for publication bias show mean effect sizes that are not different from zero and sometimes negative. In contrast, van den Noort and colleagues provide a sixth review, of 46 studies published before October 31, 2018, that appears to produce a very different outcome, namely that more than half the studies yield clear support for the bilingual advantage hypothesis. We show that the discrepancy is due partly to search terms that yielded far fewer relevant studies, but more importantly to a subjective method of evaluating each study's results that allows confirmation biases on the part of study authors and meta-analysts to substantially distort the objective pattern of results. A seventh meta-analysis, by Armstrong and colleagues, reports significant bilingual advantages of g = 0.48 for 32 samples using Simon and Stroop colour-word interference tasks that tested older adults. However, all effects were entered into the funnel plots as positive even though many were negative (bilingual disadvantages). This and other striking anomalies are consistent with the view that confirmation bias can suspend critical judgement and promulgate errors. Meta-analyses that use preregistration and a many-labs collaboration can better control for both publication and experimenter biases.
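For readers unfamiliar with the correction mentioned above, here is a minimal sketch of the conditional PET-PEESE estimator: a weighted regression of effect sizes on standard errors (PET), switching to a regression on sampling variances (PEESE) when the PET intercept is clearly positive. The data, the switching threshold, and the helper function are illustrative assumptions, not the procedure used in any of the cited meta-analyses.

```python
import numpy as np

def wls_intercept(x, y, w):
    """Weighted least squares of y on x; returns (intercept, SE of intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)
    resid = y - X @ beta
    s2 = (resid @ W @ resid) / (len(y) - 2)
    cov = s2 * np.linalg.inv(XtWX)
    return beta[0], np.sqrt(cov[0, 0])

def pet_peese(d, se):
    """Conditional PET-PEESE estimate of the bias-corrected mean effect."""
    w = 1.0 / se**2
    b0_pet, se_pet = wls_intercept(se, d, w)
    # Conditional rule: if the PET intercept is clearly positive, use PEESE.
    if b0_pet / se_pet > 1.645:  # one-sided z test at alpha = .05
        b0, _ = wls_intercept(se**2, d, w)
        return b0
    return b0_pet

# Invented data with a small-study effect: observed d grows with SE,
# while the true effect is zero.
rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.4, 40)
d = 0.0 + 1.2 * se + rng.normal(0, se)
print(round(float(pet_peese(d, se)), 3))
```

The intercept estimates the effect a hypothetical infinitely precise (SE = 0) study would show, which is why a funnel asymmetry driven by small studies deflates it toward zero.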


2020 ◽  
Vol 228 (1) ◽  
pp. 43-49 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this relevant information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display with color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant, but low-powered, studies might be seen as less credible and as more likely to be affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display to critically examine and to present study-level power in the context of meta-analysis.
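The power axis of such a display rests on a simple calculation: given an assumed true effect and a study's standard error, the two-sided power of a z-test follows from the normal CDF. A minimal, dependency-free sketch (not the authors' R function), with an invented true effect of 0.3:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def study_power(se, theta):
    """Two-sided power of a z-test at alpha = .05 to detect a true
    effect theta, given a study's standard error se."""
    zcrit = 1.959964  # two-sided 5% critical value of the standard normal
    z = theta / se
    return (1.0 - phi(zcrit - z)) + phi(-zcrit - z)

# Hypothetical primary-study SEs; a sunset plot would colour-code
# the funnel regions implied by these power values.
for se in (0.05, 0.10, 0.20, 0.35):
    print(se, round(study_power(se, 0.3), 3))
```

When theta is set to zero, the function returns the nominal alpha of .05, which is a convenient sanity check on the implementation.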


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI, therefore, provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes regardless of the field of study.
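The prediction interval shown in an orchard plot can be computed from standard random-effects output. The sketch below uses the DerSimonian-Laird tau-squared estimator and a hard-coded t critical value purely for illustration; the orchard package's own defaults may differ, and the data are invented.

```python
import math

def dersimonian_laird(yi, vi):
    """Random-effects mean, its SE, and tau^2 via DerSimonian-Laird."""
    w = [1.0 / v for v in vi]
    ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, yi))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)
    wstar = [1.0 / (v + tau2) for v in vi]
    mu = sum(wi * y for wi, y in zip(wstar, yi)) / sum(wstar)
    se_mu = math.sqrt(1.0 / sum(wstar))
    return mu, se_mu, tau2

def prediction_interval(mu, se_mu, tau2, t_crit):
    """95% PI: the range in which a single new study's effect is
    expected to fall, combining tau^2 with the mean's uncertainty."""
    half = t_crit * math.sqrt(tau2 + se_mu ** 2)
    return mu - half, mu + half

# Invented five-study example; t_crit is the 97.5th percentile of t(3),
# following the common k-2 degrees-of-freedom convention.
yi = [0.1, 0.4, 0.3, 0.6, 0.2]
vi = [0.02, 0.03, 0.02, 0.05, 0.04]
mu, se_mu, tau2 = dersimonian_laird(yi, vi)
print(prediction_interval(mu, se_mu, tau2, 3.182))
```

The PI is always at least as wide as the CI; under high heterogeneity it is much wider, which is exactly the feature the orchard plot is designed to make visible.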


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2021 ◽  
Vol 5 (1) ◽  
pp. e100135
Author(s):  
Xue Ying Zhang ◽  
Jan Vollert ◽  
Emily S Sena ◽  
Andrew SC Rice ◽  
Nadia Soliman

Objective: Thigmotaxis is an innate predator-avoidance behaviour of rodents and is enhanced when animals are under stress. It is characterised by a rodent's preference to seek shelter rather than expose itself to an aversive open area. The behaviour has been proposed as a measurable construct that can address the impact of pain on rodent behaviour. This systematic review will assess whether thigmotaxis can be influenced by experimental persistent pain and attenuated by pharmacological interventions in rodents. Search strategy: We will search three electronic databases to identify studies in which thigmotaxis was used as an outcome measure in a rodent model associated with persistent pain. All studies published up to the date of the search will be considered. Screening and annotation: Two independent reviewers will screen studies based on (1) titles and abstracts and (2) full texts. Data management and reporting: For meta-analysis, we will extract thigmotactic behavioural data and calculate effect sizes. Effect sizes will be combined using a random-effects model. We will assess heterogeneity and identify its sources. A risk-of-bias assessment will be conducted to evaluate study quality. Publication bias will be assessed using funnel plots, Egger's regression, and trim-and-fill analysis. We will also extract stimulus-evoked limb-withdrawal data to assess its correlation with thigmotaxis in the same animals. The evidence obtained will provide a comprehensive understanding of the strengths and limitations of the thigmotactic outcome measure in animal pain research, so that future experimental designs can be optimised. We will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines and disseminate the review findings through publication and conference presentations.
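Of the planned publication-bias checks, Egger's regression is the easiest to sketch: regress the standardized effect (effect/SE) on precision (1/SE) and test whether the intercept departs from zero. A minimal, dependency-free version with invented data (not the review's pre-specified analysis code):

```python
import math

def egger_test(yi, se):
    """Egger's regression: standardized effect (y/se) on precision (1/se).
    An intercept far from zero suggests funnel-plot asymmetry.
    Returns the intercept and its standard error."""
    z = [y / s for y, s in zip(yi, se)]
    prec = [1.0 / s for s in se]
    n = len(z)
    xbar = sum(prec) / n
    ybar = sum(z) / n
    sxx = sum((x - xbar) ** 2 for x in prec)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(prec, z)) / sxx
    intercept = ybar - slope * xbar
    resid = [y - (intercept + slope * x) for x, y in zip(prec, z)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return intercept, se_int

# Invented asymmetric funnel: smaller studies (larger SEs) show larger effects.
yi = [0.8, 0.5, 0.4, 0.3, 0.2]
se = [0.40, 0.30, 0.25, 0.20, 0.10]
b, s = egger_test(yi, se)
print(round(b, 3), round(s, 3))
```

The intercept's t statistic (intercept divided by its SE, on n - 2 degrees of freedom) is what is usually reported alongside the funnel plot.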


2016 ◽  
Vol 26 (4) ◽  
pp. 364-368 ◽  
Author(s):  
P. Cuijpers ◽  
E. Weitz ◽  
I. A. Cristea ◽  
J. Twisk

Aims: The standardised mean difference (SMD) is one of the most widely used effect sizes for indicating the effects of treatments. It indicates the difference between a treatment and a comparison group after treatment has ended, in terms of standard deviations. Some meta-analyses, including several highly cited and influential ones, use the pre-post SMD, indicating the difference between baseline and post-test within one (treatment) group. Methods: In this paper, we argue that pre-post SMDs should be avoided in meta-analyses, and we describe why pre-post SMDs can result in biased outcomes. Results: One important reason to avoid pre-post SMDs is that the baseline and post-test scores are not independent of each other. The correlation between them should be used in the calculation of the SMD, but this value is typically not known. We used data from an individual patient data meta-analysis of trials comparing cognitive behaviour therapy with antidepressant medication to show that this problem can lead to considerable errors in the estimation of SMDs. Another, even more important, reason to avoid pre-post SMDs in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these cannot be discerned from the effects of the intervention. Between-group SMDs are much better because they control for such variables; these variables only affect the between-group SMD when they are related to the effects of the intervention. Conclusions: We conclude that pre-post SMDs should be avoided in meta-analyses, as using them probably results in biased outcomes.
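The dependence on the unknown correlation can be made concrete with a commonly used approximation for the sampling variance of a pre-post SMD, roughly 2(1 - r)/n + d^2/(2n). The sketch below shows how the resulting SE shifts as the assumed correlation r varies. The formula is an approximation from the methods literature, not taken from this paper, and the numbers are invented.

```python
import math

def prepost_smd_se(d, n, r):
    """Approximate SE of a pre-post SMD for n paired observations
    with pre-post correlation r (a standard approximation)."""
    var = 2.0 * (1.0 - r) / n + d ** 2 / (2.0 * n)
    return math.sqrt(var)

# Same observed change (d = 0.5, n = 40), three assumed correlations:
# the meta-analytic weight of this study changes with an unverifiable guess.
d, n = 0.5, 40
for r in (0.2, 0.5, 0.8):
    print(r, round(prepost_smd_se(d, n, r), 3))
```

Because the SE shrinks as r grows, an optimistic correlation assumption silently gives a pre-post study more weight in the pooled estimate, which is one concrete route to the bias the authors describe.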


2012 ◽  
Vol 9 (5) ◽  
pp. 610-620 ◽  
Author(s):  
Thomas A Trikalinos ◽  
Ingram Olkin

Background Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points. Purpose To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects. Methods We present both fixed and random effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, where in each trial survival is assessed at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article. Results We discuss the estimation of fixed and random effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although results of univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses where one compares treatment effects at later time points given observed data at earlier time points. 
Limitations Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions. Conclusions Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
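The fixed-effect variant of such a joint analysis reduces to generalized least squares: weight each study's vector of time-point effects by the inverse of its within-study covariance matrix and pool. A minimal numpy sketch with invented two-time-point data; computing the covariance entries from primary-study data, as the paper's formulas do, is omitted here.

```python
import numpy as np

def multivariate_fixed_effect(effects, covs):
    """GLS fixed-effect pooling of per-study effect vectors (one entry
    per time point) with known within-study covariance matrices.
    Returns the pooled effect vector and its covariance matrix."""
    wsum = np.zeros_like(covs[0])
    wy = np.zeros(covs[0].shape[0])
    for y, S in zip(effects, covs):
        W = np.linalg.inv(S)  # weight = inverse within-study covariance
        wsum += W
        wy += W @ y
    pooled_cov = np.linalg.inv(wsum)
    return pooled_cov @ wy, pooled_cov

# Invented example: two studies, effects at two time points, with
# positive within-study correlation between the time points.
effects = [np.array([-0.20, -0.35]), np.array([-0.10, -0.25])]
covs = [np.array([[0.04, 0.02], [0.02, 0.05]]),
        np.array([[0.03, 0.01], [0.01, 0.04]])]
pooled, C = multivariate_fixed_effect(effects, covs)
print(pooled.round(3))
```

With diagonal covariance matrices this collapses to separate univariate inverse-variance pooling at each time point; the off-diagonal terms are what the univariate-per-time-point approach ignores.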


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, yet yield different or even discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to interpret the results of multiple meta-analyses properly, especially when they conflict. In this paper, we first introduce a method to synthesize meta-analytic results when multiple meta-analyses use the same type of summary effect estimate. When meta-analyses use different types of effect sizes, their results cannot be combined directly. We propose a two-step frequentist procedure that first converts the effect size estimates to the same metric and then summarizes them with a weighted mean estimate. Our proposed method offers several advantages over the existing methods of Hemming et al. (2012). First, different types of summary effect sizes are accommodated. Second, our method provides the same overall effect size as conducting a single meta-analysis of all individual studies from the multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
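The two-step idea can be sketched with the standard Hasselblad-Hedges conversion between a log odds ratio and an SMD, followed by an inverse-variance weighted mean. The conversion choice and the numbers are illustrative assumptions; the paper's procedure may use different conversions and weights.

```python
import math

def logor_to_smd(log_or, var_log_or):
    """Hasselblad-Hedges conversion: d = ln(OR) * sqrt(3)/pi,
    with variance scaled by 3/pi^2."""
    factor = math.sqrt(3.0) / math.pi
    return log_or * factor, var_log_or * 3.0 / math.pi ** 2

def inverse_variance_pool(effects, variances):
    """Fixed-effect weighted mean of already-harmonized effect sizes."""
    w = [1.0 / v for v in variances]
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

# Invented summary results: meta-analysis 1 reports an SMD, meta-analysis 2
# a log odds ratio. Step 1 harmonizes the metric; step 2 pools.
d1, v1 = 0.30, 0.01
d2, v2 = logor_to_smd(0.60, 0.04)
print(inverse_variance_pool([d1, d2], [v1, v2]))
```

Pooling summary estimates this way weights each meta-analysis by its precision, which is what makes the result match a single meta-analysis of all underlying studies when the summaries are themselves inverse-variance means.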


2018 ◽  
Vol 21 (1) ◽  
pp. 206-224 ◽  
Author(s):  
Naixue Cui ◽  
Jianghong Liu

The relationships between three types of child maltreatment (physical abuse, emotional abuse, and neglect) and childhood behavior problems in Mainland China have not been systematically examined. This meta-analysis reviewed findings from 42 studies of 98,749 children in Mainland China and analyzed the pooled effect sizes of the associations between child maltreatment and childhood behavior problems, the heterogeneity in study findings, and publication bias. In addition, this study explored cross-study similarities and differences by comparing the pooled estimates with findings from five existing meta-analyses. Equivalent small-to-moderate effect sizes emerged for the relationships between the three types of maltreatment and child externalizing and internalizing behaviors, except that emotional abuse was more strongly related to internalizing than to externalizing behaviors. Considerable heterogeneity exists among the 42 studies. Weak evidence suggests that child gender and the reporter of emotional abuse may moderate the strength of the relationships between child maltreatment and behavior problems. No indication of publication bias emerged. Cross-study comparisons show that the pooled effect sizes in this meta-analysis are about equal to those reported in the five meta-analyses conducted in child and adult populations across the world. These findings urge relevant agencies in Mainland China to build an effective child protection system to prevent child maltreatment.

