Pre-post effect sizes should be avoided in meta-analyses

2016 ◽  
Vol 26 (4) ◽  
pp. 364-368 ◽  
Author(s):  
P. Cuijpers ◽  
E. Weitz ◽  
I. A. Cristea ◽  
J. Twisk

Aims: The standardised mean difference (SMD) is one of the most widely used effect sizes for indicating the effects of treatments. It expresses the difference between a treatment and a comparison group after treatment has ended, in terms of standard deviations. Some meta-analyses, including several highly cited and influential ones, instead use the pre-post SMD, which expresses the difference between baseline and post-test within one (treatment) group. Methods: In this paper, we argue that these pre-post SMDs should be avoided in meta-analyses and describe why they can result in biased outcomes. Results: One important reason to avoid pre-post SMDs is that baseline and post-test scores are not independent of each other. The correlation between them should be used in the calculation of the SMD, but this value is typically not known. We used data from an ‘individual patient data’ meta-analysis of trials comparing cognitive behaviour therapy and anti-depressive medication to show that this problem can lead to considerable errors in the estimation of the SMDs. Another, even more important, reason to avoid pre-post SMDs in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these cannot be discerned from the effects of the intervention. Between-group SMDs are much better because they control for such variables, which only affect the between-group SMD when they are related to the effects of the intervention. Conclusions: We conclude that pre-post SMDs should be avoided in meta-analyses, as using them probably results in biased outcomes.
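The dependence problem can be made concrete with a small numerical sketch. The snippet below uses one common textbook formulation of the pre-post SMD (standardised by the baseline SD) and its sampling variance, in which the usually unknown pre-post correlation r enters directly; the summary statistics are made up for illustration and are not taken from the trials analysed in the paper.

```python
import numpy as np

def prepost_smd(mean_pre, mean_post, sd_pre, n, r):
    """Pre-post SMD standardised by the baseline SD, with a sampling
    variance that depends on the (usually unknown) pre-post correlation r.
    One common formulation; illustrative only."""
    d = (mean_post - mean_pre) / sd_pre
    var_d = 2 * (1 - r) / n + d**2 / (2 * n)
    return d, var_d

# Same summary statistics, different assumed correlations
for r in (0.2, 0.5, 0.8):
    d, v = prepost_smd(mean_pre=24.0, mean_post=16.0, sd_pre=8.0, n=50, r=r)
    print(f"r = {r:.1f}: d = {d:.2f}, SE = {np.sqrt(v):.3f}")
```

Here the point estimate is unchanged, but the standard error, and therefore the weight the study receives in a meta-analysis, shifts considerably with the assumed correlation; in formulations that standardise by the SD of change scores, r affects the point estimate as well.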

Author(s):  
Alistair M. Senior ◽  
Wolfgang Viechtbauer ◽  
Shinichi Nakagawa

Abstract: Meta-analyses are frequently used to quantify the difference in the average values of two groups (e.g., control and experimental treatment groups), but they can also be used to examine the difference in the variability (variance) of two groups. For such comparisons, two relatively new effect size statistics are useful: the log-transformed ‘variability ratio’ (the ratio of two standard deviations; lnVR) and the log-transformed ‘CV ratio’ (the ratio of two coefficients of variation; lnCVR). In practice, lnCVR may be of most use because a treatment may affect the mean and the variance simultaneously. We review current, and propose new, estimators for lnCVR and lnVR. We also present methods for use when the two groups are dependent (e.g., for cross-over and pre-test-post-test designs). A simulation study evaluated the performance of these estimators, and we make recommendations about which estimators should be used to minimise bias. We also present two worked examples that illustrate the importance of accounting for the dependence of the two groups. We found that the degree to which dependence is accounted for in the sampling variance estimates can affect heterogeneity parameters such as τ2 (the between-study variance) and I2 (the proportion of the total variability due to between-study variance), and even the overall effect, and in turn the qualitative interpretation. Meta-analytic comparison of the variability between two groups enables us to ask completely new questions and to gain fresh insights from existing datasets. We encourage researchers to take advantage of these convenient new effect size measures for the meta-analysis of variation.
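For the independent-groups case, the basic point estimators and sampling variances are compact enough to sketch directly. The snippet below follows commonly used forms of lnVR (with its small-sample bias correction) and a simple approximation for lnCVR that ignores mean-SD correlation terms; it is an illustrative sketch of the general idea, not the exact set of estimators compared in the paper, and the input numbers are hypothetical.

```python
import numpy as np

def ln_vr(sd1, n1, sd2, n2):
    """Log variability ratio (group 1 vs group 2) with the usual
    small-sample bias correction, plus its approximate sampling variance."""
    est = np.log(sd1 / sd2) + 1 / (2 * (n1 - 1)) - 1 / (2 * (n2 - 1))
    var = 1 / (2 * (n1 - 1)) + 1 / (2 * (n2 - 1))
    return est, var

def ln_cvr(mean1, sd1, n1, mean2, sd2, n2):
    """Log coefficient-of-variation ratio; a simple approximation that
    ignores mean-SD correlation terms (illustrative only)."""
    est = (np.log((sd1 / mean1) / (sd2 / mean2))
           + 1 / (2 * (n1 - 1)) - 1 / (2 * (n2 - 1)))
    var = (sd1**2 / (n1 * mean1**2) + 1 / (2 * (n1 - 1))
           + sd2**2 / (n2 * mean2**2) + 1 / (2 * (n2 - 1)))
    return est, var

# Hypothetical summary statistics for a treatment and a control group
print(ln_vr(sd1=4.2, n1=30, sd2=3.1, n2=28))
print(ln_cvr(mean1=12.0, sd1=4.2, n1=30, mean2=10.5, sd2=3.1, n2=28))
```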


2011 ◽  
Vol 26 (S2) ◽  
pp. 1475-1475 ◽  
Author(s):  
M. Pfammatter

A series of meta-analyses points to the benefits of cognitive behaviour therapy (CBT) in the treatment of psychosis. However, there are discrepancies in the controlled efficacy of CBT for psychosis depending on the targeted treatment goal or the control condition applied. This raises questions about its indication and therapeutic ingredients. The findings of all existing meta-analyses were integrated. Relevant meta-analyses were identified by searching electronic databases. In order to compare their findings, the reported effect sizes were transformed into a standard effect size measure. Moderator analyses were performed regarding different treatment goals and controls. Furthermore, therapeutic components were related to outcome by calculating weighted mean correlation effect sizes in order to identify essential therapeutic factors. The statistical significance of the effect sizes was determined by computing 95% confidence intervals. Homogeneity tests were applied to examine the consistency of the effects and component-outcome relations. The integration of meta-analytic findings demonstrates considerable differences in controlled efficacy: CBT for psychosis has long-term effects on persisting positive and negative symptoms, but no effect on acute positive symptoms and limited benefits as an early intervention. Moreover, the advantages compared with non-specific supportive therapies are moderate. Component-outcome relations indicate that cognitive restructuring and coping skills training represent key therapeutic factors. However, component control designs also point to the importance of the therapeutic alliance and motivational processes for therapeutic change. Thus, there is a need to promote analyses of the determinants of a helpful therapeutic relationship and of enhanced treatment motivation in people suffering from psychosis.
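The abstract does not spell out the pooling formulas, but "weighted mean correlation effect sizes" are conventionally obtained by Fisher-z transforming each correlation, weighting by inverse variance, and back-transforming the pooled value. The snippet below is a minimal fixed-effect sketch of that convention with hypothetical inputs; it is not necessarily the exact procedure used in the study.

```python
import numpy as np

def pooled_correlation(r_values, n_values):
    """Fixed-effect weighted mean correlation via Fisher's z transform.
    Conventional approach; illustrative, not the study's exact procedure."""
    r = np.asarray(r_values, dtype=float)
    n = np.asarray(n_values, dtype=float)
    z = np.arctanh(r)              # Fisher z transform
    w = n - 3                      # inverse of var(z) = 1 / (n - 3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1 / np.sqrt(np.sum(w))
    ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
    return np.tanh(z_bar), ci

# Hypothetical component-outcome correlations and sample sizes
r_pooled, ci = pooled_correlation([0.25, 0.40, 0.18], [60, 45, 80])
print(f"pooled r = {r_pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```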


2008 ◽  
Vol 25 (4) ◽  
pp. 215-228 ◽  
Author(s):  
Prudence Millear ◽  
Poppy Liossis ◽  
Ian M. Shochet ◽  
Herbert Biggs ◽  
Maria Donald

Abstract: There is an urgent need to find strategies to promote positive mental health in the workplace. The current study presents outcomes of a pilot trial of the Promoting Adult Resilience (PAR) program, an innovative mental health promotion program conducted in the workplace over 11 weekly sessions. The PAR program is a strengths-based resilience-building program that integrates interpersonal and cognitive–behaviour therapy (CBT) perspectives. Pre-, post- and follow-up measures for 20 PAR participants from a resource-sector company were compared with a matched non-intervention comparison group. At follow-up, the PAR group had maintained significant post-test improvements in coping self-efficacy and lower levels of stress and depression, and reported greater work-life fit than the comparison group. The program appeared to be ecologically valid, and treatment integrity was maintained. Process evaluations of the PAR program showed that its skills were rated highly and widely used in everyday life at both post-test and follow-up.


Author(s):  
Pim Cuijpers ◽  
Eirini Karyotaki ◽  
Marketa Ciharova ◽  
Clara Miguel ◽  
Hisashi Noma ◽  
...  

Abstract: Meta-analyses show that psychotherapies are effective in the treatment of depression in children and adolescents. However, these effects are usually reported as effect sizes. For patients and clinicians, it is important to know whether patients achieve a clinically significant improvement or deterioration. We conducted a meta-analysis examining response, clinically significant change, clinically significant deterioration and recovery as outcomes. We searched four bibliographic databases and included 40 randomised trials comparing psychotherapy for youth depression against control conditions. We used a validated method to estimate outcome rates, based on the means, standard deviations and N at baseline and post-test. We also calculated numbers-needed-to-treat (NNT). The overall response rate at 2 (±1) months after baseline was 39% (95% CI: 34–45) in psychotherapies and 24% (95% CI: 19–28) in control conditions (NNT = 6.2). The difference between therapy and control was still significant at 6–12 months after baseline (NNT = 7.8). Clinically significant improvement was found in 54% of youth in therapy, compared with 32% in control groups (NNT = 5.3); clinically significant deterioration was 6% in therapy and 13% in controls (NNT = 5.1); recovery was 58% in therapy and 36% in controls (NNT = 3.3). Smaller effects were found in studies with low risk of bias. Psychotherapies for depression in youth are effective compared with control conditions, but more than 60% of youth receiving therapy do not respond. More effective treatments and treatment strategies are clearly needed. Trial registration: https://osf.io/84xka
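The NNT values quoted are derived from the pooled meta-analytic model, but the underlying arithmetic is simply the reciprocal of the absolute risk difference between conditions. The sketch below illustrates that definition using the reported response rates; the result (about 6.7) differs slightly from the published 6.2 because the latter comes from the pooled estimates rather than the rounded percentages.

```python
def nnt(rate_treatment, rate_control):
    """Number needed to treat: reciprocal of the absolute risk difference."""
    return 1.0 / (rate_treatment - rate_control)

# Response rates reported at 2 (+/- 1) months after baseline
print(round(nnt(0.39, 0.24), 1))  # ~6.7; the pooled estimate in the paper is 6.2
```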


Author(s):  
Piers Steel ◽  
Sjoerd Beugelsdijk ◽  
Herman Aguinis

Abstract: Meta-analyses summarize a field’s research base and are therefore highly influential. Despite their value, the standards for an excellent meta-analysis, one that is potentially award-winning, have changed in the last decade. Each step of a meta-analysis is now more formalized, from the identification of relevant articles to coding, moderator analysis, and reporting of results. What was exemplary a decade ago can be somewhat dated today. Using the award-winning meta-analysis by Stahl et al. (Unraveling the effects of cultural diversity in teams: A meta-analysis of research on multicultural work groups. Journal of International Business Studies, 41(4):690–709, 2010) as an exemplar, we adopted a multi-disciplinary approach (e.g., management, psychology, health sciences) to summarize the anatomy (i.e., fundamental components) of a modern meta-analysis, focusing on: (1) data collection (i.e., literature search and screening, coding), (2) data preparation (i.e., treatment of multiple effect sizes, outlier identification and management, publication bias), (3) data analysis (i.e., average effect sizes, heterogeneity of effect sizes, moderator search), and (4) reporting (i.e., transparency and reproducibility, future research directions). In addition, we provide guidelines and a decision-making tree for when even foundational and highly cited meta-analyses should be updated. Based on the latest evidence, we summarize what journal editors and reviewers should expect, authors should provide, and readers (i.e., other researchers, practitioners, and policymakers) should consider about meta-analytic reviews.


2020 ◽  
pp. 1-16
Author(s):  
A. Little ◽  
Christopher Byrne ◽  
Rudi Coetzer

BACKGROUND: Anxiety is a common neuropsychological sequela following traumatic brain injury (TBI). Cognitive Behaviour Therapy (CBT) is a recommended, first-line intervention for anxiety disorders in the non-TBI clinical population; however, its effectiveness after TBI remains unclear and findings are inconsistent. OBJECTIVE: There are no current meta-analyses of controlled trials exploring the efficacy of CBT as an intervention for anxiety symptoms following TBI. The aim of the current study, therefore, was to systematically review and synthesize the evidence from controlled trials on the effectiveness of CBT for anxiety, specifically within the TBI population. METHOD: A systematic review of intervention studies utilising CBT and anxiety-related outcome measures in a TBI population was performed by searching three electronic databases (Web of Science, PubMed and PsycInfo). Studies were further evaluated for quality of evidence using Reichow’s (2011) quality appraisal tool. Baseline and outcome data were extracted from the 10 controlled trials that met the inclusion criteria, and effect sizes were calculated. RESULTS: A random-effects meta-analysis identified a small overall effect size (Cohen’s d = –0.26, 95% CI –0.41 to –0.11) of CBT interventions reducing anxiety symptoms following TBI. CONCLUSIONS: This meta-analysis tentatively supports the view that CBT interventions may be effective in reducing anxiety symptoms in some patients following TBI; however, the effect sizes are smaller than those reported for non-TBI clinical populations. Clinical implications and limitations of the current meta-analysis are discussed.
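The abstract reports a random-effects pooled Cohen’s d with a 95% confidence interval. A generic DerSimonian-Laird sketch of that kind of pooling is shown below with hypothetical per-study effect sizes and variances; it illustrates the class of method, not the authors’ exact analysis.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird tau^2 estimator.
    Generic sketch; not the review's exact analysis pipeline."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1 / v
    mu_fe = np.sum(w * y) / np.sum(w)                  # fixed-effect mean
    q = np.sum(w * (y - mu_fe) ** 2)                   # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
    w_re = 1 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return mu, mu - 1.96 * se, mu + 1.96 * se, tau2

# Hypothetical per-study Cohen's d values and sampling variances
d = [-0.35, -0.10, -0.42, -0.20, -0.28]
v = [0.040, 0.055, 0.062, 0.048, 0.070]
print(dersimonian_laird(d, v))
```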


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution and found that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes, regardless of the field of study.
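The 95% prediction interval that orchard plots add can be computed from the pooled mean, its standard error, and the between-study variance. The sketch below uses the common t-based formula with k - 2 degrees of freedom and illustrative inputs; it is a generic example rather than code from the orchard package.

```python
import numpy as np
from scipy import stats

def prediction_interval(mu, se_mu, tau2, k):
    """95% prediction interval for a new study's effect size,
    using the common t-based formula with k - 2 degrees of freedom."""
    t_crit = stats.t.ppf(0.975, df=k - 2)
    half_width = t_crit * np.sqrt(tau2 + se_mu ** 2)
    return mu - half_width, mu + half_width

# Illustrative meta-analytic summary: pooled mean, its SE, tau^2, number of studies
print(prediction_interval(mu=0.30, se_mu=0.06, tau2=0.09, k=40))
```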


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2021 ◽  
Vol 5 (1) ◽  
pp. e100135
Author(s):  
Xue Ying Zhang ◽  
Jan Vollert ◽  
Emily S Sena ◽  
Andrew SC Rice ◽  
Nadia Soliman

Objective: Thigmotaxis is an innate predator-avoidance behaviour of rodents and is enhanced when animals are under stress. It is characterised by the preference of a rodent to seek shelter rather than expose itself to the aversive open area. The behaviour has been proposed as a measurable construct that can address the impact of pain on rodent behaviour. This systematic review will assess whether thigmotaxis can be influenced by experimental persistent pain and attenuated by pharmacological interventions in rodents. Search strategy: We will search three electronic databases to identify studies in which thigmotaxis was used as an outcome measure in a rodent model associated with persistent pain. All studies published up to the date of the search will be considered. Screening and annotation: Two independent reviewers will screen studies based on (1) titles and abstracts, and (2) full texts. Data management and reporting: For the meta-analysis, we will extract thigmotactic behavioural data and calculate effect sizes. Effect sizes will be combined using a random-effects model. We will assess heterogeneity and identify its sources. A risk-of-bias assessment will be conducted to evaluate study quality. Publication bias will be assessed using funnel plots, Egger’s regression and trim-and-fill analysis. We will also extract stimulus-evoked limb withdrawal data to assess its correlation with thigmotaxis in the same animals. The evidence obtained will provide a comprehensive understanding of the strengths and limitations of using the thigmotactic outcome measure in animal pain research, so that future experimental designs can be optimised. We will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines and disseminate the review findings through publication and conference presentations.
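Of the planned publication-bias checks, Egger’s regression is the simplest to sketch: the standardized effect is regressed on precision, and an intercept that deviates from zero suggests funnel-plot asymmetry. The snippet below is a generic illustration with made-up effect sizes and standard errors, not part of the protocol itself.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, standard_errors):
    """Egger's regression test for funnel-plot asymmetry:
    regress the standardized effect on precision and inspect the intercept."""
    y = np.asarray(effects, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    standardized = y / se
    precision = 1 / se
    result = stats.linregress(precision, standardized)
    return result.intercept, result.intercept_stderr

# Hypothetical effect sizes and their standard errors
intercept, intercept_se = eggers_test(
    [0.6, 0.4, 0.5, 0.2, 0.1, 0.15], [0.30, 0.25, 0.28, 0.12, 0.08, 0.10])
print(f"intercept = {intercept:.2f} (SE {intercept_se:.2f})")
```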


2012 ◽  
Vol 9 (5) ◽  
pp. 610-620 ◽  
Author(s):  
Thomas A Trikalinos ◽  
Ingram Olkin

Background: Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but they are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points. Purpose: To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects. Methods: We present both fixed- and random-effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, where in each trial survival is assessed at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article. Results: We discuss the estimation of fixed- and random-effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although results of the univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses in which one compares treatment effects at later time points given observed data at earlier time points. Limitations: Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions. Conclusions: Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
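The key computational difference from separate univariate syntheses is that the joint analysis uses each study’s full within-study covariance matrix across time points. The snippet below is a stripped-down fixed-effect GLS sketch of that idea for two time points, with hypothetical log odds ratios and covariances; the paper’s random-effects models and covariance-structure options go well beyond this.

```python
import numpy as np

def multivariate_fixed_effect(effects, covariances):
    """Fixed-effect multivariate pooling: each study supplies a vector of
    time-point effects and their within-study covariance matrix.
    Stripped-down GLS sketch with hypothetical inputs."""
    weight_sum = np.zeros_like(covariances[0])
    weighted_effects = np.zeros_like(effects[0])
    for y, s in zip(effects, covariances):
        w = np.linalg.inv(s)          # within-study precision matrix
        weight_sum += w
        weighted_effects += w @ y
    pooled_cov = np.linalg.inv(weight_sum)
    pooled = pooled_cov @ weighted_effects
    return pooled, pooled_cov

# Two hypothetical studies: log odds ratios at 6 and 12 months, with covariances
y = [np.array([-0.40, -0.25]), np.array([-0.10, -0.05])]
S = [np.array([[0.04, 0.02], [0.02, 0.05]]),
     np.array([[0.03, 0.015], [0.015, 0.04]])]
print(multivariate_fixed_effect(y, S))
```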

