Metacognitive training for schizophrenia spectrum patients: a meta-analysis on outcome studies

2015 ◽  
Vol 46 (1) ◽  
pp. 47-57 ◽  
Author(s):  
B. van Oosterhout ◽  
F. Smit ◽  
L. Krabbendam ◽  
S. Castelein ◽  
A. B. P. Staring ◽  
...  

Background. Metacognitive training (MCT) for schizophrenia spectrum patients is widely implemented. It is timely to systematically review the literature and to conduct a meta-analysis. Method. Eligible studies were selected from several sources (databases and expert suggestions). Criteria included comparative studies with an MCT condition measuring positive symptoms and/or delusions and/or data-gathering bias. Three meta-analyses were conducted on data gathering (three studies; 219 participants), delusions (seven studies; 500 participants) and positive symptoms (nine studies; 436 participants). Hedges' g is reported as the effect size of interest. Statistical power was sufficient to detect small to moderate effects. Results. All analyses yielded small non-significant effect sizes (0.26 for positive symptoms; 0.22 for delusions; 0.31 for data-gathering bias). Corrections for publication bias further reduced the effect sizes, to 0.21 for positive symptoms and to 0.03 for delusions. In blinded studies, the corrected effect sizes were 0.22 for positive symptoms and 0.03 for delusions. In studies using proper intention-to-treat statistics, the effect sizes were 0.10 for positive symptoms and −0.02 for delusions. The moderate to high heterogeneity in most analyses suggests that processes other than MCT alone have an impact on the results. Conclusions. The studies so far do not support a positive effect of MCT on positive symptoms, delusions or data gathering. The methodology of most studies was poor, and sensitivity analyses to control for methodological flaws reduced the effect sizes considerably. More rigorous research would be helpful in order to create enough statistical power to detect small effect sizes and to reduce heterogeneity. Limitations and strengths are discussed.
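
As a rough illustration of the quantities this abstract reports (a per-study Hedges' g, a random-effects pool, and heterogeneity), here is a minimal Python sketch with invented inputs; it is not the authors' analysis code.

```python
# Minimal sketch: Hedges' g for one trial, then a DerSimonian-Laird
# random-effects pool. All inputs are hypothetical, not data from the review.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference and its variance."""
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)          # small-sample correction
    g = j * d
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, v

def random_effects_pool(g, v):
    """DerSimonian-Laird pooled effect with 95% CI and I^2."""
    g, v = np.asarray(g), np.asarray(v)
    w = 1 / v
    fixed = np.sum(w * g) / np.sum(w)            # fixed-effect estimate
    q = np.sum(w * (g - fixed) ** 2)             # Cochran's Q
    df = len(g) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)
    mu = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return mu, (mu - 1.96 * se, mu + 1.96 * se), i2

g1, v1 = hedges_g(m1=20.5, m2=18.2, sd1=6.0, sd2=6.4, n1=40, n2=42)
mu, ci, i2 = random_effects_pool([g1, 0.10, 0.35], [v1, 0.05, 0.04])
print(f"pooled g = {mu:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {i2:.0%}")
```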

2004 ◽  
Vol 43 (05) ◽  
pp. 470-474 ◽  
Author(s):  
N. Victor ◽  
S. Witte

Summary. Objectives: Noninferiority trials have become commonplace in recent years. Like individual clinical trials, meta-analyses can also investigate noninferiority; however, several issues specific to this setting must be considered. Methods: The methods proposed in this paper have their origin in the frameworks of noninferiority trials and of meta-analysis, so the paper can be seen as a combination of both fields. Two issues are highlighted: difficulties in the choice of delta for a noninferiority meta-analysis, which can lead to different deltas, and methods for meta-analyses with different analysis sets, based either on the full analysis set under the intention-to-treat principle or on the per-protocol population. Analytical methods, sensitivity analyses, meta-regression, and a bivariate method are introduced, and graphical presentations are proposed to support the analytical results. Conclusion: The confidence-interval approach, using meta-regression or bivariate methods, is appropriate for meta-analyses investigating noninferiority with both analysis sets.
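
To make the confidence-interval approach concrete, here is a minimal sketch, with an invented margin delta and invented trial effects, of checking noninferiority on both analysis sets. It only sketches the basic idea, not the paper's meta-regression or bivariate methods.

```python
# Sketch of CI-based noninferiority at the meta-analysis level: pool the
# treatment difference, then check whether the lower CI bound stays above
# -delta. Effects, variances, and delta are hypothetical.
import numpy as np

def noninferior(effects, variances, delta, z=1.96):
    """Fixed-effect pool; noninferior if lower CI bound > -delta."""
    w = 1 / np.asarray(variances)
    pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
    lower = pooled - z * np.sqrt(1 / np.sum(w))
    return pooled, lower, lower > -delta

# Run the same check on both analysis sets, as the paper recommends:
for name, (es, vs) in {
    "full analysis set": ([0.02, -0.05, 0.01], [0.010, 0.020, 0.015]),
    "per-protocol set":  ([0.04, -0.02, 0.03], [0.012, 0.022, 0.016]),
}.items():
    pooled, lower, ok = noninferior(es, vs, delta=0.15)
    print(f"{name}: pooled = {pooled:.3f}, lower CI = {lower:.3f}, "
          f"noninferior = {ok}")
```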


Author(s):  
Yayouk E. Willems ◽  
Jian-bin Li ◽  
Anne M. Hendriks ◽  
Meike Bartels ◽  
Catrin Finkenauer

Theoretical studies propose an association between family violence and low self-control in adolescence, yet empirical findings on this association are inconclusive. The aim of the present research was to systematically summarize available findings on the relation between family violence and self-control across adolescence. We included 27 studies with 143 effect sizes, representing more than 25,000 participants from eight countries, spanning early to late adolescence. Applying a multi-level meta-analysis, which takes dependency between effect sizes into account while retaining statistical power, we examined the magnitude and direction of the overall effect size. Additionally, we investigated whether theoretical moderators (e.g., age, gender, country) and methodological moderators (cross-sectional/longitudinal design, informant) influenced the magnitude of the association between family violence and self-control. Our results revealed that family violence and self-control have a small to moderate, significant negative association (r = -.191). This association did not vary across gender, country, or informant. The strength of the association, however, decreased with age and in longitudinal studies. This finding provides evidence that researchers and clinicians may expect low self-control in the wake of family violence, especially in early adolescence. Recommendations for future research in the area are discussed.
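
For illustration, here is a minimal sketch of pooling correlations on Fisher's z scale, the usual machinery behind a summary r like the one reported above. Note that this simple fixed-effect version ignores the dependency between effect sizes that the paper's multi-level model handles; all inputs are invented.

```python
# Sketch: Fisher-z pooling of correlations with back-transformation to r.
# Hypothetical correlations and sample sizes, not the meta-analysis data.
import numpy as np

def pool_correlations(rs, ns):
    """Fixed-effect pool of Fisher-z transformed correlations."""
    z = np.arctanh(np.asarray(rs, dtype=float))   # Fisher's z transform
    w = np.asarray(ns) - 3                        # 1/variance of Fisher's z
    z_pooled = np.sum(w * z) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    lo, hi = z_pooled - 1.96 * se, z_pooled + 1.96 * se
    return np.tanh([z_pooled, lo, hi])            # back-transform to r

r, lo, hi = pool_correlations([-0.15, -0.25, -0.19], [300, 450, 500])
print(f"pooled r = {r:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```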


1998 ◽  
Vol 172 (3) ◽  
pp. 227-231 ◽  
Author(s):  
Joanna Moncrieff ◽  
Simon Wessely ◽  
Rebecca Hardy

Background. Unblinding effects may introduce bias into clinical trials. The use of active placebos to mimic the side-effects of medication may therefore produce more rigorous evidence on the efficacy of antidepressants. Method. Trials comparing antidepressants with active placebos were located. A standard measure of effect was calculated for each trial and weighted pooled estimates obtained. Heterogeneity was examined and sensitivity analyses performed. A subgroup analysis of in-patient and out-patient trials was conducted. Results. Only two of the nine studies examined produced effect sizes showing a consistent significant difference in favour of the active drug. Combining all studies produced pooled effect size estimates of between 0.41 (0.27–0.56) and 0.46 (0.31–0.60), with high heterogeneity due to one strongly positive trial. Sensitivity analyses excluding this and one other trial reduced the pooled effect to between 0.21 (0.03–0.38) and 0.27 (0.10–0.45). Conclusions. Meta-analysis is very sensitive to decisions about exclusions. Previous general meta-analyses have found combined effect sizes in the range 0.4–0.8. The more conservative estimates produced here suggest that unblinding effects may inflate the apparent efficacy of antidepressants in trials using inert placebos.
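
The sensitivity of a pooled result to exclusions can be illustrated with a leave-one-out loop. The effect sizes below are invented stand-ins, with one strongly positive trial; this is not the authors' code.

```python
# Sketch of a leave-one-out sensitivity analysis: recompute the pooled
# effect with each trial excluded in turn, to see how much a single
# strongly positive trial drives the result. All values are hypothetical.
import numpy as np

def fixed_pool(es, vs):
    """Inverse-variance weighted pooled effect."""
    w = 1 / vs
    return np.sum(w * es) / np.sum(w)

es = np.array([0.20, 0.15, 1.10, 0.25, 0.30])   # hypothetical trial effects
vs = np.array([0.04, 0.05, 0.03, 0.06, 0.05])   # hypothetical variances

print(f"all trials: pooled = {fixed_pool(es, vs):.2f}")
for i in range(len(es)):
    keep = np.arange(len(es)) != i
    print(f"without trial {i + 1}: pooled = {fixed_pool(es[keep], vs[keep]):.2f}")
```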


2018 ◽  
Vol 11 (10) ◽  
pp. 42 ◽  
Author(s):  
Yujin Lee ◽  
Mary M. Capraro ◽  
Robert M. Capraro ◽  
Ali Bicer

Although algebraic reasoning has been considered an important factor influencing students’ mathematical performance, many students struggle to build concrete algebraic reasoning. Metacognitive training has been regarded as one effective method for developing students’ algebraic reasoning; however, no published meta-analyses have examined the effects of metacognitive training on students’ algebraic reasoning. Therefore, the purpose of this meta-analysis was to examine the impact of metacognitive training on students’ algebraic reasoning. Eighteen studies with 22 effect sizes were selected for inclusion. In the course of the analysis, one study was identified as an outlier, so the meta-analysis was recomputed without it to obtain more robust results. The findings indicated that the overall effect size without the outlier was d = 0.973 with SE = 0.196; Q = 20.201 (p < .05) and I² = 0.997 indicated heterogeneity across the studies. These results showed that metacognitive training had a statistically significant positive impact on students’ algebraic reasoning.
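
As a sketch of the outlier-handling step, one common approach (assumed here for illustration, not necessarily the authors') flags studies with large standardized residuals from the pooled estimate and re-pools without them; all numbers are hypothetical.

```python
# Sketch: flag an outlying effect size by its standardized residual from
# the fixed-effect pooled estimate, then re-pool without it. Invented data.
import numpy as np

def pool_and_q(es, vs):
    """Fixed-effect pooled estimate, Cochran's Q, and I^2."""
    w = 1 / vs
    pooled = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - pooled) ** 2)
    i2 = max(0.0, (q - (len(es) - 1)) / q) if q > 0 else 0.0
    return pooled, q, i2

es = np.array([0.9, 1.0, 1.1, 3.5, 0.8])        # hypothetical d values
vs = np.array([0.10, 0.12, 0.09, 0.11, 0.10])   # hypothetical variances

pooled, q, i2 = pool_and_q(es, vs)
resid = (es - pooled) / np.sqrt(vs)
outliers = np.abs(resid) > 3                     # a common rough cutoff
print(f"with all studies:   d = {pooled:.3f}, Q = {q:.1f}, I2 = {i2:.1%}")
p2, q2, i22 = pool_and_q(es[~outliers], vs[~outliers])
print(f"outlier(s) removed: d = {p2:.3f}, Q = {q2:.1f}, I2 = {i22:.1%}")
```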


2021 ◽  
Vol 30 ◽  
Author(s):  
Pim Cuijpers ◽  
Jason W. Griffin ◽  
Toshi A. Furukawa

Abstract. One of the most widely used methods for examining sources of heterogeneity in meta-analyses is the so-called ‘subgroup analysis’. In a subgroup analysis, the included studies are divided into two or more subgroups, and it is tested whether the pooled effect sizes found in these subgroups differ significantly from each other. Subgroup analyses can be considered a core component of most published meta-analyses. One important problem of subgroup analyses is the lack of statistical power to find significant differences between subgroups. In this paper, we explore the power problems of subgroup analyses in more detail, using ‘metapower’, a recently developed R package for examining power in meta-analyses, including subgroup analyses. We show that subgroup analyses require many more included studies than the main analyses do. We work out an example of an ‘average’ meta-analysis, in which a subgroup analysis requires 3–4 times the number of studies needed for the main analysis to have sufficient power. This number of studies increases exponentially with decreasing effect sizes and when the studies are not evenly divided over the subgroups. Higher heterogeneity also requires increasing numbers of studies. We conclude that subgroup analyses remain an important method for examining potential sources of heterogeneity in meta-analyses, but meta-analysts should keep in mind that power is very low for most subgroup analyses. As in any statistical evaluation, researchers should not rely on a test and p-value alone to interpret results, but should compare the confidence intervals and interpret results carefully.
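
The power problem can be reproduced with a small simulation. The following is a rough Monte Carlo sketch under invented design numbers; it is not the analytic calculation that ‘metapower’ performs.

```python
# Sketch: Monte Carlo power of a two-subgroup comparison under a
# random-effects model. All design numbers (k, n, effects, tau) are
# hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(1)

def subgroup_power(k_per_group, n_per_arm, mu_a, mu_b, tau, sims=2000):
    """P(z-test comparing two pooled subgroup effects rejects at alpha=.05)."""
    # Approximate sampling variance of d for equal arms, using the mean d:
    v = 2 / n_per_arm + (mu_a + mu_b) ** 2 / (16 * n_per_arm)
    hits = 0
    for _ in range(sims):
        means = []
        for mu in (mu_a, mu_b):
            theta = rng.normal(mu, tau, k_per_group)  # true study effects
            y = rng.normal(theta, np.sqrt(v))         # observed study effects
            # equal within-study variances, so the pooled estimate is the mean
            means.append((y.mean(), (v + tau**2) / k_per_group))
        (ma, va), (mb, vb) = means
        hits += abs(ma - mb) / np.sqrt(va + vb) > 1.96
    return hits / sims

for k in (5, 10, 20, 40):
    p = subgroup_power(k, n_per_arm=50, mu_a=0.2, mu_b=0.5, tau=0.2)
    print(f"k = {k} studies per subgroup: power ~ {p:.2f}")
```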


2020 ◽  
Author(s):  
Daniel S Quintana

The neuropeptide oxytocin has attracted substantial research interest for its role in behaviour and cognition; however, the evidence for its effects has been mixed. Meta-analysis is viewed as the gold standard for synthesizing evidence, but the evidential value of a meta-analysis depends on the evidential value of the studies it synthesizes and on the analytical approaches used to derive conclusions. To assess the evidential value of oxytocin administration meta-analyses, this study calculated the statistical power of 107 studies from 35 meta-analyses and assessed the statistical equivalence of reported results. The mean statistical power across all studies was 12.2%, and there has been no noticeable improvement in power over an eight-year period. None of the 26 non-significant meta-analyses were statistically equivalent to zero, assuming a smallest effect size of interest of 0.1. Altogether, most oxytocin treatment study designs are statistically underpowered to either detect or reject a wide range of effect sizes that scholars may find worthwhile.
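
Both ingredients of this analysis, per-study power and equivalence testing, follow from standard normal-approximation formulas. A minimal sketch with invented inputs (the sample size, estimate, and SE are hypothetical):

```python
# Sketch: (1) two-tailed power of a single two-sample study for effect d;
# (2) a TOST equivalence test of an estimate against bounds of +/-0.1.
import numpy as np
from scipy.stats import norm

def study_power(d, n_per_arm, alpha=0.05):
    """Approximate two-tailed power of a two-sample test for effect size d."""
    se = np.sqrt(2 / n_per_arm)           # approx SE of d, equal arms
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(d / se - z) + norm.cdf(-d / se - z)

def tost_equivalent(est, se, bound=0.1, alpha=0.05):
    """Statistically equivalent if both one-sided tests reject."""
    z_crit = norm.ppf(1 - alpha)
    z_lo = (est + bound) / se             # H0: effect <= -bound
    z_hi = (bound - est) / se             # H0: effect >= +bound
    return min(z_lo, z_hi) > z_crit

print(f"power, d = 0.2, n = 25/arm: {study_power(0.2, 25):.1%}")
print(f"equivalent to zero? {tost_equivalent(est=0.03, se=0.05)}")
```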


2020 ◽  
Vol 228 (1) ◽  
pp. 43-49 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this relevant information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display, with color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant but low-powered studies might be seen as less credible and as more likely to be affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display for critically examining and presenting study-level power in the context of meta-analysis.
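
The power axis of such a plot rests on a standard calculation: the two-tailed power of each study, given its standard error, to detect an assumed underlying true effect. A minimal sketch with hypothetical inputs (the true effect, SEs, and the two-band coloring are invented simplifications of the plot's color regions):

```python
# Sketch: per-study power against a fixed assumed true effect, the quantity
# a sunset funnel plot encodes with color-coded regions. Invented inputs.
import numpy as np
from scipy.stats import norm

def two_tailed_power(theta, se, alpha=0.05):
    """Power of a two-tailed z-test when the true effect is theta."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(theta / se - z) + norm.cdf(-theta / se - z)

theta = 0.3                                    # assumed underlying effect
ses = np.array([0.25, 0.15, 0.10, 0.05])       # per-study standard errors
for se, p in zip(ses, two_tailed_power(theta, ses)):
    band = "low" if p < 0.5 else "adequate"    # crude two-band coloring
    print(f"SE = {se:.2f}: power = {p:.1%} ({band})")
```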


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution and found that only 11% used the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall, and therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to spot any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data with forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes, regardless of the field of study.
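
The PI an orchard plot adds on top of the usual CI follows the standard t-based formula, mu ± t_{k-2} × sqrt(tau² + SE(mu)²). A minimal sketch with invented inputs shows how much wider the PI is than the CI when heterogeneity is high:

```python
# Sketch: 95% prediction interval for a random-effects meta-analysis,
# contrasted with the 95% CI of the mean. All inputs are hypothetical.
import numpy as np
from scipy.stats import t

def prediction_interval(mu, se_mu, tau2, k):
    """Range in which a new study's true effect is expected to fall."""
    half = t.ppf(0.975, df=k - 2) * np.sqrt(tau2 + se_mu**2)
    return mu - half, mu + half

mu, se_mu, tau2, k = 0.25, 0.05, 0.09, 120     # invented meta-analysis summary
ci = (mu - 1.96 * se_mu, mu + 1.96 * se_mu)
pi = prediction_interval(mu, se_mu, tau2, k)
print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")   # precision of the mean
print(f"95% PI: ({pi[0]:.2f}, {pi[1]:.2f})")   # spread of true effects
```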


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.
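
As a toy illustration of the comparison described above (invented numbers, not the paper's data): the overestimation factor is the ratio of adjusted meta-analytic estimates to replication estimates, and a 'false positive' here is an adjusted meta-analysis whose replication is consistent with zero.

```python
# Sketch of the meta-analysis vs. replication comparison. All values are
# invented stand-ins; this crude version assumes each adjusted
# meta-analysis was itself statistically significant.
import numpy as np

meta_adj = np.array([0.40, 0.25, 0.30, 0.18])   # adjusted meta estimates
repl     = np.array([0.18, 0.02, 0.15, 0.01])   # replication estimates
repl_se  = np.array([0.03, 0.03, 0.04, 0.02])   # replication standard errors

nonzero = np.abs(repl) / repl_se > 1.96          # replication detects effect?
ratio = meta_adj[nonzero] / repl[nonzero]        # overestimation factor
print(f"median overestimation factor: {np.median(ratio):.1f}x")
print(f"false positives: {np.sum(~nonzero)} of {len(repl)} adjusted "
      f"meta-analyses with a null replication")
```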

