Power Approximations for Meta-Analysis of Dependent Effect Sizes

2022
Author(s): Mikkel Helding Vembye, James E Pustejovsky, Terri Pigott

Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power approximations for tests of average effect sizes based upon the most common models for handling dependent effect sizes. In a Monte Carlo simulation, we show that the new power formulas can accurately approximate the true power of common meta-analytic models for dependent effect sizes. Lastly, we investigate the Type I error rate and power for several common models, finding that tests using robust variance estimation provide better Type I error calibration than tests with model-based variance estimation. We consider implications for practice with respect to selecting a working model and an inferential approach.
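As a rough illustration of this kind of calculation (not the formulas from the paper itself), the sketch below approximates power for a two-sided test of the average effect under a simplified working model with a balanced design: J studies, k effect sizes each, common sampling variance v, within-study correlation rho, and known variance components tau2 and omega2. The variance formula, the crude J - 1 degrees of freedom, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def approx_power(mu, J, k, v, rho, tau2, omega2, alpha=0.05):
    """Approximate power for a two-sided test of the average effect size.

    Assumes a balanced design: J studies, each contributing k effect sizes
    with common sampling variance v and within-study correlation rho;
    tau2 and omega2 are between- and within-study variance components.
    Illustrative sketch only; a real analysis would estimate these quantities.
    """
    # Variance of one study's mean effect under the assumed working model
    var_study_mean = tau2 + omega2 / k + v * (1 + (k - 1) * rho) / k
    # Standard error of the simple (study-level) average across J studies
    se_avg = np.sqrt(var_study_mean / J)
    df = J - 1                       # crude df; Satterthwaite-type df in practice
    ncp = mu / se_avg                # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# Example: 40 studies, 3 effects each, true average SMD of 0.2 (invented values)
print(round(approx_power(mu=0.2, J=40, k=3, v=0.04, rho=0.7, tau2=0.05, omega2=0.02), 3))
```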

2019
Author(s): Melissa Angelina Rodgers, James E Pustejovsky

Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, namely the Trim & Fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multi-level meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. Apart from the 3PSM, all methods have limited power to detect selection bias unless selection for statistically significant effects is strong.
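As a minimal sketch of the idea behind an RVE variant of Egger's regression (using a generic cluster-robust covariance rather than the specific small-sample corrections evaluated in the paper), the code below regresses simulated effect estimates on their standard errors with inverse-variance weights and clusters the standard errors by study. The simulated data set and all parameter values are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated meta-analytic data: 30 studies, 1-4 dependent effects each (illustrative)
study = np.repeat(np.arange(30), rng.integers(1, 5, size=30))
se = rng.uniform(0.1, 0.4, size=study.size)        # effect-size standard errors
u_study = rng.normal(0, 0.1, size=30)[study]       # shared study-level heterogeneity
yi = 0.2 + u_study + rng.normal(0, se)             # observed effect estimates

# Egger-type regression: effect estimate on its standard error, weights 1/variance
X = sm.add_constant(se)
fit = sm.WLS(yi, X, weights=1 / se**2).fit()

# Cluster-robust (sandwich) variance, clustering on study, to handle dependence
robust = fit.get_robustcov_results(cov_type="cluster", groups=study)
print(robust.summary())   # the coefficient on se is the small-study-effects test
```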


2020
Author(s): James E Pustejovsky, Elizabeth Tipton

In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-regression model, even when the nature of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models are limited to each describing a single type of dependence. Drawing on flexible tools from multivariate meta-analysis, this paper describes an expanded range of working models, along with accompanying estimation methods, which offer benefits in terms of better capturing the types of data structures that occur in practice and improving the efficiency of meta-regression estimates. We describe how the methods can be implemented using existing software (the ‘metafor’ and ‘clubSandwich’ packages for R) and illustrate the approach in a meta-analysis of randomized trials examining the effects of brief alcohol interventions for adolescents and young adults.
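At its core, RVE rests on a cluster-robust ("sandwich") variance estimator for the meta-regression coefficients. The sketch below shows the basic CR0 form of that estimator for a given working weight vector; it is a hand-rolled illustration that omits the small-sample (CR2) corrections and the working-model choices discussed in the paper, which in practice are handled by the 'metafor' and 'clubSandwich' packages.

```python
import numpy as np

def rve_cr0(y, X, W, cluster):
    """Basic cluster-robust (CR0) variance for a weighted meta-regression.

    y: effect estimates; X: design matrix; W: diagonal working weights;
    cluster: study membership of each effect. Illustrative sketch only;
    no small-sample (CR2) correction is applied.
    """
    Xw = X * W[:, None]
    bread = np.linalg.inv(X.T @ Xw)          # (X'WX)^{-1}
    beta = bread @ (Xw.T @ y)                # weighted meta-regression coefficients
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for j in np.unique(cluster):
        idx = cluster == j
        s_j = Xw[idx].T @ resid[idx]         # cluster-level score contribution
        meat += np.outer(s_j, s_j)
    vcov = bread @ meat @ bread              # sandwich: bread * meat * bread
    return beta, vcov
```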


2012
Vol 45 (2), pp. 576-594
Author(s): Wim Van den Noortgate, José Antonio López-López, Fulgencio Marín-Martínez, Julio Sánchez-Meca

1999
Vol 2, pp. 32-38
Author(s): Fulgencio Marín-Martínez, Julio Sánchez-Meca

When a primary study includes several indicators of the same construct, the usual strategy for integrating the multiple effect sizes meta-analytically is to average them within the study. In this paper, the numerical and conceptual differences among three procedures for averaging dependent effect sizes are shown. The procedures are the simple arithmetic mean, the Hedges and Olkin (1985) procedure, and the Rosenthal and Rubin (1986) procedure. Whereas the simple arithmetic mean ignores the dependence among effect sizes, the Hedges and Olkin and the Rosenthal and Rubin procedures both take the correlational structure of the effect sizes into account, although in different ways. Rosenthal and Rubin's procedure provides the effect size for a single composite variable made up of the multiple measured variables, whereas Hedges and Olkin's procedure presents an effect size estimate for the standard variable. The three procedures were applied to 54 conditions in which the magnitude and homogeneity of both the effect sizes and the correlation matrix among them were manipulated. Rosenthal and Rubin's procedure yielded the highest estimates, followed by the simple arithmetic mean, with the Hedges and Olkin procedure yielding the lowest estimates. These differences are not trivial in a meta-analysis, so the aims of the analysis should guide the choice among the procedures.
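A minimal numerical sketch of the contrast, assuming a common correlation among a study's effect-size estimates and treating the Hedges and Olkin procedure as an inverse-covariance (GLS) weighted combination; the formulas are simplified stand-ins for the cited procedures, and all numbers are invented:

```python
import numpy as np

def average_dependent_effects(d, v, r):
    """Illustrative summaries of m dependent effect sizes from one study.

    d: the m effect sizes; v: their sampling variances; r: assumed common
    correlation among the estimates. A simplified sketch, not the exact
    formulas from the cited procedures.
    """
    d = np.asarray(d, dtype=float)
    v = np.asarray(v, dtype=float)
    m = d.size
    # Assumed sampling covariance matrix of the m estimates
    S = r * np.sqrt(np.outer(v, v))
    np.fill_diagonal(S, v)

    # (1) Simple arithmetic mean, ignoring dependence when computing its variance
    simple_mean = d.mean()
    naive_var = v.mean() / m

    # (2) Hedges & Olkin-style combination: inverse-covariance (GLS) weighting
    Sinv_1 = np.linalg.solve(S, np.ones(m))
    gls_var = 1.0 / Sinv_1.sum()
    gls_mean = gls_var * (Sinv_1 @ d)

    # (3) Rosenthal & Rubin-style composite: effect size for the sum of the indicators
    rr_composite = d.mean() / np.sqrt((1 + (m - 1) * r) / m)

    return {"simple": (simple_mean, naive_var),
            "gls": (gls_mean, gls_var),
            "composite": rr_composite}

print(average_dependent_effects([0.30, 0.45, 0.25], v=[0.04, 0.06, 0.05], r=0.6))
```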


2021
Vol 12
Author(s): Valeria Sebri, Ilaria Durosini, Stefano Triberti, Gabriella Pravettoni

The experience of breast cancer and related treatments has notable effects on women's mental health. Among these effects, the subjective perception of the body, or body image (BI), is altered. Such alterations deserve to be properly treated because they increase the risk of depression and mood disorders and impair intimate relationships. A number of studies revealed that focused psychological interventions are effective in reducing BI issues related to breast cancer. However, findings are inconsistent regarding the magnitude of such effects. This meta-analysis synthesizes and quantifies the efficacy of psychological interventions for BI in breast cancer patients and survivors. Additionally, since sexual functioning has emerged as a relevant aspect of BI distortions, we explored the efficacy of psychological interventions on sexual functioning related to BI in breast cancer patients and survivors. The literature search for relevant contributions was carried out in March 2020 through the following electronic databases: Scopus, PsycINFO, and ProQuest. Only articles available in English that featured psychological interventions for body image in breast cancer patients or survivors and included a control condition were eligible. Seven articles with 17 dependent effect sizes were selected for this meta-analysis. Variables were grouped into two categories: Body Image (six studies, nine dependent effect sizes) and Sexual Functioning Related to Body Image in breast cancer patients and survivors (four studies, eight dependent effect sizes). The three-level meta-analysis showed a statistically significant effect for Body Image [g = 0.50; 95% CI (0.08; 0.93); p < 0.05] but no significant results for Sexual Functioning Related to Body Image [g = 0.33; 95% CI (−0.20; 0.85); p = 0.19]. These results suggest that psychological interventions are effective in reducing body image issues but not in reducing sexual functioning issues related to body image in breast cancer patients and survivors. Future review efforts may include gray literature and qualitative studies to better understand body image and sexual functioning issues in breast cancer patients. Also, high-quality studies are needed to inform future meta-analyses.
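To make concrete what a three-level model pools in an analysis like this, the sketch below computes a pooled average effect and confidence interval from effect sizes nested within studies, taking the between-study (tau2) and within-study (omega2) variance components as known for illustration. In a real analysis these components are estimated (e.g., by REML), and all numbers here are invented.

```python
import numpy as np
from scipy import stats

def three_level_pool(yi, vi, study, tau2, omega2, level=0.95):
    """Pooled average effect under a simple three-level structure.

    yi, vi: effect estimates and sampling variances; study: study membership;
    tau2, omega2: between- and within-study variance components, taken as
    known here for illustration (in practice estimated, e.g., by REML).
    """
    yi, vi, study = map(np.asarray, (yi, vi, study))
    num, den = 0.0, 0.0
    for j in np.unique(study):
        idx = study == j
        # Marginal covariance of study j's effects: sampling + omega2 on the
        # diagonal, plus a shared tau2 from the common study-level random effect
        Sigma = np.diag(vi[idx] + omega2) + tau2
        Sinv_1 = np.linalg.solve(Sigma, np.ones(idx.sum()))
        num += Sinv_1 @ yi[idx]
        den += Sinv_1.sum()
    g = num / den
    se = np.sqrt(1.0 / den)
    z = stats.norm.ppf(0.5 + level / 2)
    return g, (g - z * se, g + z * se)

# Toy example with invented numbers
print(three_level_pool(yi=[0.4, 0.6, 0.3, 0.7], vi=[0.05, 0.05, 0.06, 0.04],
                       study=[1, 1, 2, 2], tau2=0.03, omega2=0.02))
```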

