Meta-analysis with zero-event studies: a comparative study with application to COVID-19 data

2021 · Vol 8 (1)
Author(s): Jia-Jin Wei, En-Xuan Lin, Jian-Dong Shi, Ke Yang, Zong-Liang Hu, ...

Abstract. Background: Meta-analysis is a statistical method for synthesizing evidence from a number of independent studies, including clinical studies with binary outcomes. In practice, zero events in one or both groups may cause statistical problems in the subsequent analysis. Methods: In this paper, taking the relative risk as the effect size, we conduct a comparative study of four continuity correction methods and a state-of-the-art method that requires no continuity correction, namely the generalized linear mixed model (GLMM). To further advance the literature, we also introduce a new continuity correction method for estimating the relative risk. Results: In the simulation studies, the new method performs well in terms of mean squared error when there are few studies, whereas the GLMM performs best when the number of studies is large. In addition, reanalysis of recent coronavirus disease 2019 (COVID-19) data shows that double-zero-event studies affect the estimate of the mean effect size. Conclusions: We recommend the new method for handling zero-event studies when a meta-analysis contains few studies, and the GLMM when the number of studies is large. Double-zero-event studies may be informative, and so we suggest not excluding them.
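The simplest of the corrections compared here is the classic constant (0.5) continuity correction. As a minimal illustration (the 0.5 correction only, not the paper's new method), a zero cell can be handled before computing the log relative risk and its variance:

```python
import math

def log_rr_cc(a, n1, c, n2, cc=0.5):
    """Log relative risk and its variance for a 2x2 table with
    a/n1 events in the treatment arm and c/n2 in the control arm,
    adding a constant continuity correction `cc` to every cell
    whenever a zero cell would make the estimate undefined."""
    if a == 0 or c == 0 or a == n1 or c == n2:
        a, c = a + cc, c + cc
        n1, n2 = n1 + 2 * cc, n2 + 2 * cc
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2
    return log_rr, var

# A single-zero study: 0/20 events vs 3/20 events.
lr, v = log_rr_cc(0, 20, 3, 20)
```

After correction the table becomes 0.5/21 vs 3.5/21, so the estimated log relative risk is log(1/7); the four correction methods in the paper differ mainly in how the added constant is chosen.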


2020
Author(s): Mengli Xiao, Lifeng Lin, James S. Hodges, Chang Xu, Haitao Chu

Objectives: High-quality meta-analyses on COVID-19 are in urgent demand for evidence-based decision making. However, conventional approaches exclude double-zero-event studies (DZS) from meta-analyses. We assessed whether including such studies changes the conclusions of a recent urgent systematic review on measures for preventing person-to-person transmission of COVID-19. Study design and setting: We extracted data for meta-analyses containing DZS from a recent review that assessed the effects of physical distancing, face masks, and eye protection on person-to-person transmission. A bivariate generalized linear mixed model was used to redo the meta-analyses with DZS included. We compared the synthesized relative risks (RRs) of the three prevention measures, their 95% confidence intervals (CIs), and significance tests (at the 0.05 level) with and without DZS. Results: The reanalyzed COVID-19 data containing DZS involved a total of 1,784 participants who were not considered in the original review. Including DZS noticeably changed the synthesized RRs and 95% CIs of several interventions. In the meta-analysis of physical distancing, the RR of COVID-19 decreased from 0.15 (95% CI, 0.03 to 0.73) to 0.07 (95% CI, 0.01 to 0.98). For several meta-analyses, the statistical significance of the synthesized RR changed: the RR of eye protection at a physical distance of 2 m and the RR of physical distancing when using N95 respirators were no longer statistically significant after including DZS. Conclusions: DZS may contain useful information. Sensitivity analyses that include DZS in meta-analyses are recommended.
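The bivariate GLMM used in this reanalysis requires specialized software, but the qualitative point — that dropping double-zero studies shifts the pooled relative risk — can be illustrated with simple inverse-variance pooling and a 0.5 continuity correction (a stand-in for the authors' model, not the model itself; the data below are invented):

```python
import math

def pooled_rr(studies, include_dzs=True, cc=0.5):
    """Fixed-effect inverse-variance pooled relative risk.
    `studies` is a list of (events_trt, n_trt, events_ctl, n_ctl).
    Double-zero studies are either dropped or continuity-corrected;
    this is a simple stand-in for a bivariate GLMM."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        if a == 0 and c == 0 and not include_dzs:
            continue
        if a == 0 or c == 0 or a == n1 or c == n2:
            a, c, n1, n2 = a + cc, c + cc, n1 + 2 * cc, n2 + 2 * cc
        lr = math.log((a / n1) / (c / n2))
        w = 1.0 / (1 / a - 1 / n1 + 1 / c - 1 / n2)
        num += w * lr
        den += w
    return math.exp(num / den)

# Hypothetical trials, one of which is double-zero (0/50 vs 0/50).
data = [(1, 100, 6, 100), (0, 50, 0, 50), (2, 80, 9, 80)]
rr_with = pooled_rr(data, include_dzs=True)
rr_without = pooled_rr(data, include_dzs=False)
```

In this toy example the double-zero study contributes a log RR of zero with positive weight, so including it pulls the pooled RR toward 1 — the same direction of change the review's reanalysis reports for physical distancing is possible when real data and a proper GLMM are used.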


2021
Author(s): Chang Xu, Lifeng Lin

Abstract. Objective: The common approach to meta-analysis with double-zero studies is to remove such studies. Our previous work confirmed that excluding these studies may impact the results. In this study, we undertook extensive simulations to investigate how the results of meta-analyses are impacted by the proportion of such studies. Methods: Two standard generalized linear mixed models (GLMMs) were employed for the meta-analysis. The statistical properties of the two GLMMs were first examined in terms of percentage bias, mean squared error, and coverage. We then repeated all the meta-analyses after excluding double-zero studies. The direction of the estimated effects and the p-values for including versus excluding double-zero studies were compared across nine ascending groups classified by the proportion of double-zero studies within a meta-analysis. Results: Based on 50,000 simulated meta-analyses, the two GLMMs achieved nearly unbiased estimation and reasonable coverage in most situations. When double-zero studies were excluded, 0.00% to 4.47% of the meta-analyses changed the direction of the effect size, and 0.61% to 8.78% changed the significance of the p-value. As the proportion of double-zero studies in a meta-analysis increased, the probability that the effect size changed direction increased; the impact on p-values was largest when the proportion was about 40% to 60%. Conclusion: Double-zero studies can impact the results of a meta-analysis, and excluding them may be problematic. The impact varies with the proportion of such studies within a meta-analysis.
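A rough version of this simulation design can be sketched as follows, substituting continuity-corrected inverse-variance pooling for the paper's GLMMs (all parameters here are illustrative, not those of the study):

```python
import math
import random

def pooled_log_rr(studies, include_dzs, cc=0.5):
    """Inverse-variance pooled log relative risk with a 0.5
    continuity correction for zero cells (a simple stand-in for
    the GLMMs used in the paper)."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        if a == 0 and c == 0 and not include_dzs:
            continue
        if a == 0 or c == 0 or a == n1 or c == n2:
            a, c, n1, n2 = a + cc, c + cc, n1 + 2 * cc, n2 + 2 * cc
        w = 1.0 / (1 / a - 1 / n1 + 1 / c - 1 / n2)
        num += w * math.log((a / n1) / (c / n2))
        den += w
    return num / den if den else float("nan")

rng = random.Random(2024)

def simulate_meta(k, p_ctl=0.02, rr=0.5, n=40):
    """One meta-analysis of k small two-arm studies with rare events,
    so double-zero studies arise frequently."""
    return [(sum(rng.random() < rr * p_ctl for _ in range(n)), n,
             sum(rng.random() < p_ctl for _ in range(n)), n)
            for _ in range(k)]

# Count how often dropping the double-zero studies flips the
# direction of the pooled effect across simulated meta-analyses.
flips = 0
for _ in range(500):
    meta = simulate_meta(k=10)
    incl = pooled_log_rr(meta, include_dzs=True)
    excl = pooled_log_rr(meta, include_dzs=False)
    if incl * excl < 0:
        flips += 1
```

Grouping the simulated meta-analyses by their proportion of double-zero studies, as the paper does, would then show how the flip rate varies with that proportion.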


2008 · Vol 71 (7) · pp. 1330-1337
Author(s): U. GONZALES BARRON, D. BERGIN, F. BUTLER

In the field of food safety, meta-analysis can be used to combine the results of prevalence studies of pathogens at critical stages of the food processing chain, giving policy makers reliable and concise information on the effectiveness of interventions for controlling and preventing foodborne illness in humans. The objective of this work was to demonstrate the applicability of a parametric meta-analysis approach to the specific case of determining the overall effect of chilling on Salmonella prevalence on pig carcasses. A meta-analysis was performed for each of two effect-size parameters for binary outcomes: relative risk and risk difference. Both meta-analyses confirmed that the chilling operation has a significant beneficial effect (P < 0.001) on reducing Salmonella prevalence on pig carcasses. Because risk difference is sensitive to differences across studies in carcass swab areas and Salmonella detection methods, its meta-analysis strongly reflected this heterogeneity (P < 0.001). The relative risk parameterization, not being biased by these sources of variability, did not give rise to heterogeneity among studies and produced a fixed-effects meta-analysis solution, which is deemed more suitable for compilations based on a small number of individual studies (n = 9). Because of the systematic approach of meta-analysis (individual studies are weighted according to their precision) and its reliance on actual data, the output distribution of the relative risk effect size (approximately e^N(−0.868, 0.166), i.e., lognormal on the log scale) merits consideration for inclusion in the chilling stage of quantitative risk assessments modeling the prevalence of this pathogen along the pork production chain.
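The fixed-effects solution described above amounts to inverse-variance pooling of the log relative risks plus Cochran's Q statistic for heterogeneity. A sketch with hypothetical inputs (the nine studies' actual data are not reproduced here):

```python
import math

def fixed_effect(yi, vi):
    """Inverse-variance fixed-effect pooling of log relative risks,
    with Cochran's Q statistic for heterogeneity (compare Q to a
    chi-square distribution with k - 1 degrees of freedom)."""
    w = [1.0 / v for v in vi]
    mu = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    q = sum(wi * (y - mu) ** 2 for wi, y in zip(w, yi))
    return mu, se, q

# Hypothetical log relative risks and variances for nine
# chilling-stage prevalence studies (illustrative values only).
yi = [-0.9, -0.7, -1.0, -0.8, -0.85, -0.95, -0.75, -1.05, -0.8]
vi = [0.04, 0.05, 0.06, 0.04, 0.05, 0.07, 0.06, 0.05, 0.04]
mu, se, q = fixed_effect(yi, vi)
rr = math.exp(mu)  # pooled relative risk of Salmonella after chilling
```

A pooled log RR near −0.87 with small Q, as in the abstract, would indicate a beneficial chilling effect with little between-study heterogeneity on the relative risk scale.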


BMJ Open · 2016 · Vol 6 (8) · pp. e010983
Author(s): Ji Cheng, Eleanor Pullenayegum, John K Marshall, Alfonso Iorio, Lehana Thabane

2021
Author(s): Robbie Cornelis Maria van Aert, Jelte M. Wicherts

Outcome reporting bias (ORB) refers to the bias caused by researchers selectively reporting outcomes based on their statistical significance. ORB leads to inflated average effect size estimates in a meta-analysis if only the outcome with the largest effect size is reported. We propose a new method (CORB) to correct for ORB that includes an estimate of the variability of the outcomes' effect sizes as a moderator in a meta-regression model. Such an estimate can be computed by assuming a correlation among the outcomes. Results of a Monte Carlo simulation study showed that the effect size in meta-analyses may be severely overestimated without any correction for ORB. The CORB method estimates the effect size accurately precisely when the overestimation caused by ORB is largest. Applying the new method to a meta-analysis on the effect of playing violent video games on aggressive cognition showed that the average effect size estimate decreased when correcting for ORB. We recommend routinely applying methods to correct for ORB in any meta-analysis, and we provide annotated R code and functions to help researchers apply the CORB method.
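The core of CORB is a meta-regression with the estimated variability of the outcomes' effect sizes as a moderator; the intercept (zero variability, hence no room for selective reporting) serves as the corrected estimate. A bare-bones weighted least-squares sketch with hypothetical data (the authors' own R code implements the full method):

```python
def corb_sketch(yi, vi, var_out):
    """Weighted least-squares meta-regression of effect size (yi,
    with sampling variance vi) on an estimate of the variability of
    the outcomes' effect sizes (var_out). The intercept is taken as
    the ORB-corrected average effect. A sketch of the idea only;
    the data below are hypothetical."""
    w = [1.0 / v for v in vi]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, var_out))
    swy = sum(wi * y for wi, y in zip(w, yi))
    swxx = sum(wi * x * x for wi, x in zip(w, var_out))
    swxy = sum(wi * x * y for wi, x, y in zip(w, var_out, yi))
    slope = (swxy - swx * swy / sw) / (swxx - swx * swx / sw)
    intercept = (swy - slope * swx) / sw
    return intercept, slope

# Four hypothetical studies: effect sizes rise with the estimated
# outcome variability, the pattern ORB would produce.
yi = [0.40, 0.35, 0.22, 0.15]
vi = [0.02, 0.03, 0.02, 0.04]
var_out = [0.10, 0.08, 0.03, 0.01]
corrected, slope = corb_sketch(yi, vi, var_out)
```

With this pattern the slope is positive and the corrected (intercept) estimate falls below every observed effect size, mirroring the downward correction reported for the violent-video-games meta-analysis.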


2021
Author(s): Francesca Giorgi

In the last ten years, scientific research has experienced an unprecedented credibility crisis: when researchers conducted replication studies, they often could not reproduce the original results. Effect sizes were frequently not as strong as in the original studies, and sometimes no effect was found at all. An important side effect of the replicability crisis, however, is that it increased awareness of problematic issues in the published literature and promoted the development of new practices to guarantee rigour, transparency, and reproducibility. The aim of the current work is to propose a new method to explore the inferential risks associated with each study in a meta-analysis. Specifically, this method is based on Design Analysis, a power-analysis approach developed by Gelman and Carlin (2014), which makes it possible to analyse two other types of error that are not commonly considered: the Type M (magnitude) error and the Type S (sign) error, concerning the magnitude and direction of the effects. We chose the Design Analysis approach because it puts more emphasis on the estimate of the effect size and can be a valid tool for researchers to make more conscious and informed decisions.
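Design Analysis boils down to computing, for an assumed true effect and standard error, the power, the Type S error rate, and the Type M exaggeration ratio. A sketch following Gelman and Carlin's retrodesign idea (the numbers at the end are an invented underpowered example):

```python
import random
from statistics import NormalDist

def retrodesign(true_effect, se, alpha=0.05, n_sims=20000, seed=7):
    """Design analysis in the spirit of Gelman and Carlin (2014):
    power, Type S rate (probability a significant result has the
    wrong sign), and Type M (average exaggeration ratio among
    significant results), for a given true effect and standard error."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    lam = true_effect / se
    power = 1 - nd.cdf(z - lam) + nd.cdf(-z - lam)
    type_s = nd.cdf(-z - lam) / power
    # Type M estimated by simulating estimates and keeping only
    # the statistically significant ones.
    rng = random.Random(seed)
    ratios = [abs(est) / abs(true_effect)
              for est in (rng.gauss(true_effect, se) for _ in range(n_sims))
              if abs(est / se) > z]
    type_m = sum(ratios) / len(ratios)
    return power, type_s, type_m

# An underpowered study: true effect 0.1, standard error 0.3.
power, type_s, type_m = retrodesign(0.1, 0.3)
```

For this example, power is only about 6%, roughly one in six significant results has the wrong sign, and any significant estimate overstates the true effect several-fold — exactly the inferential risks the proposed method is meant to surface for each study in a meta-analysis.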


2018 · Vol 49 (5) · pp. 303-309
Author(s): Jedidiah Siev, Shelby E. Zuckerman, Joseph J. Siev

Abstract. In a widely publicized set of studies, participants who were primed to consider unethical events preferred cleansing products more than did those primed with ethical events (Zhong & Liljenquist, 2006). This tendency to respond to moral threat with physical cleansing is known as the Macbeth Effect. Several subsequent efforts, however, did not replicate this relationship. The present manuscript reports the results of a meta-analysis of 15 studies testing this relationship. The weighted mean effect size was small across all studies (g = 0.17, 95% CI [0.04, 0.31]) and nonsignificant across studies conducted in independent laboratories (g = 0.07, 95% CI [−0.04, 0.19]). We conclude that there is little evidence for an overall Macbeth Effect; however, there may be a Macbeth Effect under certain conditions.
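A weighted mean effect size of the kind reported above is typically obtained with a random-effects model. A sketch using the DerSimonian-Laird estimator with hypothetical Hedges' g values (not the 15 studies' actual data):

```python
import math

def dersimonian_laird(yi, vi):
    """Random-effects pooled effect via the DerSimonian-Laird
    tau^2 estimator, for effect sizes yi with sampling variances vi."""
    w = [1.0 / v for v in vi]
    mu_fe = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, yi))
    df = len(yi) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in vi]
    mu = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return mu, se, tau2

# Hypothetical Hedges' g values for five replication attempts.
yi = [0.60, 0.35, 0.02, -0.10, 0.20]
vi = [0.02, 0.02, 0.02, 0.02, 0.02]
g, se, tau2 = dersimonian_laird(yi, vi)
ci = (g - 1.96 * se, g + 1.96 * se)
```

When the individual g values straddle zero, as in these invented data, the random-effects confidence interval widens with the estimated tau^2 and can include zero, paralleling the nonsignificant independent-laboratory result reported above.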

