A Simplification and Implementation of Random-effects Meta-analyses Based on the Exact Distribution of Cochran’s Q

2014 ◽  
Vol 53 (01) ◽  
pp. 54-61 ◽  
Author(s):  
M. Preuß ◽  
A. Ziegler

Summary Background: The random-effects (RE) model is the standard choice for meta-analysis in the presence of heterogeneity, and the standard RE method is the DerSimonian and Laird (DSL) approach, in which the degree of heterogeneity is estimated with a moment estimator. The DSL approach does not take into account the variability of the estimated heterogeneity variance in the estimation of Cochran's Q. Biggerstaff and Jackson derived the exact cumulative distribution function (CDF) of Q to account for the variability of τ̂². Objectives: The first objective is to show that the explicit numerical computation of the density function of Cochran's Q is not required. The second objective is to develop an R package that easily calculates both the classical RE method and the new exact RE method. Methods: The novel approach was validated in extensive simulation studies. The different approaches used in the simulation studies, including the exact weights RE meta-analysis and the I² and τ² estimates together with their confidence intervals, were implemented in the R package metaxa. Results: The comparison with the classical DSL method showed that the exact weights RE meta-analysis kept the nominal type I error level better and had greater power in the case of many small studies and a single large study. The Hedges RE approach had inflated type I error levels. Another advantage of the exact weights RE meta-analysis is that an exact confidence interval for τ² is readily available. The exact weights RE approach had greater power in the case of few studies, while the restricted maximum likelihood (REML) approach was superior in the case of a large number of studies. Application of the exact weights RE meta-analysis, REML, and the DSL approach to real data sets showed that the conclusions of these methods differed. Conclusions: The simplification requires only the calculation of the cumulative distribution function of Cochran's Q, not its density, whereas the previous approach required the computation of both the density and the cumulative distribution function. It thus reduces computation time, improves numerical stability, and reduces the approximation error in meta-analysis. The different approaches, including the exact weights RE meta-analysis and the I² and τ² estimates together with their confidence intervals, are available in the R package metaxa, which can be used in applications.
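The classical DSL building blocks discussed above are simple enough to sketch directly. The following is a minimal NumPy illustration (not the metaxa implementation): compute Cochran's Q with inverse-variance weights, derive the moment estimate τ̂² from it, and re-pool with random-effects weights.

```python
import numpy as np

def dsl_meta(y, v):
    """DerSimonian-Laird random-effects meta-analysis (minimal sketch).

    y : per-study effect estimates; v : within-study variances.
    Returns Cochran's Q, the moment estimate of tau^2, and the
    random-effects pooled estimate.
    """
    y, v = np.asarray(y), np.asarray(v)
    w = 1.0 / v                          # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * y) / np.sum(w)    # fixed-effect pooled estimate
    Q = np.sum(w * (y - mu_fe) ** 2)     # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)   # DSL moment estimator, truncated at 0
    w_re = 1.0 / (v + tau2)              # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return Q, tau2, mu_re
```

The exact-weights method replaces this moment step with weights derived from the exact CDF of Q; the sketch only shows the classical baseline it is compared against.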

2018 ◽  
Vol 20 (6) ◽  
pp. 2055-2065 ◽  
Author(s):  
Johannes Brägelmann ◽  
Justo Lorenzo Bermejo

Abstract Technological advances and reduced costs of high-density methylation arrays have led to an increasing number of association studies on the possible relationship between human disease and epigenetic variability. DNA samples from peripheral blood or other tissue types are analyzed in epigenome-wide association studies (EWAS) to detect methylation differences related to a particular phenotype. Since information on the cell-type composition of the sample is generally not available and methylation profiles are cell-type specific, statistical methods have been developed for adjustment of cell-type heterogeneity in EWAS. In this study we systematically compared five popular adjustment methods: the factored spectrally transformed linear mixed model (FaST-LMM-EWASher), the sparse principal component analysis algorithm ReFACTor, surrogate variable analysis (SVA), independent SVA (ISVA) and an optimized version of SVA (SmartSVA). We used real data and applied a multilayered simulation framework to assess the type I error rate, the statistical power and the quality of estimated methylation differences according to major study characteristics. While all five adjustment methods improved false-positive rates compared with unadjusted analyses, FaST-LMM-EWASher resulted in the lowest type I error rate at the expense of low statistical power. SVA efficiently corrected for cell-type heterogeneity in EWAS up to 200 cases and 200 controls, but did not control type I error rates in larger studies. Results based on real data sets confirmed simulation findings with the strongest control of type I error rates by FaST-LMM-EWASher and SmartSVA. Overall, ReFACTor, ISVA and SmartSVA showed the best comparable statistical power, quality of estimated methylation differences and runtime.
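The residual-based idea shared by the SVA family of methods can be caricatured in a few lines. This is a toy sketch, not any of the cited implementations: regress the phenotype out of the methylation matrix probe-wise and use the leading singular vectors of the residual matrix as surrogate covariates for unmeasured cell-type composition.

```python
import numpy as np

def surrogate_variables(M, pheno, n_sv=2):
    """Toy residual-PCA sketch of SVA-style adjustment (illustration only).

    M : (samples x probes) methylation matrix; pheno : phenotype vector.
    Returns n_sv surrogate variables from an SVD of the residual matrix.
    """
    X = np.column_stack([np.ones(len(pheno)), pheno])  # design: intercept + phenotype
    beta, *_ = np.linalg.lstsq(X, M, rcond=None)       # probe-wise least squares
    R = M - X @ beta                                   # residuals free of the modeled phenotype
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :n_sv]                                 # leading left singular vectors
```

The returned columns would then be added as covariates in each probe-wise association model; the actual methods differ chiefly in how they choose and refine these components.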


2021 ◽  
Vol 10 (12) ◽  
pp. 3679-3697
Author(s):  
N. Almi ◽  
A. Sayah

In this paper, two kernel cumulative distribution function estimators are introduced and investigated in order to reduce boundary effects; we restrict our attention to the right boundary. The first estimator uses a self-elimination between the modified theoretical bias term and the classical kernel estimator itself. The second estimator is constructed by a kind of generalized reflection method involving reflection of a transformation of the observed data. The theoretical properties of our estimators show that the bias is reduced to the second power of the bandwidth. Simulation studies and two real data applications were carried out to check these phenomena and confirm that the proposed estimators perform better than the existing boundary correction methods.
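For context, the plain reflection idea that the second estimator generalizes can be sketched as follows. This reflects the raw data at a right boundary b (not the authors' transformation-based estimator) and, for a symmetric kernel, forces the estimated CDF to equal exactly 1 at the boundary.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, data, h):
    """Classical kernel CDF estimator with a Gaussian integrated kernel."""
    return norm.cdf((x - np.asarray(data)) / h).mean()

def kernel_cdf_reflect(x, data, h, b):
    """Reflection-corrected kernel CDF at a right boundary b (for x <= b).

    Each observation X_i contributes its reflected twin 2b - X_i; since
    Phi(u) + Phi(-u) = 1, the estimate is exactly 1 at x = b.
    """
    d = np.asarray(data)
    return (norm.cdf((x - d) / h) + norm.cdf((x - (2 * b - d)) / h)).mean()
```

The classical estimator underestimates the CDF near the boundary because kernel mass leaks past b; the reflected twin puts that mass back.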


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 682
Author(s):  
Josmar Mazucheli ◽  
Víctor Leiva ◽  
Bruna Alves ◽  
André F. B. Menezes

Quantile regression provides a framework for modeling the relationship between a response variable and covariates using the quantile function. This work proposes a regression model for continuous variables bounded to the unit interval, based on the unit Birnbaum–Saunders distribution, as an alternative to the existing quantile regression models. Parameterizing the unit Birnbaum–Saunders distribution in terms of its quantile function allows us to model the effect of covariates across the entire response distribution, rather than only at the mean. Our proposal, especially useful for modeling quantiles using covariates, in general outperforms the other competing models available in the literature. These findings are supported by Monte Carlo simulations and applications using two real data sets. An R package, including parameter estimation, model checking, as well as density, cumulative distribution, quantile and random number generating functions of the unit Birnbaum–Saunders distribution, was developed and can be readily used to assess the suitability of our proposal.
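Under the common construction of the unit Birnbaum–Saunders variable as X = exp(−T) with T ~ BS(α, β) (an assumption here, since the abstract does not spell out the parameterization), the quantile function follows directly from the closed-form BS quantile:

```python
import numpy as np
from scipy.stats import norm

def bs_quantile(p, alpha, beta):
    """Quantile of the Birnbaum-Saunders distribution BS(alpha, beta)."""
    z = norm.ppf(p)
    return (beta / 4.0) * (alpha * z + np.sqrt(alpha ** 2 * z ** 2 + 4.0)) ** 2

def unit_bs_quantile(p, alpha, beta):
    """Quantile of the unit Birnbaum-Saunders variable X = exp(-T), T ~ BS.

    Since x -> exp(-x) is decreasing, the p-quantile of X maps to the
    (1 - p)-quantile of T (assumed construction of the unit version).
    """
    return np.exp(-bs_quantile(1.0 - p, alpha, beta))
```

In a quantile-regression parameterization one would solve for the parameter that makes a chosen quantile equal to a linear-predictor-driven value; the sketch only shows the quantile function such a model builds on.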


2020 ◽  
Vol 28 (3) ◽  
pp. 227-236
Author(s):  
Logan Opperman ◽  
Wei Ning

Abstract In this paper, we propose a goodness-of-fit test for skew normality based on the energy statistic. Simulations indicate that the type I error of the proposed test can be controlled reasonably well at given nominal levels. Power comparisons with other existing methods under different settings show the advantage of the proposed test. The test is applied to two real data sets to illustrate the testing procedure.
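An energy-based goodness-of-fit idea can be sketched with a Monte Carlo calibration. This is an illustrative stand-in using SciPy's one-dimensional energy distance, not the authors' exact statistic: compare the sample with reference draws from the hypothesized skew-normal law, and calibrate by simulating under the null.

```python
import numpy as np
from scipy.stats import energy_distance, skewnorm

def energy_gof_pvalue(x, a, n_sim=200, seed=0):
    """Monte Carlo goodness-of-fit p-value for skewnorm(a) via energy distance.

    Illustrative sketch: the observed energy distance to a large reference
    sample is compared with its null distribution from simulated samples.
    """
    rng = np.random.default_rng(seed)
    ref = skewnorm.rvs(a, size=2000, random_state=rng)
    obs = energy_distance(x, ref)
    null = [energy_distance(skewnorm.rvs(a, size=len(x), random_state=rng), ref)
            for _ in range(n_sim)]
    return (1 + sum(d >= obs for d in null)) / (n_sim + 1)
```

A real test would also estimate the skewness parameter and adjust the null simulation accordingly; the sketch treats it as known.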


2021 ◽  
Vol 23 (3) ◽  
Author(s):  
Estelle Chasseloup ◽  
Adrien Tessier ◽  
Mats O. Karlsson

Abstract Longitudinal pharmacometric models offer many advantages in the analysis of clinical trial data, but potentially inflated type I error and biased drug effect estimates, as a consequence of model misspecifications and multiple testing, are main drawbacks. In this work, we used real data to compare these aspects for a standard approach (STD) and a new one using mixture models, called individual model averaging (IMA). Placebo arm data sets were obtained from three clinical studies assessing ADAS-Cog scores, Likert pain scores, and seizure frequency. By randomly (1:1) assigning patients in the above data sets to "treatment" or "placebo," we created data sets where any significant drug effect was known to be a false positive. Repeating the process of random assignment and analysis for significant drug effect many times (N = 1000) for each of the 40 to 66 placebo-drug model combinations, statistics of the type I error and drug effect bias were obtained. Across all models and the three data types, the type I error (%) was (5th, 25th, 50th, 75th, 95th percentiles) 4.1, 11.4, 40.6, 100.0, 100.0 for STD, and 1.6, 3.5, 4.3, 5.0, 6.0 for IMA. IMA showed no bias in the drug effect estimates, whereas in STD bias was frequently present. In conclusion, STD is associated with inflated type I error and a risk of biased drug effect estimates. IMA demonstrated controlled type I error and no bias.
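The relabeling scheme used to generate known false positives is easy to reproduce in miniature, with an ordinary t-test standing in for the pharmacometric analysis (a sketch of the design, not of the STD or IMA models themselves):

```python
import numpy as np
from scipy.stats import ttest_ind

def type1_error_by_relabeling(placebo_scores, n_rep=500, alpha=0.05, seed=0):
    """Estimate a test's type I error by random 1:1 relabeling of placebo data.

    Patients from a placebo arm are randomly split into sham 'treatment'
    and 'placebo' groups, so any significant difference is a false positive
    by construction; the fraction of significant repetitions estimates the
    type I error.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(placebo_scores)
    hits = 0
    for _ in range(n_rep):
        perm = rng.permutation(len(x))
        a, b = x[perm[: len(x) // 2]], x[perm[len(x) // 2:]]
        hits += ttest_ind(a, b).pvalue < alpha
    return hits / n_rep
```

For a well-calibrated analysis the returned fraction should sit near the nominal alpha; the study's point is that the STD pharmacometric workflow can land far above it.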


2020 ◽  
Vol 7 (1) ◽  
pp. 4
Author(s):  
Nikolay V. Kyurkchiev

The cumulative distribution function (cdf) corresponding to the "four-parameter extended type I half-logistic modified Weibull (TIHLMW) distribution" is ...


2019 ◽  
Vol 227 (1) ◽  
pp. 83-89 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. The funnel plot is widely used in meta-analyses to assess potential publication bias. However, experimental evidence suggests that informal, merely visual inspection of funnel plots is frequently prone to incorrect conclusions, while formal statistical tests (Egger regression and others) focus entirely on funnel plot asymmetry. We suggest routinely using the visual inference framework with funnel plots, including for didactic purposes. In this framework, the type I error is controlled by design, while the explorative, holistic, and open nature of visual graph inspection is preserved. Specifically, the funnel plot of the actually observed data is presented simultaneously, in a lineup, with null funnel plots showing data simulated under the null hypothesis. Only when the real-data funnel plot is identifiable among all the funnel plots presented might funnel plot-based conclusions be warranted. Software to implement visual funnel plot inference is provided via a tailored R function.
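Constructing the data for such a lineup is straightforward. In this sketch (plotting left out, and not the cited R function) the null panels simulate a common effect equal to the fixed-effect pooled estimate, and the observed data hide in one randomly chosen panel:

```python
import numpy as np

def funnel_lineup(effects, ses, n_panels=20, seed=0):
    """Build data for a funnel-plot lineup (illustrative sketch).

    The observed (effect, SE) pairs occupy one randomly chosen panel; the
    remaining panels hold effects simulated under the null of a common
    effect, each with the observed standard errors.
    """
    effects, ses = np.asarray(effects), np.asarray(ses)
    rng = np.random.default_rng(seed)
    w = 1.0 / ses ** 2
    mu = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    true_pos = rng.integers(n_panels)      # where the real data hide
    panels = [rng.normal(mu, ses) for _ in range(n_panels)]
    panels[true_pos] = effects
    return panels, true_pos
```

Each panel would be drawn as effect versus standard error; a viewer who cannot single out panel `true_pos` has no visual grounds for a publication-bias claim.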


2019 ◽  
Vol 189 (3) ◽  
pp. 235-242 ◽  
Author(s):  
Chia-Chun Wang ◽  
Wen-Chung Lee

Abstract Random-effects meta-analysis is one of the mainstream methods for research synthesis. The heterogeneity in meta-analyses is usually assumed to follow a normal distribution. This is actually a strong assumption, but one that often receives little attention and is used without justification. Although methods for assessing the normality assumption are readily available, they cannot be used directly because the included studies have different within-study standard errors. Here we present a standardization framework for evaluation of the normality assumption and examine its performance in random-effects meta-analyses with simulation studies and real examples. We use both a formal statistical test and a quantile-quantile plot for visualization. Simulation studies show that our normality test has well-controlled type I error rates and reasonable power. We also illustrate the real-world significance of examining the normality assumption with examples. Investigating the normality assumption can provide valuable information for further analysis or clinical application. We recommend routine examination of the normality assumption with the proposed framework in future meta-analyses.
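A minimal version of such a standardization (a sketch, not necessarily the authors' exact framework) centers each study effect at the RE pooled estimate and scales by its total standard error, after which a formal normality test or Q-Q plot can be applied:

```python
import numpy as np
from scipy.stats import shapiro

def standardized_residuals(y, v, tau2):
    """Standardize study effects for a normality check of the random effects.

    Each effect is centered at the random-effects pooled estimate and
    scaled by sqrt(v_i + tau^2), its total standard error, so that the
    studies' different within-study variances become comparable.
    """
    y, v = np.asarray(y), np.asarray(v)
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return (y - mu) / np.sqrt(v + tau2)

# A formal check could then be, e.g., shapiro(standardized_residuals(y, v, tau2))
```

The Shapiro-Wilk test here is a generic stand-in; any normality test or a quantile-quantile plot of the residuals serves the same purpose.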


Symmetry ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 899 ◽  
Author(s):  
Yolanda M. Gómez ◽  
Emilio Gómez-Déniz ◽  
Osvaldo Venegas ◽  
Diego I. Gallardo ◽  
Héctor W. Gómez

In this article, we study an extension of the sinh Cauchy model in order to obtain asymmetric bimodality. The behavior of the distribution may be either unimodal or bimodal. We calculate its cumulative distribution function and use it to carry out quantile regression. We calculate the maximum likelihood estimators and carry out a simulation study. Two applications are analyzed based on real data to illustrate the flexibility of the distribution for modeling unimodal and bimodal data.


2020 ◽  
Author(s):  
Han Du ◽  
Ge Jiang ◽  
Zijun Ke

Meta-analysis combines pertinent information from existing studies to provide an overall estimate of population parameters/effect sizes, as well as to quantify and explain the differences between studies. However, testing the between-study heterogeneity is one of the most troublesome topics in meta-analysis research. Additionally, no methods have been proposed to test whether the size of the heterogeneity is larger than a specific level. The existing methods, such as the Q test and likelihood ratio (LR) tests, are criticized for their failure to control the Type I error rate and/or failure to attain enough statistical power. Although better reference distribution approximations have been proposed in the literature, the expressions are complicated and the applications are limited. In this article, we propose bootstrap-based heterogeneity tests combining the restricted maximum likelihood (REML) ratio test or the Q test with bootstrap procedures, denoted B-REML-LRT and B-Q, respectively. Simulation studies were conducted to examine and compare the performance of the proposed methods with the regular LR tests, the regular Q test, and the improved Q test in both random-effects and mixed-effects meta-analysis. Based on the results for Type I error rates and statistical power, B-Q is recommended. An R package boot.heterogeneity is provided to facilitate the implementation of the proposed method.
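A B-Q-style parametric bootstrap of the Q test can be sketched as follows (an illustration for the null of homogeneity, τ² = 0, not the boot.heterogeneity implementation): estimate the common mean, resample effects under the null with the observed within-study variances, and recompute Q.

```python
import numpy as np

def cochran_q(y, v):
    """Cochran's Q with inverse-variance weights."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def bootstrap_q_pvalue(y, v, n_boot=500, seed=0):
    """Parametric-bootstrap p-value for H0: tau^2 = 0 (B-Q-style sketch).

    Under homogeneity the effects share one mean, so bootstrap samples
    are drawn as y*_i ~ N(mu_hat, v_i) and Q is recomputed each time.
    """
    y, v = np.asarray(y), np.asarray(v)
    rng = np.random.default_rng(seed)
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    q_obs = cochran_q(y, v)
    q_boot = np.array([cochran_q(rng.normal(mu, np.sqrt(v)), v)
                       for _ in range(n_boot)])
    return (1 + np.sum(q_boot >= q_obs)) / (n_boot + 1)
```

Testing whether heterogeneity exceeds a specific level, as the article also considers, would instead simulate under τ² = τ₀² > 0; the sketch covers only the simplest null.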

