The equivalence between likelihood ratio test and F-test for testing variance component in a balanced one-way random effects model

2010 ◽  
Vol 80 (4) ◽  
pp. 443-450 ◽  
Author(s):  
Yan Lu ◽  
Guoyi Zhang
2021 ◽  
Author(s):  
Dapeng Hu ◽  
Chong Wang ◽  
Annette O'Connor

Abstract Background: Network meta-analysis (NMA) is a statistical method used to combine results from several clinical trials and simultaneously compare multiple treatments using direct and indirect evidence. Statistical heterogeneity describes the variability in the intervention effects evaluated across the different studies in a network meta-analysis. One approach to dealing with statistical heterogeneity is to perform a random effects network meta-analysis that incorporates a between-study variance into the statistical model. A common assumption in the random effects model for network meta-analysis is the homogeneity of between-study variance across all interventions. However, there are applications of NMA where the single between-study variance assumption is potentially incorrect and the model should instead incorporate more than one between-study variance. Methods: In this paper, we develop an approach to testing the homogeneity of the between-study variance assumption based on a likelihood ratio test. A simulation study was conducted to assess the type I error and power of the proposed test. This method is then applied to a network meta-analysis of antibiotic treatments for bovine respiratory disease (BRD). Results: The type I error rate was well controlled in the Monte Carlo simulation. The homogeneous between-study variance assumption is unrealistic, both statistically and practically, in the BRD network meta-analysis. The point estimates and confidence intervals of relative effect sizes are strongly influenced by this assumption. Conclusions: Since the homogeneous between-study variance assumption is a strong one, it is crucial to test its validity before conducting a network meta-analysis. Here we propose and validate a method for testing this single between-study variance assumption, which is widely used in many NMAs.
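The idea of the homogeneity test can be illustrated with a stripped-down sketch: a likelihood ratio test comparing one common between-study variance against treatment-specific variances. This is not the authors' implementation; it assumes, purely for illustration, that all within-study sampling variances are equal and known, so the marginal variance of each group of studies has a closed-form maximum likelihood estimate and the boundary constraint on tau^2 can be ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
s2 = 0.04  # assumed known within-study sampling variance, equal across studies

# Hypothetical observed effects for two treatment comparisons with
# different between-study heterogeneity (tau^2 = 0.01 vs tau^2 = 0.20)
y_a = rng.normal(0.5, np.sqrt(0.01 + s2), size=30)
y_b = rng.normal(0.2, np.sqrt(0.20 + s2), size=30)

def profile_loglik(y, v):
    """Normal log-likelihood with the mean profiled out at the sample mean."""
    return -0.5 * len(y) * (np.log(2 * np.pi * v) + np.mean((y - y.mean()) ** 2) / v)

# H1: separate marginal variances (hence separate tau^2, since s2 is common)
v_a = np.mean((y_a - y_a.mean()) ** 2)
v_b = np.mean((y_b - y_b.mean()) ** 2)
ll1 = profile_loglik(y_a, v_a) + profile_loglik(y_b, v_b)

# H0: one common tau^2, i.e. one common marginal variance
n = len(y_a) + len(y_b)
v0 = (len(y_a) * v_a + len(y_b) * v_b) / n
ll0 = profile_loglik(y_a, v0) + profile_loglik(y_b, v0)

lrt = 2 * (ll1 - ll0)  # approximately chi-square with 1 df under H0
print(round(lrt, 2))   # compare against 3.841, the 5% critical value of chi2_1
```

With genuinely different heterogeneity, as simulated here, the statistic is large and the homogeneity assumption is rejected; in real NMA data the within-study variances differ, and the maximization must be done numerically as in the paper.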


1999 ◽  
Vol 65 (2) ◽  
pp. 531-544 ◽  
Author(s):  
David B. Allison ◽  
Michael C. Neale ◽  
Raffaella Zannolli ◽  
Nicholas J. Schork ◽  
Christopher I. Amos ◽  
...  

2018 ◽  
Author(s):  
Jing Zhai ◽  
Kenneth Knox ◽  
Homer L. Twigg ◽  
Hua Zhou ◽  
Jin J. Zhou

Summary In metagenomics studies, testing the association between microbiome composition and clinical conditions translates to testing the nullity of variance components. Computationally efficient score tests have been the major tools, but they apply only to null hypotheses with a single variance component and only when sample sizes are large; they are therefore not applicable to longitudinal microbiome studies. In this paper, we propose exact tests (score test, likelihood ratio test, and restricted likelihood ratio test) to solve the problems of (1) testing the association of the overall microbiome composition in a longitudinal design and (2) detecting the association of one specific microbiome cluster while adjusting for the effects of related clusters. Our approach combines the exact tests for a null hypothesis with a single variance component with a strategy for reducing multiple variance components to a single one. Simulation studies demonstrate that our method has a correct type I error rate and superior power compared to existing methods at small sample sizes and weak signals. Finally, we apply our method to a longitudinal pulmonary microbiome study of human immunodeficiency virus (HIV) infected patients and reveal two interesting genera, Prevotella and Veillonella, associated with forced vital capacity. Our findings shed light on the impact of the lung microbiome on HIV complexities. The method is implemented in the open-source, high-performance computing language Julia and is freely available at https://github.com/JingZhai63/VCmicrobiome.
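A schematic version of a variance-component score test conveys the flavor of such methods: the statistic is a quadratic form in the null-model residuals weighted by a kernel built from the cluster features. The setup below is entirely hypothetical (simulated data, a linear kernel, and a permutation reference distribution standing in for the exact null distribution); it is not the authors' Julia implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 80, 2, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # fixed covariates
Z = rng.normal(size=(n, q))   # stand-in for features of one microbiome cluster
K = Z @ Z.T                   # linear kernel induced by the cluster

beta = np.array([1.0, 0.5])
u = rng.normal(scale=0.6, size=q)        # nonzero variance component (signal)
y = X @ beta + Z @ u + rng.normal(size=n)

# Null model: ordinary least squares with the variance component set to zero
H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix
r = y - H @ y                            # null residuals
sigma2 = r @ r / (n - p)

# Score-type statistic for H0: sigma_u^2 = 0
Q = (r @ K @ r) / sigma2

# Permutation reference distribution (residuals treated as exchangeable under H0)
B = 500
perm = np.empty(B)
for b in range(B):
    rp = rng.permutation(r)
    perm[b] = (rp @ K @ rp) / sigma2
pval = (1 + np.sum(perm >= Q)) / (B + 1)
print(round(pval, 3))
```

The exact tests in the paper replace the permutation step with the exact finite-sample null distribution of such quadratic forms, which is what makes them valid at small sample sizes.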


Author(s):  
Evgeniia S. Chetvertakova ◽  
Ekaterina V. Chimitova

This paper considers the Wiener degradation model with random effects. Random-effects models take into account the unit-to-unit variability of the degradation index. It is assumed that the random parameter has a truncated normal distribution. Expressions for the maximum likelihood estimates and the reliability function have been obtained. Two statistical tests are proposed to reveal the existence of random effects in degradation data corresponding to the Wiener degradation model. The first is the well-known likelihood ratio test, and the second is based on the variance estimate of the random parameter. These tests have been compared in terms of power using Monte Carlo simulation. The results show that the test based on the variance estimate of the random parameter is more powerful than the likelihood ratio test for the considered pairs of competing hypotheses. An example of analysis with the proposed tests is given for turbofan engine degradation data. The data set includes measurements recorded from 18 sensors for 100 engines. Before constructing the degradation model, a single degradation index was obtained using the principal component method. The hypothesis that the random effect is insignificant was rejected by both tests. It is shown that the random-effect Wiener degradation model describes the failure time distribution more accurately than the fixed-effect Wiener degradation model.
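The logic of a variance-estimate test for a random drift can be sketched on simulated data (a hypothetical example, not the authors' test or the turbofan data): for a Wiener process with unit-specific drift, the between-unit variance of the estimated drifts exceeds pure sampling noise when a random effect is present, which in this balanced setting reduces to a one-way ANOVA-type F ratio.

```python
import numpy as np

rng = np.random.default_rng(7)
m, k, dt = 40, 10, 1.0            # units, measurements per unit, time step
mu, s_mu, sigma = 1.0, 0.3, 0.5   # mean drift, drift sd (random effect), diffusion

# Wiener degradation increments: dX_ij ~ N(mu_i * dt, sigma^2 * dt)
drift = rng.normal(mu, s_mu, size=m)                 # unit-specific drifts
dX = rng.normal(drift[:, None] * dt, sigma * np.sqrt(dt), size=(m, k))

muhat = dX.mean(axis=1) / dt                         # per-unit drift estimates

# Under H0 (no random effect) the between-unit variance of muhat is only
# sampling noise, sigma^2 / (k * dt); an excess indicates unit-to-unit variability.
s2_within = dX.var(axis=1, ddof=1).mean() / dt       # pooled estimate of sigma^2
F = muhat.var(ddof=1) / (s2_within / (k * dt))       # F(m-1, m(k-1)) under H0
print(round(F, 2))                                   # values well above 1 suggest a random effect
```

With s_mu > 0, as here, the ratio is far above its null expectation of about 1, mirroring how the variance-estimate test detects the random effect.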


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Dapeng Hu ◽  
Chong Wang ◽  
Annette M. O’Connor

Abstract Background Network meta-analysis (NMA) is a statistical method used to combine results from several clinical trials and simultaneously compare multiple treatments using direct and indirect evidence. Statistical heterogeneity describes the variability in the intervention effects evaluated across the different studies in a network meta-analysis. One approach to dealing with statistical heterogeneity is to perform a random effects network meta-analysis that incorporates a between-study variance into the statistical model. A common assumption in the random effects model for network meta-analysis is the homogeneity of between-study variance across all interventions. However, there are applications of NMA where the single between-study variance assumption is potentially incorrect and the model should instead incorporate more than one between-study variance. Methods In this paper, we develop an approach to testing the homogeneity of the between-study variance assumption based on a likelihood ratio test. A simulation study was conducted to assess the type I error and power of the proposed test. This method is then applied to a network meta-analysis of antibiotic treatments for bovine respiratory disease (BRD). Results The type I error rate was well controlled in the Monte Carlo simulation. We found statistical evidence (p value = 0.052) against the homogeneous between-study variance assumption in the BRD network meta-analysis. The point estimates and confidence intervals of relative effect sizes are strongly influenced by this assumption. Conclusions Since the homogeneous between-study variance assumption is a strong one, it is crucial to test its validity before conducting a network meta-analysis. Here we propose and validate a method for testing this single between-study variance assumption, which is widely used in many NMAs.


1997 ◽  
Vol 61 (4) ◽  
pp. 335-350 ◽  
Author(s):  
A. P. MORRIS ◽  
J. C. WHITTAKER ◽  
R. N. CURNOW
