A likelihood ratio test for the homogeneity of between-study variance in network meta-analysis

2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Dapeng Hu ◽  
Chong Wang ◽  
Annette M. O’Connor

Abstract Background Network meta-analysis (NMA) is a statistical method used to combine results from several clinical trials and to compare multiple treatments simultaneously using direct and indirect evidence. Statistical heterogeneity describes the variability in the intervention effects being evaluated across the studies in a network meta-analysis. One approach to dealing with statistical heterogeneity is to perform a random effects network meta-analysis that incorporates a between-study variance into the statistical model. A common assumption in the random effects model for network meta-analysis is the homogeneity of the between-study variance across all interventions. However, there are applications of NMA where this single between-study variance assumption is potentially incorrect and the model should instead incorporate more than one between-study variance. Methods In this paper, we develop an approach to testing the homogeneity of between-study variance assumption based on a likelihood ratio test. A simulation study was conducted to assess the type I error and power of the proposed test. The method is then applied to a network meta-analysis of antibiotic treatments for bovine respiratory disease (BRD). Results The type I error rate was well controlled in the Monte Carlo simulation. We found statistical evidence (p value = 0.052) against the homogeneous between-study variance assumption in the BRD network meta-analysis. The point estimates and confidence intervals of the relative effect sizes are strongly influenced by this assumption. Conclusions Since the homogeneous between-study variance assumption is a strong one, it is crucial to test its validity before conducting a network meta-analysis. Here we propose and validate a method for testing this single between-study variance assumption, which is widely used in many NMAs.
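
The test itself is a standard nested-model comparison. As a rough illustration only (not the authors' NMA implementation, which would handle design matrices and multi-arm correlations), the sketch below fits a simplified grouped random-effects model by maximum likelihood twice: once with a shared between-study variance and once with one variance per comparison group. Here `y` holds study-level effect estimates, `v` their within-study variances, and `groups` the comparison labels; all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(params, y, v, groups, shared_tau2):
    """Negative Gaussian marginal log-likelihood of a grouped
    random-effects model; params = group means, then log tau^2 value(s)."""
    g_ids = np.unique(groups)
    mu, log_tau2 = params[:len(g_ids)], params[len(g_ids):]
    nll = 0.0
    for k, g in enumerate(g_ids):
        idx = groups == g
        tau2 = np.exp(log_tau2[0] if shared_tau2 else log_tau2[k])
        var = v[idx] + tau2  # within-study plus between-study variance
        nll += 0.5 * np.sum(np.log(2 * np.pi * var) + (y[idx] - mu[k]) ** 2 / var)
    return nll

def lrt_homogeneous_tau2(y, v, groups):
    """LRT statistic and p-value for H0: one shared between-study variance."""
    g = len(np.unique(groups))
    fit0 = minimize(neg_loglik, np.zeros(g + 1), args=(y, v, groups, True),
                    method="Nelder-Mead")
    fit1 = minimize(neg_loglik, np.zeros(2 * g), args=(y, v, groups, False),
                    method="Nelder-Mead")
    stat = 2 * (fit0.fun - fit1.fun)  # 2 * (ll_alt - ll_null)
    return stat, chi2.sf(max(stat, 0.0), df=g - 1)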


2016 ◽  
Vol 38 (4) ◽  
Author(s):  
Rainer W. Alexandrowicz

One important tool for assessing whether a data set can be described equally well by a Rasch Model (RM) or a Linear Logistic Test Model (LLTM) is the Likelihood Ratio Test (LRT). In practical applications this test seems to reject the null hypothesis too often, even when the null hypothesis is true. Aside from obvious reasons, such as inadequate restrictiveness of the linear restrictions formulated in the LLTM or the RM not holding, doubts have arisen as to whether the test maintains the nominal type-I error risk, that is, whether its theoretically derived sampling distribution applies. The present contribution therefore explores the sampling distribution of the likelihood ratio test comparing a Rasch model with a Linear Logistic Test Model. Particular attention is paid to the issue of similar columns in the weight matrix W of the LLTM: although full column rank of this matrix is a technical requirement, columns may differ in only a few entries, which in turn might affect the sampling distribution of the test statistic. A scheme for generating weight matrices with similar columns was therefore established and tested in a simulation study. The results were twofold. In general, the matrices considered in the study yielded LRT results where the empirical alpha showed only spurious deviations from the nominal alpha; hence the nominal alpha seems to be maintained up to random variation. Yet one specific matrix clearly indicated a highly inflated type-I error risk: the empirical alpha was at least twice the nominal alpha when this weight matrix was used. This shows that the internal structure of the weight matrix must indeed be considered when applying the LRT to test the LLTM. Best practice would be to perform a simulation or bootstrap/resampling study for the weight matrix under consideration in order to rule out a misleadingly significant result due to reasons other than true model misfit.
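
A minimal sketch of that bootstrap recommendation, under the assumption of user-supplied fitting and simulation routines: `fit_null`, `fit_alt`, and `simulate_null` are hypothetical callables (e.g., wrapping any Rasch/LLTM estimation backend), where the fitters return an object exposing the maximized log-likelihood as `.loglik`.

```python
import numpy as np

def bootstrap_lrt_pvalue(data, fit_null, fit_alt, simulate_null,
                         n_boot=500, rng=None):
    """Calibrate the LRT null distribution by parametric bootstrap:
    refit both models on data simulated from the fitted null model."""
    rng = np.random.default_rng(rng)
    null_model = fit_null(data)
    stat_obs = 2 * (fit_alt(data).loglik - null_model.loglik)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        rep = simulate_null(null_model, rng)  # replicate data under H0
        stats[b] = 2 * (fit_alt(rep).loglik - fit_null(rep).loglik)
    # proportion of bootstrap statistics at least as extreme as observed
    return stat_obs, (1 + np.sum(stats >= stat_obs)) / (n_boot + 1)
```

For a weight matrix W with nearly identical columns, comparing this bootstrap p-value against the nominal chi-square p-value would reveal the kind of type-I error inflation the study reports.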


Methodology ◽  
2012 ◽  
Vol 8 (4) ◽  
pp. 134-145 ◽  
Author(s):  
Fabiola González-Betanzos ◽  
Francisco J. Abad

The current research compares the effects of several strategies for establishing the anchor subtest when testing for differential item functioning (DIF) with the IRT likelihood ratio test in one- and two-stage procedures. Two one-stage strategies were examined: (1) "one item" and (2) "all other items" used as anchor. Additionally, two two-stage strategies were tested: (3) "one anchor item with posterior anchor test augmentation" and (4) "all other items with purification." The strategies were compared in a simulation study in which sample size, DIF size, type of DIF, and software implementation (MULTILOG vs. IRTLRDIF) were manipulated. Results indicated that Procedure (1) was more efficient than (2). Purification was found to improve Type I error rates substantially with the "all other items" strategy, while "posterior anchor test augmentation" did not yield a significant improvement. Regarding the software used, MULTILOG generally offered better results than IRTLRDIF.
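
As a sketch of the purification idea in strategy (4), assuming a hypothetical callable `dif_test(item, anchor)` that returns the p-value of an IRT likelihood ratio DIF test of `item` against the given anchor subtest:

```python
def purified_anchor(items, dif_test, alpha=0.05, max_iter=10):
    """Iteratively remove DIF-flagged items from the anchor until the
    flagged set stabilizes; returns (flagged items, purified anchor)."""
    flagged = set()
    for _ in range(max_iter):
        anchor = [i for i in items if i not in flagged]
        new_flagged = {i for i in items
                       if dif_test(i, [j for j in anchor if j != i]) < alpha}
        if new_flagged == flagged:
            break  # converged: no change in the flagged set
        flagged = new_flagged
    return flagged, [i for i in items if i not in flagged]
```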


2017 ◽  
Vol 41 (6) ◽  
pp. 403-421 ◽  
Author(s):  
Sandip Sinharay

Benefiting from item preknowledge is a major type of fraudulent behavior during educational assessments. Belov suggested the posterior shift statistic for detecting item preknowledge and showed its performance to be better, on average, than that of seven other statistics when the set of compromised items is known. Sinharay suggested a statistic based on the likelihood ratio test for detecting item preknowledge; the advantage of this statistic is that its null distribution is known. Results from simulated and real data, for both adaptive and nonadaptive tests, are used to demonstrate that the Type I error rate and power of the statistic based on the likelihood ratio test are very similar to those of the posterior shift statistic. Thus, the statistic based on the likelihood ratio test appears promising for detecting item preknowledge when the set of compromised items is known.
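
A simplified two-sided sketch of such an LRT (the published statistic is signed to capture direction; the 2PL setup with known item parameters and all names below are illustrative assumptions): a single ability fitted to all items is compared against separate abilities on the compromised and uncompromised subsets.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def nll_2pl(theta, resp, a, b):
    """Negative 2PL log-likelihood for one examinee (known a, b)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log1p(-p))

def preknowledge_lrt(resp, a, b, compromised):
    """resp: 0/1 responses; compromised: boolean mask of leaked items."""
    fit = lambda m: minimize_scalar(nll_2pl, bounds=(-5, 5), method="bounded",
                                    args=(resp[m], a[m], b[m])).fun
    full = minimize_scalar(nll_2pl, bounds=(-5, 5), method="bounded",
                           args=(resp, a, b)).fun
    split = fit(compromised) + fit(~compromised)  # one theta per subset
    stat = 2 * (full - split)  # one extra ability parameter under H1
    return stat, chi2.sf(stat, df=1)
```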


2016 ◽  
Vol 46 (7) ◽  
pp. 1158-1164
Author(s):  
Betania Brum ◽  
Sidinei José Lopes ◽  
Daniel Furtado Ferreira ◽  
Lindolfo Storck ◽  
Alberto Cargnelutti Filho

ABSTRACT: The likelihood ratio test (LRT) for the independence between two sets of variables makes it possible to identify whether there is a dependency relationship between them. The aim of this study was to calculate the type I error and power of the LRT for determining independence between two sets of variables under multivariate normal distributions, in scenarios consisting of combinations of 16 sample sizes, 40 combinations of the numbers of variables in the two groups, and nine degrees of correlation between the variables (for the power). The type I error rate and power were calculated in 640 and 5,760 scenarios, respectively. The performance of the LRT was evaluated by computer simulation using the Monte Carlo method, with 2,000 simulations in each scenario. When the number of variables was large (24), the LRT controlled the type I error rate and showed high power for sample sizes greater than 100. For small sample sizes (25, 30, and 50), the test performed well provided the number of variables did not exceed 12.
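
The test in question is the classical Wilks' lambda LRT. A minimal sketch follows; the Bartlett-type scaling factor used here is one common textbook form, and variants differ slightly across references.

```python
import numpy as np
from scipy.stats import chi2

def lrt_independence(X, Y):
    """Wilks' lambda LRT of independence; X is (n, p), Y is (n, q)."""
    n, p = X.shape
    q = Y.shape[1]
    S = np.cov(np.hstack([X, Y]), rowvar=False)  # joint sample covariance
    S11, S22 = S[:p, :p], S[p:, p:]
    # Wilks' lambda: ratio of generalized variances under H1 vs. H0
    lam = np.linalg.det(S) / (np.linalg.det(S11) * np.linalg.det(S22))
    stat = -(n - (p + q + 3) / 2) * np.log(lam)  # Bartlett-type scaling
    return stat, chi2.sf(stat, df=p * q)
```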


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Lan Huang ◽  
Jyoti Zalkikar ◽  
Ram Tiwari

Pre- and postmarket drug safety evaluations usually include an integrated summary of results obtained using data from multiple studies related to a drug of interest. This paper proposes three approaches based on the likelihood ratio test (LRT), called the LRT methods, for drug safety signal detection from large observational databases with multiple studies, with a focus on identifying signals of adverse events (AEs) among the many AEs associated with a particular drug or, inversely, signals of drugs associated with a particular AE. The methods discussed include the simple pooled LRT method and variations such as the weighted LRT, which incorporates the total drug exposure information by study. The power and type-I error of the LRT methods are evaluated in a simulation study with varying heterogeneity across studies. For illustration purposes, these methods are applied to Proton Pump Inhibitor (PPI) data with 6 studies, for the effect of concomitant use of PPIs in treating patients with osteoporosis, and to Lipiodol (a contrast agent) data with 13 studies, for evaluating that drug's safety profile.
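
A much-simplified sketch in the spirit of such LRT scans (not the paper's pooled or weighted estimators): AE counts for one drug are treated as multinomial against expected reporting proportions, the one-sided log likelihood ratio is maximized over AEs, and the null distribution of the maximum is obtained by Monte Carlo. All names and the single-study multinomial setup are assumptions for illustration.

```python
import numpy as np

def log_lr(n_j, N, e_j):
    """One-sided binomial log likelihood ratio for a single AE count."""
    if n_j <= N * e_j:
        return 0.0  # no excess reporting, no signal contribution
    ll = n_j * np.log(n_j / (N * e_j))
    if n_j < N:
        ll += (N - n_j) * np.log((N - n_j) / (N * (1 - e_j)))
    return ll

def max_lrt_scan(counts, expected_props, n_mc=10_000, seed=0):
    """Scan AE counts for one drug; expected_props must sum to one."""
    counts = np.asarray(counts, dtype=int)
    e = np.asarray(expected_props, dtype=float)
    N = int(counts.sum())
    obs = np.array([log_lr(c, N, ej) for c, ej in zip(counts, e)])
    rng = np.random.default_rng(seed)
    null_max = np.array([
        max(log_lr(c, N, ej) for c, ej in zip(rng.multinomial(N, e), e))
        for _ in range(n_mc)])
    p_value = (1 + np.sum(null_max >= obs.max())) / (n_mc + 1)
    return obs.max(), p_value, int(obs.argmax())  # statistic, p, top AE index
```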


2020 ◽  
Vol 29 (12) ◽  
pp. 3666-3683
Author(s):  
Dominic Edelmann ◽  
Maral Saadati ◽  
Hein Putter ◽  
Jelle Goeman

Standard tests for the Cox model, such as the likelihood ratio test or the Wald test, do not perform well in situations where the number of covariates is substantially higher than the number of observed events. This issue is exacerbated in competing risks settings, where the number of observed occurrences of each event type is usually rather small. Yet appropriate testing methodology for competing risks survival analysis with few events per variable is missing. In this article, we show how to extend the global test for survival by Goeman et al. to competing risks and multistate models. Conducting detailed simulation studies, we show that, both for type I error control and for power, the novel test outperforms the likelihood ratio test and the Wald test based on the cause-specific hazards model in settings where the number of events is small compared to the number of covariates. The benefit of the global tests for competing risks survival analysis and multistate models is further demonstrated in real data examples of cancer patients from the European Society for Blood and Marrow Transplantation.


Author(s):  
Georg Krammer

The Andersen LRT uses sample characteristics as split criteria to evaluate Rasch model fit or to conduct theory-driven hypothesis testing for a test. The power and Type I error of a random split criterion were evaluated in a simulation study. Results consistently show that a random split criterion lacks power.

