Maximum likelihood estimation based on Newton–Raphson iteration for the bivariate random effects model in test accuracy meta-analysis

2019 ◽  
Vol 29 (4) ◽  
pp. 1197-1211
Author(s):  
Brian H Willis ◽  
Mohammed Baragilly ◽  
Dyuti Coomar

A bivariate generalised linear mixed model is often used for meta-analysis of test accuracy studies. The model is complex and requires five parameters to be estimated. As there is no closed form for the likelihood function of the model, maximum likelihood estimates of the parameters have to be obtained numerically. Although generic functions that may estimate the parameters in these models have emerged, they remain opaque to many users. From first principles we demonstrate how the maximum likelihood estimates of the parameters may be obtained using two methods based on Newton–Raphson iteration: the first uses the profile likelihood and the second the observed Fisher information. As convergence may depend on the proximity of the initial estimates to the global maximum, each algorithm includes a method for obtaining robust initial estimates. A simulation study was used to evaluate the algorithms and compare their performance with the generic generalised linear mixed model function glmer from the lme4 package in R, before applying them to two meta-analyses from the literature. In general, the two algorithms had higher convergence rates and coverage probabilities than glmer. Based on its performance characteristics, the method of profiling is recommended for fitting the bivariate generalised linear mixed model for meta-analysis.
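The abstract does not reproduce the algorithms themselves, but the general shape of a Newton–Raphson maximisation of a log-likelihood can be sketched in a few lines of base R. The sketch below is illustrative only: the finite-difference derivatives and the toy objective (a logistic regression, whose log-likelihood is globally concave) are assumptions, not the authors' profile-likelihood or observed-information implementations. In the bivariate GLMM setting the likelihood is not globally concave, which is why the abstract emphasises robust initial estimates.

```r
## Generic Newton-Raphson maximisation of a log-likelihood with
## finite-difference gradient and Hessian (illustrative sketch only).

num_grad <- function(f, x, h = 1e-5) {
  sapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

num_hess <- function(f, x, h = 1e-4) {
  p <- length(x)
  H <- matrix(NA_real_, p, p)
  for (i in 1:p) H[i, ] <- num_grad(function(z) num_grad(f, z, h)[i], x, h)
  (H + t(H)) / 2                          # symmetrise
}

newton_raphson <- function(loglik, theta0, tol = 1e-8, maxit = 100) {
  theta <- theta0
  for (it in 1:maxit) {
    g <- num_grad(loglik, theta)
    H <- num_hess(loglik, theta)
    step <- solve(H, g)                   # Newton step: solve H step = gradient
    theta <- theta - step
    if (sqrt(sum(step^2)) < tol) break
  }
  list(estimate = theta, iterations = it)
}

## Toy usage: maximum likelihood for a logistic regression
set.seed(1)
x <- rnorm(100)
y <- rbinom(100, 1, plogis(-0.5 + 1.2 * x))
ll <- function(b) sum(dbinom(y, 1, plogis(b[1] + b[2] * x), log = TRUE))
newton_raphson(ll, theta0 = c(0, 0))
```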

2010 ◽  
Vol 49 (01) ◽  
pp. 54-64 ◽  
Author(s):  
J. Menke

Objectives: Meta-analysis allows pooled sensitivities and specificities from several primary diagnostic test accuracy studies to be summarized. Often these pooled estimates are obtained indirectly from a hierarchical summary receiver operating characteristic (HSROC) analysis. This article presents a generalized linear random-effects model, implemented with the new SAS PROC GLIMMIX, that obtains the pooled estimates of sensitivity and specificity directly. Methods: First, the formula of the bivariate random-effects model is presented in the context of the literature. Its implementation with SAS PROC GLIMMIX is then evaluated empirically against the indirect HSROC approach, using the published 2 × 2 count data of 50 meta-analyses. Results: In the empirical evaluation, the meta-analytic results from the bivariate GLIMMIX approach are nearly identical to those from the indirect HSROC approach. Conclusions: A generalized linear mixed model fitted with PROC GLIMMIX offers a straightforward method for bivariate random-effects meta-analysis of sensitivity and specificity.
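The paper's implementation is in SAS PROC GLIMMIX and is not shown here. As a rough analogue in R, the same bivariate logit random-effects model for sensitivity and specificity is commonly fitted with lme4::glmer, as sketched below; the 2 × 2 count data and column names are purely hypothetical.

```r
## Sketch (not the paper's SAS code): bivariate logit random-effects model for
## sensitivity and specificity fitted with lme4::glmer.
library(lme4)

## Hypothetical example data: one row per study with 2x2 counts
dat <- data.frame(study = factor(1:10),
                  tp = c(20, 15, 30, 12, 45, 22, 18, 25, 33, 28),
                  fn = c( 5,  4,  6,  3, 10,  7,  2,  5,  8,  6),
                  tn = c(40, 35, 50, 30, 80, 44, 38, 52, 60, 47),
                  fp = c( 6,  5,  9,  4, 12,  8,  5,  7, 10,  9))

## Stack to one row per study and outcome ("sens" in diseased, "spec" in non-diseased)
long <- data.frame(
  study   = rep(dat$study, 2),
  outcome = factor(rep(c("sens", "spec"), each = nrow(dat))),
  true    = c(dat$tp, dat$tn),                     # correctly classified
  n       = c(dat$tp + dat$fn, dat$tn + dat$fp))   # group totals

## Separate fixed effects and correlated random effects for the two outcomes
fit <- glmer(cbind(true, n - true) ~ 0 + outcome + (0 + outcome | study),
             family = binomial, data = long)

plogis(fixef(fit))   # pooled sensitivity and specificity
```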


2018 ◽  
Vol 28 (10-11) ◽  
pp. 3286-3300 ◽  
Author(s):  
Aristidis K Nikoloulopoulos

For a particular disease, two diagnostic tests may have been developed, each of which has been evaluated in several studies. A quadrivariate generalised linear mixed model (GLMM) has recently been proposed for jointly meta-analysing and comparing two diagnostic tests. We propose a D-vine copula mixed model for joint meta-analysis and comparison of two diagnostic tests. Our general model includes the quadrivariate GLMM as a special case and can also operate on the original scale of sensitivities and specificities. The method allows the direct calculation of sensitivity and specificity for each test, as well as the parameters of the summary receiver operating characteristic (SROC) curve, along with a comparison between the SROC curves of the two tests. Our methodology is demonstrated with an extensive simulation study and illustrated by meta-analysing two examples in which two tests for the diagnosis of a particular disease are compared. Our study suggests that the proposed model can improve on the GLMM in fit to the data, since it can also accommodate tail dependence and asymmetry.
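As a hedged illustration of how summary estimates and an SROC-type curve can be read off bivariate logit-scale parameters (whether these come from a GLMM or from a copula mixed model), the R sketch below uses one standard construction, the conditional expectation of logit sensitivity given logit specificity. The parameter values are hypothetical and the sketch is not the paper's method for comparing two tests.

```r
## Sketch: summary point and an SROC-type curve from bivariate logit-scale
## parameters (hypothetical values; in practice taken from the fitted model).
mu  <- c(se = 1.5, sp = 2.0)            # means of logit(Se), logit(Sp)
Sig <- matrix(c(0.6, -0.3, -0.3, 0.8),  # random-effects covariance matrix
              2, 2, dimnames = list(names(mu), names(mu)))

## Summary operating point on the probability scale
summary_point <- plogis(mu)

## SROC via the conditional expectation E[logit Se | logit Sp]
beta <- Sig["se", "sp"] / Sig["sp", "sp"]
fpr  <- seq(0.01, 0.99, length.out = 200)          # 1 - specificity
logit_sp <- qlogis(1 - fpr)
sroc_se  <- plogis(mu["se"] + beta * (logit_sp - mu["sp"]))

plot(fpr, sroc_se, type = "l", xlab = "1 - Specificity", ylab = "Sensitivity")
points(1 - summary_point["sp"], summary_point["se"], pch = 19)
```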


2018 ◽  
Author(s):  
Daniel W. Heck

To facilitate the interpretation of systematic mean differences in within-subject designs, Nathoo, Kilshaw, and Masson (2018, Journal of Mathematical Psychology, 86, 1-9) proposed a Bayesian within-subject highest-density interval (HDI). However, their approach rests on independent maximum-likelihood estimates of the random effects, which do not take estimation uncertainty and shrinkage into account. I propose an extension of Nathoo et al.'s method using a fully Bayesian, two-step approach. First, posterior samples are drawn for the linear mixed model. Second, the within-subject HDI is computed repeatedly based on the posterior samples, thereby accounting for estimation uncertainty and shrinkage. After marginalizing over the posterior distribution, the two-step approach yields a Bayesian within-subject HDI with a width similar to that of the classical within-subject confidence interval proposed by Loftus and Masson (1994, Psychonomic Bulletin & Review, 1, 476-490).
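The highest-density interval itself is straightforward to compute from posterior draws by finding the shortest interval containing the desired probability mass. The base-R sketch below shows only that generic step; in the two-step approach it would be applied to within-subject quantities derived from each posterior draw of the linear mixed model, which is not shown here.

```r
## Minimal sketch: shortest (highest-density) interval from posterior draws.
hdi <- function(draws, mass = 0.95) {
  x <- sort(draws)
  n <- length(x)
  m <- ceiling(mass * n)                 # number of draws inside the interval
  widths <- x[m:n] - x[1:(n - m + 1)]    # width of every candidate interval
  i <- which.min(widths)
  c(lower = x[i], upper = x[i + m - 1])
}

## Toy usage with draws from a skewed "posterior"
set.seed(1)
hdi(rgamma(10000, shape = 2, rate = 1))
```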


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Aristidis K. Nikoloulopoulos

A recent paper proposed an extended trivariate generalized linear mixed model (TGLMM) for the synthesis of diagnostic test accuracy studies in the presence of non-evaluable index test results. Inspired by that model, we propose an extended trivariate vine copula mixed model that includes the TGLMM as a special case but can also operate on the original scale of sensitivity, specificity, and disease prevalence. The performance of the proposed vine copula mixed model is examined in extensive simulation studies in comparison with the TGLMM. The simulation studies show that the TGLMM leads to biased meta-analytic estimates of sensitivity, specificity, and prevalence when the univariate random effects are misspecified, whereas the vine copula mixed model gives nearly unbiased estimates of the test accuracy indices and disease prevalence. Our general methodology is illustrated by meta-analysing coronary CT angiography studies.
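As a purely illustrative sketch of the kind of misspecification studied (not the authors' simulation design), one can generate study-specific sensitivities, specificities and prevalences from beta distributions on the probability scale, so that a model assuming normal random effects on the logit scale is misspecified. All values below are hypothetical, and the handling of non-evaluable index test results is omitted.

```r
## Sketch of a data-generating step with non-normal (beta) random effects on
## the probability scale; the paper's simulation design is far more extensive.
set.seed(42)
n_studies  <- 25
n_subjects <- 200

## Study-specific Se, Sp and prevalence drawn from beta distributions
se   <- rbeta(n_studies, 18, 4)     # mean roughly 0.82
sp   <- rbeta(n_studies, 24, 3)     # mean roughly 0.89
prev <- rbeta(n_studies, 3, 12)     # mean roughly 0.20

## 2x2 counts per study given the diseased / non-diseased split
diseased <- rbinom(n_studies, n_subjects, prev)
tp <- rbinom(n_studies, diseased, se)
tn <- rbinom(n_studies, n_subjects - diseased, sp)
sim <- data.frame(study = 1:n_studies, tp, fn = diseased - tp,
                  tn, fp = n_subjects - diseased - tn)
head(sim)
```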


2021 ◽  
Vol 50 (Supplement_1) ◽  
pp. i12-i42
Author(s):  
M B Zazzara ◽  
P M Wells ◽  
R C E Bowyer ◽  
M N Lochlainn ◽  
E J Thompson ◽  
...  

Introduction: Periodontitis is a chronic inflammatory disease affecting the periodontium, ultimately leading to looseness and/or loss of teeth. Sarcopenia refers to the age-related reduction in muscle mass and strength. As in periodontitis, chronic low-grade inflammation is thought to play a key role in its development, and both conditions increase in prevalence with advancing age. Despite known associations with other diseases involving a dysregulated inflammatory response, for example rheumatoid arthritis, the relationship between periodontitis and sarcopenia, and whether the two could be driven by similar processes, remains uncertain. The aim of this study was to explore the association between periodontitis and sarcopenia. Methods: Observational study of 2040 adult volunteers [age 67.18 (12.17)] enrolled in the TwinsUK cohort study. Presence of tooth mobility and number of teeth lost were used to assess periodontal health, and a binary variable was created to define periodontitis. Measurements of muscle strength, muscle quality/quantity and physical performance were used to assess sarcopenia, and a categorical variable was created according to the European Working Group on Sarcopenia in Older People (EWGSOP2) consensus to define sarcopenia (1: probable; 2: positive; 3: severe). Generalised linear mixed model analysis was used on complete-case and age-matched (n = 1,288) samples to ascertain associations between periodontitis and sarcopenia. Results: No significant association was found between periodontitis and sarcopenia in either the complete-case analysis or the age-matched analysis. Results were consistent when the analysis was adjusted for potential confounders including body mass index, frailty index, Mini Mental State Examination, smoking, nutritional status and educational level. Conclusions: This study found no significant association between periodontitis and sarcopenia in a cohort of 2040 adults. Although both periodontitis and sarcopenia have been linked to a dysregulated immune response and increase in prevalence with advancing age, our findings remain inconclusive given the plethora of possible aetiopathogenetic pathways.
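The abstract does not give the model specification. One plausible form of the generalised linear mixed model analysis, sketched in R with lme4::glmer, is shown below with entirely hypothetical variable names and toy data; a random intercept for family is assumed here to reflect the twin structure, and sarcopenia could equally be taken as the outcome.

```r
## Sketch only: logistic mixed model for the periodontitis-sarcopenia
## association with a random intercept for family (twin pairing). The data
## below are simulated placeholders, not the TwinsUK data.
library(lme4)

set.seed(10)
n <- 400
twins <- data.frame(
  family_id     = factor(rep(1:200, each = 2)),   # twin pairs
  age           = rnorm(n, 67, 12),
  bmi           = rnorm(n, 26, 4),
  smoking       = rbinom(n, 1, 0.2),
  sarcopenia    = rbinom(n, 1, 0.15),
  periodontitis = rbinom(n, 1, 0.3)
)

## Further confounders from the abstract (frailty index, MMSE, nutritional
## status, education) would be added to the right-hand side in the same way.
fit <- glmer(periodontitis ~ sarcopenia + age + bmi + smoking + (1 | family_id),
             family = binomial, data = twins)
coef(summary(fit))
```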


2020 ◽  
pp. 1471082X2096691
Author(s):  
Amani Almohaimeed ◽  
Jochen Einbeck

Random effect models have been a mainstream statistical technique for several decades, and the same can be said for response transformation models such as the Box–Cox transformation. The latter aims to ensure that the assumptions of normality and homoscedasticity of the response distribution are fulfilled, which are essential conditions for inference based on a linear model or a linear mixed model. However, methodology for response transformation with simultaneous inclusion of random effects has been developed and implemented only rarely, and is so far restricted to Gaussian random effects. We develop such methodology without requiring parametric assumptions on the distribution of the random effects. This is achieved by extending the ‘Nonparametric Maximum Likelihood’ towards a ‘Nonparametric profile maximum likelihood’ technique, which can deal with overdispersion as well as two-level data scenarios.
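For the fixed-effects case, the profile log-likelihood of the Box–Cox parameter can be written in a few lines of base R, as in the sketch below. The paper's contribution, profiling out a nonparametric (discrete) random-effects distribution at each value of the transformation parameter, is only indicated in the comments and not implemented here; the toy data are an assumption.

```r
## Sketch: profile log-likelihood of the Box-Cox parameter lambda for a plain
## linear model. At each lambda the paper's method would additionally profile
## out a nonparametric (discrete) random-effects distribution; not shown here.
boxcox_profile <- function(y, X, lambdas) {
  n <- length(y)
  sapply(lambdas, function(lam) {
    z <- if (abs(lam) < 1e-8) log(y) else (y^lam - 1) / lam   # Box-Cox transform
    rss <- sum(lm.fit(X, z)$residuals^2)
    ## Gaussian profile log-likelihood plus the Jacobian of the transformation
    -n / 2 * log(rss / n) + (lam - 1) * sum(log(y))
  })
}

## Toy usage: positive response whose log is roughly linear in x
set.seed(7)
x <- runif(100, 1, 5)
y <- exp(0.3 + 0.5 * x + rnorm(100, sd = 0.2))
lams <- seq(-2, 2, by = 0.05)
prof <- boxcox_profile(y, cbind(1, x), lams)
lams[which.max(prof)]   # estimated lambda (close to 0, i.e. a log transform)
```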


2014 ◽  
Vol 54 (10) ◽  
pp. 1853 ◽  
Author(s):  
N. G. McPhail ◽  
J. L. Stark ◽  
A. J. Ball ◽  
R. D. Warner

Chilled lamb meat exported from Australia has, on occasion, been rejected by importing countries due to greening after only 6 weeks of storage. Greening is known to be more prevalent in beef with a high ultimate pH (pHu; >5.9). Few data are available on the occurrence of high-pHu meat in lamb carcasses in Australia, which may have an impact on the understanding and control of quality and greening during storage. The aim of this project was to determine the prevalence of, and factors influencing, high-pHu meat in a range of muscle types in lamb carcasses in Australia. Muscle pHu data were collected from a total of 1614 carcasses from 78 lots at four lamb processing plants in Victoria and New South Wales in autumn and spring of 2013. The pHu of the knuckle (rectus femoris), rack (longissimus) and blade (infraspinatus) was measured and data on carcass and lot characteristics were recorded. Data were subjected to restricted maximum likelihood and generalised linear mixed model analysis. The mean pHu values of the knuckle, rack and blade were 6.06, 5.79 and 6.12 respectively, and the main factors influencing muscle pHu and the occurrence of dark-cutting were breed, season, electrical stimulation and carcass weight. Merino lambs had a higher pHu in the blade and knuckle than did other breeds (P < 0.05 and P < 0.01, respectively). Lambs processed in autumn had a higher predicted pHu in the blade and knuckle, and a higher percentage of dark-cutting (DC; pHu > 6.0) in those muscles, than did those processed in spring (P < 0.05). Carcasses that had been electrically stimulated had a higher %DC and a higher pHu in all three muscles (P < 0.05). Carcass weight had a significant effect on the pHu of all three muscles (P < 0.001), with heavier carcasses having a lower pHu and lower %DC. The pHu of the rack was not a reliable predictor of the pHu in other muscles of the lamb carcass. In conclusion, the high occurrence of DC in these muscles, particularly the blade and knuckle, suggests that they may be at risk of producing greening in the vacuum bag during storage.
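The abstract names the analysis methods (restricted maximum likelihood and a generalised linear mixed model) but not the model formulas. One plausible reading, sketched with lme4 in R using entirely hypothetical column names, simulated placeholder data and a lot-within-plant random effect, is:

```r
## Sketch only: a REML linear mixed model for muscle pHu and a logistic mixed
## model for dark-cutting (pHu > 6.0); all data below are simulated placeholders.
library(lme4)

set.seed(2013)
n <- 600
lamb <- data.frame(
  plant  = factor(sample(1:4, n, replace = TRUE)),
  lot    = factor(sample(1:20, n, replace = TRUE)),
  muscle = factor(sample(c("knuckle", "rack", "blade"), n, replace = TRUE)),
  breed  = factor(sample(c("Merino", "Crossbred"), n, replace = TRUE)),
  season = factor(sample(c("autumn", "spring"), n, replace = TRUE)),
  stimulation    = rbinom(n, 1, 0.5),
  carcass_weight = rnorm(n, 22, 3)
)
lamb$phu <- 5.8 + 0.15 * (lamb$muscle != "rack") + rnorm(n, 0, 0.2)
lamb$dark_cutting <- as.integer(lamb$phu > 6.0)   # DC defined as pHu > 6.0

## Linear mixed model for pHu, fitted by REML, with lots nested within plants
ph_fit <- lmer(phu ~ muscle + breed + season + stimulation + carcass_weight +
                 (1 | plant/lot), data = lamb, REML = TRUE)

## Logistic mixed model for the probability of dark-cutting
dc_fit <- glmer(dark_cutting ~ muscle + breed + season + stimulation +
                  carcass_weight + (1 | plant/lot),
                family = binomial, data = lamb)
```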


2018 ◽  
Vol 147 ◽  
Author(s):  
A. Aswi ◽  
S. M. Cramb ◽  
P. Moraga ◽  
K. Mengersen

Dengue fever (DF) is one of the world's most disabling mosquito-borne diseases, with a variety of approaches available to model its spatial and temporal dynamics. This paper aims to identify and compare the different spatial and spatio-temporal Bayesian modelling methods that have been applied to DF and examine influential covariates that have been reportedly associated with the risk of DF. A systematic search was performed in December 2017, using Web of Science, Scopus, ScienceDirect, PubMed, ProQuest and Medline (via Ebscohost) electronic databases. The search was restricted to refereed journal articles published in English from January 2000 to November 2017. Thirty-one articles met the inclusion criteria. Using a modified quality assessment tool, the median quality score across studies was 14/16. The most popular Bayesian statistical approach to dengue modelling was a generalised linear mixed model with spatial random effects described by a conditional autoregressive prior. A limited number of studies included spatio-temporal random effects. Temperature and precipitation were shown to often influence the risk of dengue. Developing spatio-temporal random-effect models, considering other priors, using a dataset that covers an extended time period, and investigating other covariates would help to better understand and control DF transmission.
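The conditional autoregressive (CAR) prior identified as the most popular choice can be described through a precision matrix built from the area adjacency matrix. The base-R sketch below simulates a single draw from a proper CAR prior on a toy chain of areas; the adjacency structure and the values of tau and alpha are hypothetical, and in practice the spatial effect would sit inside a full Bayesian hierarchical model fitted by MCMC or similar.

```r
## Sketch: a proper CAR spatial random effect built directly from an adjacency
## matrix, using a simple chain of areas and hypothetical tau/alpha values.
n     <- 10
W     <- matrix(0, n, n)                  # adjacency matrix
W[cbind(1:(n - 1), 2:n)] <- 1             # neighbours along a line of areas
W     <- W + t(W)
D     <- diag(rowSums(W))                 # number of neighbours per area

tau   <- 2                                # precision (hypothetical)
alpha <- 0.9                              # spatial dependence (hypothetical)
Q     <- tau * (D - alpha * W)            # CAR precision matrix

set.seed(3)
U   <- chol(Q)                            # Q = t(U) %*% U
phi <- backsolve(U, rnorm(n))             # one draw of the spatial effect
round(phi, 2)
```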

