Joint modeling of longitudinal and survival data with a covariate subject to a limit of detection

2017 ◽  
Vol 28 (2) ◽  
pp. 486-502 ◽  
Author(s):  
Abdus Sattar ◽  
Sanjoy K Sinha

We develop and study an innovative method for jointly modeling longitudinal response and time-to-event data with a covariate subject to a limit of detection. The joint model assumes a latent process based on random effects to describe the association between the longitudinal and time-to-event data. We study the role of the association parameter in the regression parameter estimators. We model the longitudinal and survival outcomes using linear mixed-effects and Weibull frailty models, respectively. Because of the limit of detection, covariate (explanatory variable, x) values may be non-ignorably missing, resulting in biased parameter estimates with poor coverage probabilities for the confidence intervals. We define and estimate the probability of missingness due to the limit of detection. We then develop a novel joint density, and hence a likelihood function, that incorporates the effect of the left-censored covariate. Monte Carlo simulations show that the estimators of the proposed method are approximately unbiased and provide the expected coverage probabilities for the parameters of both the longitudinal and survival submodels. We also present an application of the proposed method using a large clinical dataset of pneumonia patients obtained from the Genetic and Inflammatory Markers of Sepsis study.
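As a rough illustration of the left-censoring mechanism this abstract describes, the sketch below fits a normal covariate subject to a limit of detection by maximum likelihood: observed values contribute their density, while values below the detection limit contribute the cumulative probability of falling below it. The data, detection limit, and names are invented for the sketch; the paper's actual joint likelihood additionally integrates over the random effects linking the two submodels, which is not shown here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative only: maximum likelihood for a normal covariate that is
# left-censored at a limit of detection d (a Tobit-style likelihood).
# Observed values contribute the density; censored ones contribute P(X < d).
def neg_loglik(params, x, observed, d):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)          # log scale keeps sigma positive
    ll = norm.logpdf(x[observed], mu, sigma).sum()
    ll += (~observed).sum() * norm.logcdf(d, mu, sigma)
    return -ll

rng = np.random.default_rng(0)
x_true = rng.normal(1.0, 1.0, size=2000)
d = 0.0                                # hypothetical limit of detection
observed = x_true > d
x = np.where(observed, x_true, d)      # values below d are not seen

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(x, observed, d))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

Ignoring the censored contribution (or substituting d for the missing values) is what produces the bias the abstract refers to; the censored term restores an approximately unbiased fit.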

2016 ◽  
Vol 27 (4) ◽  
pp. 1258-1270 ◽  
Author(s):  
Huirong Zhu ◽  
Stacia M DeSantis ◽  
Sheng Luo

Longitudinal zero-inflated count data are encountered frequently in substance-use research when assessing the effects of covariates and risk factors on outcomes. Often, both the time to a terminal event such as death or dropout and repeated count responses are collected for each subject. In this setting, the longitudinal counts are censored by the terminal event, and the time to the terminal event may depend on the longitudinal outcomes. In the study described herein, we expand the class of joint models for longitudinal and survival data to accommodate zero-inflated counts and time-to-event data by using a Cox proportional hazards model with a piecewise-constant baseline hazard. We use a Bayesian framework via Markov chain Monte Carlo simulations implemented in the BUGS programming language. In an extensive simulation study, the joint model yields estimates that are more accurate than those of the corresponding independence model. We apply the proposed method to the Alpha-Tocopherol, Beta-Carotene lung cancer prevention study.
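With the piecewise-constant baseline hazard mentioned above, the cumulative hazard reduces to a sum of rate-times-exposure terms over the intervals, which is what makes this Cox sub-model tractable. A minimal sketch of the resulting survival function (the cut points and rates below are our own illustration, not from the study):

```python
import numpy as np

# Illustrative only: S(t) under a piecewise-constant hazard, where the
# hazard equals lambdas[j] on [cuts[j], cuts[j+1]) and cuts[0] = 0.
def pc_survival(t, cuts, lambdas):
    edges = np.append(cuts, np.inf)                    # last interval is open
    exposure = np.clip(t - edges[:-1], 0, np.diff(edges))  # time spent in each interval
    return np.exp(-np.sum(np.asarray(lambdas) * exposure)) # exp(-cumulative hazard)
```

For example, with cuts `[0, 1, 2]` and rates `[0.5, 1.0, 2.0]`, a subject at t = 1.5 accrues cumulative hazard 0.5·1 + 1.0·0.5 = 1.0, so S(1.5) = e⁻¹.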


2021 ◽  
Article 096228022110028 ◽
Author(s):  
T Baghfalaki ◽  
M Ganjali

Joint modeling of zero-inflated count and time-to-event data is usually performed by applying a shared random-effects model. This kind of joint model can be viewed as a latent Gaussian model. In this paper, integrated nested Laplace approximation (INLA) is used to perform approximate Bayesian inference for the joint model. We propose a zero-inflated hurdle model, under a Poisson or negative binomial distributional assumption, as the sub-model for the count data, and a Weibull model as the survival-time sub-model. In addition to the usual joint linear model, a joint partially linear model is considered to account for a non-linear effect of time on the longitudinal count response. The performance of the method is investigated in simulation studies and compared with the usual Bayesian approach via Markov chain Monte Carlo (MCMC). We also apply the proposed method to analyze two real data sets: the first from a longitudinal study of pregnancy and the second from an HIV study.
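As a point of reference for the hurdle sub-model: in a hurdle formulation, zeros come from a separate binary component, and positive counts follow a zero-truncated count distribution. The sketch below evaluates a Poisson hurdle log-likelihood; the parameterization and names are ours, and the paper's joint fit with the Weibull sub-model via INLA is not reproduced here.

```python
import numpy as np
from scipy.stats import poisson

# Illustrative only: log-likelihood of a Poisson hurdle model.
# P(Y = 0) = pi0; positive counts follow a zero-truncated Poisson(lam),
# i.e. pmf(y) / (1 - pmf(0)) with pmf(0) = exp(-lam).
def hurdle_loglik(y, pi0, lam):
    y = np.asarray(y)
    zero = y == 0
    ll = zero.sum() * np.log(pi0)
    ll += (np.log(1.0 - pi0)
           + poisson.logpmf(y[~zero], lam)
           - np.log1p(-np.exp(-lam))).sum()
    return ll
```

Unlike a zero-inflated model, where zeros can arise from both components, the hurdle model attributes every zero to the binary component, so the probabilities of the two parts sum cleanly to one.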


2021 ◽  
Author(s):  
Chongliang Luo ◽  
Rui Duan ◽  
Yong Chen

Objective: We developed and evaluated a privacy-preserving One-shot Distributed Algorithm for the Cox model with heterogeneity (ODACH) to analyze multi-center time-to-event data without sharing patient-level information across sites, while accounting for heterogeneity across sites by allowing site-specific baseline hazard functions and feature distributions. Materials and Methods: We constructed a surrogate likelihood function to approximate the Cox log partial likelihood, stratified by site, using patient-level data from a single site and aggregated information from the other sites. The ODACH estimator was obtained by maximizing the surrogate likelihood function. We evaluated and compared the performance of ODACH with meta-analysis in extensive numerical studies. Results: The simulation study showed that ODACH provided estimates close to the pooled estimator, which is obtained by directly analyzing patient-level data from all sites via a stratified Cox model. The relative bias was <1% across all scenarios. As a comparison, the meta-analysis estimator, obtained as the inverse-variance-weighted average of the site-specific estimates, had substantial bias when the event rate was <5%, with the relative bias reaching 12% when the event rate was 1%. Conclusions: ODACH is a privacy-preserving and communication-efficient method for analyzing multi-center time-to-event data that allows the baseline hazard functions, as well as the covariate distributions, to vary across sites. It provides estimates close to the pooled estimator and substantially outperforms the meta-analysis estimator when the event is rare. It is thus well suited for studying rare events with heterogeneous baseline hazards across sites in a distributed manner.
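The one-shot surrogate-likelihood idea described above can be sketched in a few lines. For a self-contained illustration we use logistic regression in place of the stratified Cox partial likelihood: each site shares only its gradient at a common initial estimate, and a lead site adds a linear correction to its local likelihood so that the surrogate's gradient matches the global one. All data, site counts, and names here are invented, and this is a simplification of the actual ODACH construction.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative only: first-order surrogate likelihood across sites,
# using logistic regression as a stand-in for the Cox model.
def negll(beta, X, y):
    eta = X @ beta
    return np.mean(np.log1p(np.exp(eta)) - y * eta)

def grad(beta, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (p - y) / len(y)

rng = np.random.default_rng(1)
beta_true = np.array([0.5, -1.0])
sites = []
for _ in range(3):                     # patient-level data stays at each site
    X = rng.normal(size=(500, 2))
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta_true))))
    sites.append((X, y))

# Step 1: initial value, e.g. the average of the local estimates.
beta_bar = np.mean([minimize(negll, np.zeros(2), args=s).x for s in sites], axis=0)

# Step 2: each site shares only aggregate information (its gradient at beta_bar).
g_global = np.mean([grad(beta_bar, X, y) for X, y in sites], axis=0)

# Step 3: the lead site corrects its local likelihood with the global gradient
# and maximizes the surrogate (here: minimizes the surrogate negative LL).
X1, y1 = sites[0]
surrogate = lambda b: negll(b, X1, y1) + (g_global - grad(beta_bar, X1, y1)) @ b
beta_hat = minimize(surrogate, beta_bar).x
```

By construction the surrogate's gradient at `beta_bar` equals the global gradient, so its maximizer tracks the pooled estimator while only one round of aggregate statistics ever leaves each site.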


2016 ◽  
Vol 25 (4) ◽  
pp. 1661-1676 ◽  
Author(s):  
Edmund N Njagi ◽  
Geert Molenberghs ◽  
Dimitris Rizopoulos ◽  
Geert Verbeke ◽  
Michael G Kenward ◽  
...  
