partial likelihood
Recently Published Documents

TOTAL DOCUMENTS: 172 (five years: 37)
H-INDEX: 22 (five years: 4)

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Katrin Madjar ◽  
Jörg Rahnenführer

Abstract
Background: An important task in clinical medicine is the construction of risk prediction models for specific subgroups of patients based on high-dimensional molecular measurements such as gene expression data. Major objectives in modeling high-dimensional data are good prediction performance and feature selection to find a subset of predictors that are truly associated with a clinical outcome such as a time-to-event endpoint. In clinical practice, this task is challenging since patient cohorts are typically small and can be heterogeneous with regard to the relationship between predictors and outcome. When data from several subgroups of patients with the same or a similar disease are available, it is tempting to combine them to increase sample size, as in multicenter studies. However, heterogeneity between subgroups can lead to biased results, and subgroup-specific effects may remain undetected.
Methods: For this situation, we propose a penalized Cox regression model with a weighted version of the Cox partial likelihood that includes patients of all subgroups but assigns them individual weights based on their subgroup affiliation. The weights are estimated from the data such that patients who are likely to belong to the subgroup of interest receive higher weights in the subgroup-specific model.
Results: Our proposed approach is evaluated through simulations and application to real lung cancer cohorts, and compared to existing approaches. Simulation results demonstrate that the proposed model is superior to standard approaches in terms of prediction performance and variable selection accuracy when the sample size is small.
Conclusions: The results suggest that sharing information between subgroups by incorporating appropriate weights into the likelihood can increase the power to identify prognostic covariates and improve risk prediction.
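As a point of reference, a standard weighted Cox partial log-likelihood of the kind described above can be written as follows (a generic textbook form; the paper's subgroup-specific weight estimation is not reproduced here):

```latex
\ell_w(\beta) = \sum_{i=1}^{n} \delta_i \, w_i \left[ x_i^\top \beta
  - \log \sum_{j \in R(t_i)} w_j \exp\!\left(x_j^\top \beta\right) \right]
```

Here \(\delta_i\) is the event indicator, \(R(t_i)\) the risk set at time \(t_i\), and \(w_i\) the weight reflecting how likely patient \(i\) is to belong to the subgroup of interest.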


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Leili Tapak ◽  
Michael R. Kosorok ◽  
Majid Sadeghifar ◽  
Omid Hamidi ◽  
Saeid Afshar ◽  
...  

Variable selection and penalized regression models in high-dimensional settings have become an increasingly important topic in many disciplines. For instance, omics data generated in biomedical research may be associated with patient survival and can suggest insights into disease dynamics, helping to identify patients with worse prognosis and to improve therapy. Analysis of high-dimensional time-to-event data in the presence of competing risks requires special modeling techniques. So far, some attempts have been made at variable selection in low- and high-dimensional competing risk settings using partial likelihood-based procedures. In this paper, a weighted likelihood-based penalized approach is extended for direct variable selection under the subdistribution hazards model for high-dimensional competing risk data. The proposed method considers a larger class of semiparametric regression models for the subdistribution and allows time-varying effects to be taken into account, which is of particular importance because the proportional hazards assumption may not be valid in general, especially in high-dimensional settings. The model also relaxes the constraint on simultaneously modeling multiple cumulative incidence functions that arises with the Fine and Gray approach. The performance of several penalties, including the minimax concave penalty (MCP), adaptive LASSO, and smoothly clipped absolute deviation (SCAD), as well as their L2 counterparts, was investigated through simulation studies in terms of sensitivity and specificity. The results revealed that the sensitivity of all penalties was comparable, but the MCP and MCP-L2 penalties outperformed the other methods in terms of selecting fewer noninformative variables. The practical use of the model was investigated through the analysis of genomic competing risk data from patients with bladder cancer; six genes (CDC20, NCF2, SMARCAD1, RTN4, ETFDH, and SON) were identified by all the methods and were significantly associated with the subdistribution.
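For orientation, the generic form of the penalized estimation problem behind such approaches, with the MCP penalty spelled out, is (standard definitions, not the paper's exact notation):

```latex
\hat{\beta} = \arg\min_{\beta} \Big\{ -\ell(\beta) + \sum_{j=1}^{p} p_{\lambda}(|\beta_j|) \Big\},
\qquad
p_{\lambda}^{\mathrm{MCP}}(t) =
\begin{cases}
\lambda t - \dfrac{t^{2}}{2\gamma}, & t \le \gamma\lambda,\\[4pt]
\tfrac{1}{2}\gamma\lambda^{2}, & t > \gamma\lambda,
\end{cases}
```

where \(\ell(\beta)\) is the (weighted) log likelihood for the subdistribution model and \(\gamma > 1\) controls how quickly the MCP penalty levels off; SCAD and the adaptive LASSO replace \(p_\lambda\) with their own penalty functions.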


Thorax ◽  
2021 ◽  
pp. thoraxjnl-2020-215632
Author(s):  
Yun-Jiu Cheng ◽  
Zhen-Guang Chen ◽  
Feng-Juan Yao ◽  
Li-Juan Liu ◽  
Ming Zhang ◽  
...  

Background: Growing evidence suggests that compromised lung health may be linked to cardiovascular disease. However, little is known about its association with sudden cardiac death (SCD).
Objectives: We aimed to assess the link between impaired lung function, airflow obstruction, and risk of SCD by race and gender in four US communities.
Methods: A total of 14 708 Atherosclerosis Risk in Communities (ARIC) study participants who underwent spirometry and were asked about lung health (1987–1989) were followed. The main outcome was physician-adjudicated SCD. Fine-Gray proportional subdistribution hazard models with Firth's penalised partial likelihood correction were used to estimate the HRs.
Results: Over a median follow-up of 25.4 years, 706 (4.8%) subjects experienced SCD. The incidence of SCD was inversely associated with FEV1 in each of the four race and gender groups and across all smoking status categories. After adjusting for multiple measured confounders, HRs of SCD comparing the lowest with the highest quintile of FEV1 were 2.62 (95% CI 1.62 to 4.26) for white males, 1.80 (95% CI 1.03 to 3.15) for white females, 2.07 (95% CI 1.05 to 4.11) for black males and 2.62 (95% CI 1.21 to 5.65) for black females. These associations were consistently observed among never smokers. Moderate to very severe airflow obstruction was associated with an increased risk of SCD. Adding FEV1 significantly improved the predictive power for SCD.
Conclusions: Impaired lung function and airflow obstruction were associated with an increased risk of SCD in the general population. Additional research to elucidate the underlying mechanisms is warranted.
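Firth's correction referred to above, in its standard form for Cox-type models (not specific to this study), replaces the log partial likelihood \(\ell(\beta)\) with a penalised version:

```latex
\ell^{*}(\beta) = \ell(\beta) + \tfrac{1}{2} \log \left| \mathcal{I}(\beta) \right|
```

where \(\mathcal{I}(\beta)\) is the observed information matrix of the partial likelihood; the penalty reduces small-sample bias and stabilises estimation when events are relatively rare.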


2021 ◽  
Author(s):  
Pablo Gonzalez Ginestet ◽  
Philippe Weitz ◽  
Mattias Rantalainen ◽  
Erin E Gabriel

Abstract
Background: Prognostic models are of high relevance in many medical application domains. However, many common machine learning methods have not been developed for direct applicability to right-censored outcome data. Recently there have been adaptations of these methods to make predictions based only on structured data (such as clinical data). Pseudo-observations have been suggested as a data pre-processing step to address right-censoring in deep neural networks. There is theoretical backing for the use of pseudo-observations to replace the right-censored response outcome, and this allows algorithms and loss functions designed for continuous, non-censored data to be used. Medical images have been used to predict time-to-event outcomes with deep convolutional neural network (CNN) methods using a Cox partial likelihood loss function under the proportional hazards assumption. We propose a method to predict the cumulative incidence from images and structured clinical data by combining pseudo-observations and convolutional neural networks.
Results: The performance of the proposed method is assessed in simulation studies and a real data example in breast cancer from The Cancer Genome Atlas (TCGA). The results are compared to an existing convolutional neural network with a Cox loss. Our simulation results show that the proposed method performs similarly to or even outperforms the comparator, particularly in settings where both the dependent censoring and the survival time do not follow proportional hazards and the sample size is large. The results found in the application to the TCGA data are consistent with the simulation results for small sample settings, where both methods perform similarly.
Conclusions: The proposed method facilitates the application of deep CNN methods to time-to-event data and allows the use of simple and easy-to-modify loss functions, thus contributing to modern image-based precision medicine.
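A minimal sketch of the pseudo-observation pre-processing step is shown below for a single event type, using the jackknife on the Kaplan-Meier estimator (the paper's competing-risks setting would use the Aalen-Johansen estimator of the cumulative incidence instead; all function names and the toy data are illustrative):

```python
import numpy as np

def km_event_prob(time, event, t0):
    """Kaplan-Meier estimate of P(T <= t0) = 1 - S(t0)."""
    surv = 1.0
    for t in np.sort(np.unique(time[event == 1])):
        if t > t0:
            break
        at_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        surv *= 1.0 - d / at_risk
    return 1.0 - surv

def pseudo_observations(time, event, t0):
    """Jackknife pseudo-observations of the event probability at t0.

    These replace the right-censored outcome, so a standard regression
    loss (e.g. MSE) can be used to train a CNN on images/clinical data.
    """
    n = len(time)
    full = km_event_prob(time, event, t0)
    pseudo = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        loo = km_event_prob(time[mask], event[mask], t0)
        pseudo[i] = n * full - (n - 1) * loo
    return pseudo

# toy usage with simulated data
rng = np.random.default_rng(0)
time = rng.exponential(5.0, size=200)
event = rng.binomial(1, 0.7, size=200)   # 1 = event observed, 0 = censored
targets = pseudo_observations(time, event, t0=3.0)
```

The naive jackknife above costs O(n^2) Kaplan-Meier fits; for large cohorts an incremental update of the estimator is typically used instead.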


2021 ◽  
Vol 116 ◽  
pp. 102077
Author(s):  
Christopher M. Wilson ◽  
Kaiqiao Li ◽  
Qiang Sun ◽  
Pei Fen Kuan ◽  
Xuefeng Wang

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Kassim Tawiah ◽  
Wahab Abdul Iddrisu ◽  
Killian Asampana Asosega

Discrete count time series data with an excessive number of zeros have warranted the development of zero-inflated time series models that accommodate the inflation of zeros and the overdispersion that comes with it. In this paper, we investigated the characteristics of the trend of the daily count of COVID-19 deaths in Ghana using zero-inflated models. We observe that the trend of COVID-19 deaths per day in Ghana shows a general increase from the onset of the pandemic in the country to about day 160, after which there is a general decrease. We fitted a zero-inflated Poisson autoregressive model and a zero-inflated negative binomial autoregressive model to the data in the partial-likelihood framework. The zero-inflated negative binomial autoregressive model outperformed the zero-inflated Poisson autoregressive model. On the other hand, the dynamic zero-inflated Poisson autoregressive model performed better than the dynamic negative binomial autoregressive model. The new deaths predicted by the zero-inflated negative binomial autoregressive model indicate that Ghana's COVID-19 deaths per day will rise sharply a few days after 30th November 2020 and then fall drastically, just as in the observed data.
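A rough analogue of such a model can be sketched in Python by placing lagged counts among the covariates of a zero-inflated Poisson regression (a simplified stand-in for the paper's partial-likelihood autoregressive specification; the series below is simulated, not the Ghanaian data):

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# placeholder series standing in for daily COVID-19 death counts
rng = np.random.default_rng(1)
y = rng.poisson(2, size=300) * rng.binomial(1, 0.6, size=300)

df = pd.DataFrame({"y": y})
df["lag1"] = df["y"].shift(1)   # first-order autoregressive term
df["lag7"] = df["y"].shift(7)   # weekly lag to capture reporting cycles
df = df.dropna()

# intercept plus lagged counts as covariates for the count component;
# the zero-inflation component defaults to an intercept-only logit model
X = np.column_stack([np.ones(len(df)), df["lag1"], df["lag7"]])
model = ZeroInflatedPoisson(df["y"].values, X, inflation="logit")
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())
```

Swapping in ZeroInflatedNegativeBinomialP from the same statsmodels module gives the negative binomial counterpart.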


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Todd A. MacKenzie ◽  
Pablo Martinez-Camblor ◽  
A. James O’Malley

Abstract
Background: Estimation that employs instrumental variables (IV) can reduce or eliminate bias due to confounding. In observational studies, instruments arise from natural experiments, such as the effect of clinician preference or geographic distance on treatment selection. In randomized studies the randomization indicator is typically a valid instrument, especially if the study is blinded, e.g. with no placebo effect. Estimation via instruments is a highly developed field for linear models, but the use of instruments in time-to-event analysis is far from established. Various IV-based estimators of the hazard ratio (HR) from Cox's regression models have been proposed.
Methods: We extend IV-based estimation of Cox's model beyond proportionality of hazards, and address estimation of a log-linear time-dependent hazard ratio and a piecewise constant HR. We estimate the marginal time-dependent hazard ratio, unlike other approaches that estimate the hazard ratio conditional on the omitted covariates. We use estimating equations motivated by martingale representations that resemble the partial likelihood score statistic. We conducted simulations that use copulas to generate potential times-to-event that have a given marginal structural time-dependent hazard ratio but are dependent on omitted covariates. We compare our approach to the partial likelihood estimator and two other IV-based approaches. We apply it to estimation of the time-dependent hazard ratio for two vascular interventions.
Results: The method performs well in simulations of a stepwise time-dependent hazard ratio, but shows some bias that increases as the hazard ratio moves away from unity (the value that typically underlies the null hypothesis). It compares well to other approaches when the hazard ratio is stepwise constant. It also performs well for estimation of a log-linear hazard ratio, where no other instrumental variable approaches exist.
Conclusion: The estimating equations we propose for estimating a time-dependent hazard ratio using an IV perform well in simulations. We encourage the use of our procedure for time-dependent hazard ratio estimation when unmeasured confounding is a concern and a suitable instrumental variable exists.
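For context, the standard Cox partial likelihood score statistic that the proposed estimating equations resemble is (the IV-based modification itself is not reproduced here):

```latex
U(\beta) = \sum_{i=1}^{n} \delta_i \left[ x_i
  - \frac{\sum_{j \in R(t_i)} x_j \exp\!\left(x_j^\top \beta\right)}
         {\sum_{j \in R(t_i)} \exp\!\left(x_j^\top \beta\right)} \right]
```

with \(\delta_i\) the event indicator and \(R(t_i)\) the risk set at time \(t_i\); setting \(U(\beta) = 0\) yields the usual partial likelihood estimator.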

