Bayesian Estimation in the Proportional Hazards Model of Random Censorship under Asymmetric Loss Functions

2012 ◽ Vol 11 (0) ◽ pp. 72-88
2012 ◽ Vol 04 (03) ◽ pp. 1250021
Author(s): Muhammad Yameen Danish, Muhammad Aslam

This paper deals with Bayesian estimation of the parameters in the proportional hazards model of random censorship for the Weibull distribution under different loss functions. We consider both informative and noninformative priors on the model parameters and obtain the Bayes estimates using a Gibbs sampling scheme. Maximum likelihood estimates are also obtained for comparison. A simulation study is carried out to examine the behavior of the proposed estimators for different sample sizes and censoring parameters, and a real data analysis is presented for illustration.
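To fix ideas, the following is a minimal Python sketch of the setup the abstract describes, not the paper's own procedure: under the proportional hazards model of random censorship (the Koziol-Green model), the censoring survival function is a power of the lifetime survival function, so with a Weibull lifetime the censoring time is also Weibull with the same shape. The parameterization (shape alpha, rate lambda, censoring parameter beta) and the purely numerical maximization of the likelihood are assumptions made for illustration; the paper's Gibbs sampling step for the Bayes estimates is not reproduced here.

# Hedged illustration only: simulate from a Weibull Koziol-Green model and
# compute maximum likelihood estimates numerically. Parameterization assumed:
# S_X(t) = exp(-lam * t**alpha), S_C(t) = S_X(t)**beta.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def simulate_koziol_green(n, alpha, lam, beta):
    """Draw (t, delta): lifetime X ~ Weibull(shape alpha, rate lam) and
    censoring C with survival S_X(t)**beta, i.e. Weibull(alpha, rate beta*lam)."""
    x = rng.exponential(1.0 / lam, n) ** (1.0 / alpha)
    c = rng.exponential(1.0 / (beta * lam), n) ** (1.0 / alpha)
    t = np.minimum(x, c)
    delta = (x <= c).astype(float)   # 1 = event observed, 0 = censored
    return t, delta

def neg_log_lik(params, t, delta):
    """Negative log-likelihood of (t, delta) under the model above; params on log scale."""
    alpha, lam, beta = np.exp(params)
    n, d = len(t), delta.sum()
    ll = (n * np.log(alpha) + n * np.log(lam) + (alpha - 1.0) * np.log(t).sum()
          + (n - d) * np.log(beta) - (1.0 + beta) * lam * (t ** alpha).sum())
    return -ll

t, delta = simulate_koziol_green(200, alpha=1.5, lam=0.8, beta=0.5)
res = minimize(neg_log_lik, x0=np.zeros(3), args=(t, delta), method="Nelder-Mead")
print("MLEs (alpha, lambda, beta):", np.exp(res.x))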


2021 ◽ Author(s): Fabrizio Kuruc, Harald Binder, Moritz Hess

Abstract
Deep neural networks are now frequently employed to predict survival conditional on omics-type biomarkers, e.g. by employing the partial likelihood of the Cox proportional hazards model as the loss function. Because the number of observations in clinical studies is generally limited, combining different data sets has been proposed to improve learning of the network parameters. However, if baseline hazards differ between the studies, the assumptions of the Cox proportional hazards model are violated. Based on high-dimensional transcriptome profiles from different tumor entities, we demonstrate how using a stratified partial likelihood as the loss function accounts for the different baseline hazards in a deep learning framework. Additionally, we compare the partial likelihood with the ranking loss, which is frequently employed as a loss function in machine learning approaches due to its seeming simplicity. Using RNA-seq data from The Cancer Genome Atlas (TCGA), we show that the use of stratified loss functions leads to overall better discriminatory power and lower prediction error compared to their nonstratified counterparts. We investigate which genes are identified as having the greatest marginal impact on the prediction of survival when using different loss functions. We find that, while similar genes are identified, known prognostic genes in particular receive higher importance from stratified loss functions. Taken together, pooling data from different sources for improved parameter learning of deep neural networks benefits greatly from employing stratified loss functions that account for potentially varying baseline hazards. For easy application, we provide PyTorch code for the stratified loss functions and an explanatory Jupyter notebook in a GitHub repository.
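To make the stratification idea concrete, here is a minimal sketch of a stratified negative Cox partial-likelihood loss in PyTorch. It is not the authors' released implementation (that code is in their GitHub repository); the function names, the simple Breslow-style risk-set computation, and the per-stratum averaging are assumptions made for illustration. In training, log_risk would be the scalar output of the survival network for each subject.

# Hedged illustration only: stratified negative Cox partial-likelihood loss.
import torch

def cox_ph_loss(log_risk, time, event):
    """Negative Cox partial log-likelihood for a single stratum.
    log_risk: (n,) network outputs eta_i; time: (n,) observed times;
    event: (n,) 1.0 for an observed event, 0.0 for censoring.
    Ties in event times are handled only approximately."""
    order = torch.argsort(time, descending=True)
    log_risk, event = log_risk[order], event[order]
    # Risk set of subject i = subjects still at risk at t_i, i.e. a running
    # logsumexp of eta over the descending-time ordering.
    log_cum_risk = torch.logcumsumexp(log_risk, dim=0)
    log_lik = ((log_risk - log_cum_risk) * event).sum()
    return -log_lik / event.sum().clamp(min=1.0)

def stratified_cox_ph_loss(log_risk, time, event, stratum):
    """Sum of per-stratum losses, so each stratum keeps its own baseline hazard."""
    loss = log_risk.new_zeros(())
    for s in torch.unique(stratum):
        m = stratum == s
        loss = loss + cox_ph_loss(log_risk[m], time[m], event[m])
    return loss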

