baseline hazard
Recently Published Documents

TOTAL DOCUMENTS: 49 (five years: 22)
H-INDEX: 7 (five years: 1)

2022
Author(s):
Marwa Elsaeed Elhefnawy
Siti Maisharah Sheikh Ghadzi
Orwa Albitar
Balamurugan Tangiisuran
Hadzliana Zainal
...

Abstract There is an established correlation between risk factors and the recurrence of ischemic stroke (IS); however, does the hazard of recurrent IS change even without the influence of established risk factors? This study aimed to quantify the hazard of recurrent IS at different time points after the index IS. This population cohort study extracted data on 7697 patients with a history of a first IS attack registered with the National Neurology Registry of Malaysia. A repeated time-to-recurrent-IS model was developed using NONMEM version 7.5. Three baseline hazard models were fitted to the data. The best model was selected using maximum likelihood estimation, clinical plausibility, and visual predictive checks. Three hundred and thirty-three (4.32%) patients developed at least one recurrent IS within the maximum 7.37 years of follow-up. In the absence of significant risk factors, the hazard of recurrent IS was predicted to be 0.71 within the first month after the index IS and fell to 0.022 between the first and third months after the index attack. The hazard of IS recurrence accelerated in the presence of typical risk factors such as hyperlipidaemia (HR, 2.64 [2.10-3.33]), hypertension (HR, 1.97 [1.43-2.72]), and ischemic heart disease (HR, 2.21 [1.69-2.87]). In conclusion, even in the absence of significant risk factors, the predicted hazard of recurrent IS was prominent in the first month after the index IS and remained non-zero three months after the index IS and later. Optimal secondary preventive treatment should incorporate this 'natural risk' of IS recurrence.
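As an illustration of how such piecewise hazard estimates combine with hazard ratios, the sketch below computes a cumulative recurrence risk from piecewise-constant monthly hazards. Treating the reported values as monthly rates under a multiplicative HR model is an assumption made here for illustration only, and all function and variable names are hypothetical.

```python
import math

def recurrence_risk(months, baseline, hazard_ratios=()):
    """Cumulative recurrence risk under a piecewise-constant hazard.

    baseline: list of (interval_length_in_months, monthly_hazard) pieces;
    hazard_ratios: multiplicative HRs for risk factors assumed present.
    """
    hr = 1.0
    for r in hazard_ratios:
        hr *= r
    cum = 0.0   # cumulative hazard accumulated so far
    t = 0.0     # left edge of the current piece
    for length, h in baseline:
        if months <= t:
            break
        dt = min(length, months - t)   # time spent inside this piece
        cum += h * hr * dt
        t += length
    return 1.0 - math.exp(-cum)

# Pieces loosely based on the reported estimates:
# hazard 0.71 in the first month, 0.022 per month in months 1-3.
pieces = [(1.0, 0.71), (2.0, 0.022)]
risk_no_factors = recurrence_risk(3.0, pieces)
risk_hyperlipidaemia = recurrence_risk(3.0, pieces, hazard_ratios=(2.64,))
```

Under these illustrative numbers, most of the three-month risk comes from the first month, matching the abstract's point that the early hazard dominates.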


Author(s):
Anthony Joe Turkson
Timothy Simpson
John Awuah Addor

A recurrent event remains the outcome variable of interest in many biometric studies. Recurrent events are events of defined interest that can occur to the same person more than once during the study period. This study presents an overview of pertinent models for analyzing recurrent events. Aims: To introduce, compare, evaluate, and discuss the pros and cons of four models for analyzing recurrent events, so as to validate previous findings regarding the superiority or appropriateness of these models. Study Design: A comparative study based on simulation of recurrent event models applied to tertiary data from cancer studies. Methodology: Code in R was implemented to simulate four recurrent event models: the Andersen and Gill (AG) model; the Prentice, Williams and Peterson (PWP) models; the Wei, Lin and Weissfeld (WLW) model; and the Cox frailty model. Finally, these models were applied to the first forty subjects from a study of bladder cancer tumors. The data set contained the first four recurrences of the tumor for each patient, and each recurrence time was recorded from the patient's entry into the study. Each time to an event or censoring defines an isolated risk interval. Results: The choice of model leads to different conclusions, and the choice depends on the risk intervals, baseline hazard, risk set, and correlation adjustment, or, more simply, on the type of data and the research question. The PWP-GT model can be used if the research question focuses on whether treatment was effective for the kth event since the previous event. However, if the research question asks whether treatment was effective for the kth event since the start of treatment, the PWP-TT model can be used. The AG model is adequate if a common baseline hazard can be assumed, but it lacks the detail and versatility of the event-specific models. The WLW model is well suited to data with diverse event types for the same person, as it allows a potentially different baseline hazard for each type. Conclusion: PWP-GT proved to be the most useful model for analyzing recurrent event data.
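The four models differ chiefly in how they define each subject's risk intervals. A minimal sketch (all names hypothetical) of the (start, stop] intervals each convention implies for a single subject:

```python
def risk_intervals(event_times, censor_time):
    """Construct counting-process risk intervals for one subject.

    AG and PWP-TT use total time from study entry; PWP-GT uses gap
    time since the previous event; WLW treats each event number as
    time from entry.
    """
    times = sorted(event_times) + [censor_time]
    total, gap = [], []
    prev = 0.0
    for t in times:
        total.append((prev, t))        # clock keeps running
        gap.append((0.0, t - prev))    # clock resets at each event
        prev = t
    wlw = [(0.0, t) for t in times]    # at risk for every event from entry
    return {"AG": total, "PWP-TT": total, "PWP-GT": gap, "WLW": wlw}

# A subject with events at months 3 and 9, censored at month 14:
iv = risk_intervals([3.0, 9.0], censor_time=14.0)
```

The stratification by event number and the restricted risk sets that distinguish the PWP models from AG and WLW are not shown here; the sketch only contrasts the time scales.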


2021
Vol 0 (0)
Author(s):
Jun Ma
Dominique-Laurent Couturier
Stephane Heritier
Ian C. Marschner

Abstract This paper considers the problem of semi-parametric proportional hazards model fitting where the observed survival times contain event times as well as interval-, left- and right-censoring times. Although this is not a new topic, many existing methods suffer from poor computational performance. In this paper, we adopt a more versatile penalized likelihood method to estimate the baseline hazard and the regression coefficients simultaneously. The baseline hazard is approximated using basis functions such as M-splines. A penalty is introduced to regularize the baseline hazard estimate and to reduce the dependence of the estimates on the knots of the basis functions. We propose a Newton–MI (multiplicative iterative) algorithm to fit this model. We also present novel asymptotic properties of our estimates, allowing for the possibility that some parameters of the approximate baseline hazard may lie on the boundary of the parameter space. Comparisons of our method against similar approaches are made through an intensive simulation study. The results demonstrate that our method is very stable and encounters virtually no numerical issues. A real data application involving melanoma recurrence is presented, and an R package, 'survivalMPL', implementing the method is available on CRAN.
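The multiplicative-iterative (MI) idea can be illustrated on a deliberately simplified version of the problem: a piecewise-constant baseline hazard with a squared-difference roughness penalty. This toy sketch uses the MI step alone rather than the paper's Newton–MI combination with M-spline bases, and every name in it is hypothetical.

```python
def mi_baseline_hazard(deaths, exposure, lam=0.0, iters=200):
    """Penalized piecewise-constant baseline hazard via a multiplicative
    update that preserves nonnegativity (the MI idea, in isolation).

    deaths[j], exposure[j]: event count and total time at risk in bin j
    (each bin is assumed to contain at least one event);
    lam: weight of a squared-difference roughness penalty.
    The update theta_j <- theta_j * (positive gradient part) /
    (negative gradient part) has the penalized score equation as its
    fixed point and keeps every theta_j positive.
    """
    m = len(deaths)
    theta = [sum(deaths) / sum(exposure)] * m   # crude initial value
    for _ in range(iters):
        new = []
        for j in range(m):
            left = theta[j - 1] if j > 0 else theta[j]
            right = theta[j + 1] if j < m - 1 else theta[j]
            pos = deaths[j] / theta[j] + 2 * lam * (left + right)
            neg = exposure[j] + 4 * lam * theta[j]
            new.append(theta[j] * pos / neg)
        theta = new
    return theta

# With lam=0 the update reproduces the occurrence/exposure rates d_j/E_j;
# with lam>0 adjacent hazards are pulled toward each other.
h_unpen = mi_baseline_hazard([5, 2], [100.0, 80.0], lam=0.0)
h_pen = mi_baseline_hazard([5, 2], [100.0, 80.0], lam=50.0)
```

The real algorithm alternates such steps with Newton updates of the regression coefficients and works on spline coefficients rather than raw bin heights; this sketch only shows why the multiplicative step cannot leave the nonnegative parameter region.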


2021
pp. 096228022110370
Author(s):
Chew-Seng Chee
Il Do Ha
Byungtae Seo
Youngjo Lee

A consequence of using a parametric frailty model with a nonparametric baseline hazard for analyzing clustered time-to-event data is that its regression coefficient estimates can be sensitive to the assumed frailty distribution. Recently, it has been proposed to specify both the baseline hazard and the frailty distribution nonparametrically and to estimate the unknown parameters by the maximum penalized likelihood method. Instead, in this paper, we propose the nonparametric maximum likelihood method for a general class of nonparametric frailty models, i.e., models where the frailty distribution is completely unspecified but the baseline hazard can be either parametric or nonparametric. The estimation procedure can be implemented by combining either the Broyden–Fletcher–Goldfarb–Shanno or the expectation-maximization algorithm with the constrained Newton algorithm with multiple support point inclusion. Simulation studies were conducted to investigate the performance of regression coefficient estimation under several different model-fitting methods. The simulation results show that our proposed regression coefficient estimator generally gives a reasonable bias reduction as the number of clusters increases, under various frailty distributions. Our proposed method is also illustrated with two data examples.
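As a much-simplified illustration of estimating an unspecified frailty distribution, the sketch below runs an EM algorithm for the frailty masses on a fixed support grid, assuming exponential event times with a known baseline rate. The paper's method additionally adjusts the support points themselves via the constrained Newton algorithm, which is not shown, and all names here are hypothetical.

```python
import math

def em_frailty_masses(times, base_rate, support, iters=500):
    """Toy EM for a discrete frailty distribution on a fixed support grid.

    Each subject's event time is Exponential(frailty * base_rate).
    E-step: posterior weight of each support point per subject.
    M-step: masses become the average posterior weights.
    """
    p = [1.0 / len(support)] * len(support)
    for _ in range(iters):
        counts = [0.0] * len(support)
        for t in times:
            lik = [p[k] * support[k] * base_rate
                   * math.exp(-support[k] * base_rate * t)
                   for k in range(len(support))]
            s = sum(lik)
            for k in range(len(support)):
                counts[k] += lik[k] / s   # posterior weight of point k
        p = [c / len(times) for c in counts]
    return p

# Half the subjects fail early, half late: the masses should split
# roughly evenly between a high and a low frailty support point.
times = [0.1] * 50 + [10.0] * 50
p = em_frailty_masses(times, base_rate=1.0, support=[0.1, 5.0])
```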


2021
Vol 21 (1)
Author(s):
Kim Jachno
Stephane Heritier
Rory Wolfe

Abstract Background Non-proportional hazards are common with time-to-event data, but the majority of randomised clinical trials (RCTs) are designed and analysed using approaches that assume the treatment effect follows proportional hazards (PH). Recent advances in oncology treatments have identified two forms of non-PH of particular importance: a time lag until treatment becomes effective, and an early effect of treatment that ceases after a period of time. In sample size calculations for treatment effects on time-to-event outcomes, where information is based on the number of events rather than the number of participants, correct specification of the baseline hazard rate is crucially important, among other considerations. Under PH, the shape of the baseline hazard has no effect on the resultant power and magnitude of treatment effects using standard analytical approaches. However, in a non-PH context the appropriateness of analytical approaches can depend on the shape of the underlying hazard. Methods A simulation study was undertaken to assess the impact of clinically plausible non-constant baseline hazard rates on the power, magnitude and coverage of commonly utilized regression-based measures of treatment effect and tests of survival curve difference for these two forms of non-PH used in RCTs with time-to-event outcomes. Results In the presence of even mild departures from PH, the power, average treatment effect size and coverage were adversely affected. Depending on the nature of the non-proportionality, non-constant event rates could further exacerbate or somewhat ameliorate the losses in power, treatment effect magnitude and coverage observed. No single summary measure of treatment effect was able to adequately describe the full extent of a potentially time-limited treatment benefit whilst maintaining power at nominal levels.
Conclusions Our results show the increased importance of considering plausible, potentially non-constant event rates when non-proportionality of treatment effects can be anticipated. In planning clinical trials with the potential for non-PH, even modest departures from an assumed constant baseline hazard could appreciably impact the power to detect treatment effects, depending on the nature of the non-PH. Comprehensive analysis plans may be required to accommodate the description of time-dependent treatment effects.
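A delayed treatment effect of the kind studied here can be simulated by inverse-transform sampling from a piecewise-constant hazard. The sketch below (hypothetical names; a minimal setup, not the paper's simulation design) draws survival times whose hazard changes only after a lag:

```python
import math, random

def sim_time(base_rate, hr, lag):
    """Draw one survival time when treatment only becomes effective
    after `lag`: hazard = base_rate before lag, base_rate * hr after.

    Inverse-transform: draw an Exp(1) target cumulative hazard, then
    invert the piecewise-linear cumulative hazard to get the time."""
    target = -math.log(1.0 - random.random())
    if target <= base_rate * lag:
        return target / base_rate
    return lag + (target - base_rate * lag) / (base_rate * hr)

# Control-arm rate 0.1/month, HR 0.5 kicking in after a 2-month lag.
random.seed(1)
times = [sim_time(0.1, 0.5, lag=2.0) for _ in range(20000)]
```

Before the lag the survival curve follows the untreated exponential; after it, the slope of the log-survival curve halves, which is exactly the pattern standard PH analyses struggle to summarize with a single hazard ratio.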


Author(s):
Maryam Rahmati
Parisa Rezanejad Asl
Javad Mikaeli
Hojjat Zeraati
Aliakbar Rasekhi

2021
Vol 55 (1)
pp. 29-44
Author(s):
Alphonce Bere
Godfrey H. Sithuba
Coster Mabvuu
Retang Mashabela
Caston Sigauke
...

We present the results of a simulation study performed to compare the accuracy of a lasso-type penalization method and gradient boosting in estimating the baseline hazard function and covariate parameters in discrete survival models. The mean square error results reveal that the lasso-type algorithm performs better in recovering the baseline hazard and covariate parameters. In particular, gradient boosting underestimates the sizes of the parameters and also has a high false positive rate. Similar results are obtained in an application to real-life data.
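Discrete survival models of the kind compared here are commonly fitted by expanding the data into person-period form and applying a (penalized or boosted) binary regression. A minimal sketch of that expansion, with hypothetical names:

```python
def person_period(subjects):
    """Expand (time, event) pairs into person-period rows for a
    discrete-time survival model: one row per subject per period at
    risk, with a binary outcome of 1 only in the final period of a
    subject who had the event. Fitting a logistic model (lasso-
    penalized or gradient-boosted) to these rows estimates the
    discrete baseline hazard and the covariate effects."""
    rows = []
    for sid, (time, event) in enumerate(subjects):
        for period in range(1, time + 1):
            y = 1 if (event and period == time) else 0
            rows.append({"id": sid, "period": period, "y": y})
    return rows

# One subject failing in period 3, one censored after period 2:
rows = person_period([(3, True), (2, False)])
```

In this representation the baseline hazard is simply the set of period-specific intercepts, which is why shrinkage methods that underestimate parameter sizes (as reported for gradient boosting above) distort the recovered hazard.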


Author(s):
Amanda Putri Tiyas Pratiwi
Sarini Abdullah
Ida Fithriani

The Cox PH model is one of the survival models widely used for analyzing time-to-event data. It consists of two main components: the baseline hazard, a time-dependent component, and an exponential function accommodating the explanatory variables. The baseline hazard is not estimated in the Cox PH model, so the model does not directly provide hazard rate estimates. Therefore, in this paper we discuss estimation of the baseline hazard through a piecewise constant hazard using a Bayesian method. A gamma distribution is assumed for the piecewise constant baseline hazard, and a normal distribution is assumed for the regression coefficients. Sampling from the posterior is conducted using Markov chain Monte Carlo via Gibbs sampling. Echocardiogram data containing 106 observations and 6 explanatory variables were used in the analysis. The results show that the baseline hazard functions were estimated and that each parameter in the model converged, as shown by the trace plots and posterior density plots.
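With a piecewise-constant hazard and a gamma prior, the full conditional of each piece's hazard is again a gamma distribution, which is what makes Gibbs sampling convenient here. A minimal sketch of that conjugate draw (hypothetical names; d_j and E_j denote the event count and total exposure accumulated in piece j at the current state of the sampler):

```python
import random

def sample_piece_hazards(deaths, exposure, a0=0.01, b0=0.01, seed=0):
    """One Gibbs-style draw of piecewise-constant baseline hazards.

    With a Gamma(a0, b0) prior and the Poisson-form likelihood of the
    piecewise exponential model, the full conditional of piece j's
    hazard is Gamma(a0 + d_j, b0 + E_j), so it is sampled directly.
    (random.gammavariate takes shape and *scale*, hence 1/(b0 + e).)"""
    rng = random.Random(seed)
    return [rng.gammavariate(a0 + d, 1.0 / (b0 + e))
            for d, e in zip(deaths, exposure)]

# Two pieces with 5 events over 100 time units and 2 events over 80:
draw = sample_piece_hazards([5, 2], [100.0, 80.0])
```

In the full sampler this draw alternates with updates of the regression coefficients, whose normal prior is not conjugate and typically needs a Metropolis or adaptive-rejection step; only the hazard update is shown.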


eLife
2020
Vol 9
Author(s):
Daniel Shriner
Amy R Bentley
Jie Zhou
Kenneth Ekoru
Ayo P Doumatey
...

Given a lifetime risk of ~90% by the ninth decade of life, it is unknown whether there are true controls for hypertension in epidemiological and genetic studies. Here, we compared Bayesian logistic and time-to-event approaches to modeling hypertension. The median age at hypertension onset was approximately a decade earlier in African Americans than in European Americans or Mexican Americans. The probability of being free of hypertension at 85 years of age in African Americans was less than half that in European Americans or Mexican Americans. In all groups, baseline hazard rates increased until nearly 60 years of age and then decreased but did not reach zero. Taken together, modeling of the baseline hazard function of hypertension suggests that there are no true controls and that controls in logistic regression are cases with a late age of onset.

