Recurrent Event: Recently Published Documents

Total documents: 450 (five years: 128)
H-index: 27 (five years: 3)

2022, Vol 22 (1)
Author(s): Gilma Hernández-Herrera, David Moriña, Albert Navarro

Abstract Background: When dealing with recurrent events in observational studies, it is common to include subjects who became at risk before follow-up began. This phenomenon is known as left censoring, and simply ignoring these prior episodes can lead to biased and inefficient estimates. We aimed to propose a statistical method that performs well in this setting. Methods: Our proposal was based on models with specific baseline hazards, in which the number of prior episodes was imputed when unknown and subjects were stratified according to whether they had been at risk of presenting the event before t = 0. A frailty term was also included. Two formulations of this “Specific Hazard Frailty Model Imputed” were used, based on the “counting process” and “gap time” time scales. Performance was then examined in different scenarios through a comprehensive simulation study. Results: The proposed method performed well even when the percentage of subjects at risk before follow-up was very high. Biases were often below 10% and coverages were around 95%, being somewhat conservative. The gap-time approach performed better with constant baseline hazards, whereas the counting-process approach performed better with non-constant baseline hazards. Conclusions: Common-baseline methods are not advised when knowledge of the prior episodes experienced by a participant is lacking. The approach in this study performed acceptably in most scenarios in which it was evaluated and should be considered an alternative in this context. It has been made freely available to interested researchers as the R package miRecSurv.
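The abstract does not show the miRecSurv interface itself, but the model it describes (event-specific baseline hazards stratified on the number of prior episodes and on pre-follow-up risk status, plus a subject-level frailty) can be sketched with the standard survival package. This is a minimal sketch under assumptions: the data frame dat and its columns (id, tstart, tstop, status, x, prior_epi, at_risk_before) are hypothetical placeholders, not the package's own API.

```r
library(survival)

# Hypothetical analysis data 'dat': one row per at-risk interval, with
#   id                       subject identifier
#   tstart, tstop, status    counting-process interval and event indicator
#   x                        a covariate of interest
#   prior_epi                imputed number of episodes before follow-up
#   at_risk_before           1 if at risk of the event before t = 0

# Specific (stratified) baseline hazards plus a frailty term,
# counting-process formulation:
fit_cp <- coxph(Surv(tstart, tstop, status) ~ x +
                  strata(prior_epi, at_risk_before) + frailty(id),
                data = dat)

# The same structure on the gap-time scale (the clock restarts after each event):
fit_gt <- coxph(Surv(tstop - tstart, status) ~ x +
                  strata(prior_epi, at_risk_before) + frailty(id),
                data = dat)
```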


Author(s): Anthony Joe Turkson, Timothy Simpson, John Awuah Addor

A recurrent event remains the outcome variable of interest in many biometric studies. Recurrent events can be described as events of defined interest that can occur to the same person more than once during the study period. This study presents an overview of several pertinent models for analyzing recurrent events. Aims: To introduce, compare, evaluate and discuss the pros and cons of four models for analyzing recurrent events, so as to validate previous findings regarding the superiority or appropriateness of these models. Study Design: A comparative study based on simulation of recurrent event models applied to tertiary data from cancer studies. Methodology: Code in R was implemented to simulate four recurrent event models, namely: the Andersen and Gill (AG) model; the Prentice, Williams and Peterson (PWP) models; the Wei, Lin and Weissfeld (WLW) model; and the Cox frailty model. Finally, these models were applied to analyze the first forty subjects from a study of bladder cancer tumors. The data set contained the first four recurrences of the tumor for each patient, and each recurrence time was recorded from the entry time of the patient into the study. Each time to an event or censoring defines an isolated risk interval. Results: The choice and usage of any of the models leads to different conclusions, and the choice depends on the risk intervals, baseline hazard, risk set, and correlation adjustment, or, more simply, on the type of data and the research question. The PWP-GT model could be used if the research question focuses on whether treatment was effective for each subsequent event, measured from the time of the previous event. However, if the research question asks whether treatment was effective for each subsequent event, measured from the start of treatment, then the PWP-TT model could be used. The AG model is adequate if a common baseline hazard can be assumed, but it lacks the detail and versatility of the event-specific models. The WLW model is well suited to data with distinct event types for the same person, since it allows a potentially different baseline hazard for each type. Conclusion: The PWP-GT model proved to be the most useful model for analyzing these recurrent event data.
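As a rough illustration of the four approaches compared above, the sketch below fits them with the survival package, using the bladder tumor recurrence data that ships with that package in counting-process (bladder2) and marginal (bladder) form. This is a standard textbook setup, not necessarily the exact data subset or covariate specification used in the study.

```r
library(survival)

# Andersen-Gill: common baseline hazard, counting-process time scale,
# robust variance via cluster(id)
ag <- coxph(Surv(start, stop, event) ~ rx + number + size + cluster(id),
            data = bladder2)

# PWP total time: baseline hazard stratified by event order (enum),
# time measured from study entry
pwp_tt <- coxph(Surv(start, stop, event) ~ rx + number + size +
                  cluster(id) + strata(enum), data = bladder2)

# PWP gap time: the clock resets after each event
pwp_gt <- coxph(Surv(stop - start, event) ~ rx + number + size +
                  cluster(id) + strata(enum), data = bladder2)

# WLW marginal model: every subject is at risk for each event number
wlw <- coxph(Surv(stop, event) ~ rx + number + size +
               cluster(id) + strata(enum), data = bladder)

# Shared (gamma) frailty model
fr <- coxph(Surv(start, stop, event) ~ rx + number + size + frailty(id),
            data = bladder2)
```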


Neurology, 2021, DOI 10.1212/WNL.0000000000013118
Author(s): Nils Skajaa, Kasper Adelborg, Erzsébet Horváth-Puhó, Kenneth J. Rothman, Victor W. Henderson, ...

Background and Objectives: To examine the risks of stroke recurrence and mortality after first and recurrent stroke. Methods: Using Danish nationwide health registries, we included patients (age ≥18 years) with first-time ischemic stroke (N = 105,397) or intracerebral hemorrhage (N = 13,350) during 2004–2018. Accounting for the competing risk of death, absolute risks of stroke recurrence were computed separately for each stroke subtype and within strata of age group, sex, stroke severity, body mass index, smoking, alcohol use, the Essen stroke risk score, and atrial fibrillation. Mortality risks were computed after first and recurrent stroke. Results: After adjusting for competing risks, the overall 1-year and 10-year risks of recurrence were 4% and 13% following first-time ischemic stroke and 3% and 12% following first-time intracerebral hemorrhage. For ischemic stroke, the risk of recurrence increased with age, was higher for men, and was higher following mild stroke than following more severe stroke. The most marked differences were across Essen risk scores, with recurrence risks increasing as scores increased. For intracerebral hemorrhage, risks were similar for both sexes and did not increase with the Essen risk score. For ischemic stroke, the 1-year and 10-year risks of all-cause mortality were 17% and 56% after a first-time stroke and 25% and 70% after a recurrent stroke; corresponding estimates for intracerebral hemorrhage were 37% and 70% after a first-time event and 31% and 75% after a recurrent event. Conclusion: The risk of stroke recurrence was substantial following both subtypes, but risks differed markedly among patient subgroups. The risk of mortality was higher after a recurrent stroke than after a first-time stroke.
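A minimal sketch of how absolute recurrence risks can be estimated while treating death as a competing event, using Aalen-Johansen (multi-state) estimates from the survival package. The data frame dat and its columns (time, state, subtype) are hypothetical, and this is not the registry analysis code used by the authors.

```r
library(survival)

# Hypothetical per-patient data 'dat':
#   time    follow-up in years
#   state   factor with censoring as the first level:
#           c("censor", "recurrence", "death")
#   subtype "ischemic" or "ICH"

# Aalen-Johansen (multi-state) estimates: the cumulative incidence of
# recurrence with death treated as a competing event
cif <- survfit(Surv(time, state) ~ subtype, data = dat)
summary(cif, times = c(1, 10))  # absolute risks at 1 and 10 years
```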


Author(s): Julie K. Furberg, Per K. Andersen, Sofie Korn, Morten Overgaard, Henrik Ravn

Blood, 2021, Vol 138 (Supplement 1), pp. 3116-3116
Author(s): Hung Lam, Joseph M. Becerra, Charles W. Stark, Yutaka Niihara, Michael Callaghan

Abstract Introduction: The multicenter trial of L-glutamine in sickle cell disease enrolled a total of 230 patients, randomized 2:1 to receive L-glutamine (152 patients) or placebo (78 patients). Following 48 weeks of therapy, patients in the L-glutamine group had significantly fewer pain crises and fewer hospitalizations than those in the placebo group. Two thirds of the patients in both trial groups received concomitant hydroxyurea. The Multicenter Study of Hydroxyurea showed that the treatment group differed from the placebo group in the number of units of blood transfused and in the number of patients receiving transfusions [1]. Since an evaluation of transfusions was not pre-specified in the L-glutamine study, post-hoc analyses were performed on the number of units of red blood cells (RBCs) transfused and on the number of transfusions that took place during the study. Methods: For the number of units of RBCs transfused through Week 48, analyses of the transfusion dataset were performed using a Poisson regression model with both a robust error variance method and a bootstrap method, with baseline reticulocyte count as a covariate. The bias-corrected and accelerated bootstrap method was based on 10,000 draws with replacement from the original data and was used to compute the 95% confidence interval (CI) for the relative risk of the number of units of RBCs transfused and the p-value. For the number of transfusion episodes, a recurrent-event time analysis using the Lin-Wei-Yang-Ying (LWYY) method was employed to model the mean cumulative number of RBC transfusion episodes over the 48-week treatment period, with baseline reticulocyte count as a covariate [2]. Results: Significantly fewer units of RBCs were transfused in the L-glutamine treatment arm than in the placebo arm: 2.86 units per patient-year in the L-glutamine group vs. 5.38 units per patient-year in the placebo group (Table 1). There was also a trend toward a lower mean cumulative number of RBC transfusion episodes in the L-glutamine arm than in the placebo arm during the 48-week treatment period: 1.702 RBC transfusion episodes per patient-year in the L-glutamine arm compared with 2.659 in the placebo arm (Table 2, Figure 1). Conclusion: The post-hoc analyses of the L-glutamine phase 3 clinical study in SCD indicated that, among patients requiring RBC transfusions, those assigned to L-glutamine required approximately 43% fewer units of RBCs than those assigned to placebo over the 48-week period. The recurrent event-time analysis showed a favorable trend toward fewer RBC transfusion episodes for those receiving L-glutamine compared with placebo. These observations are notable given that 66% of participants in both arms of this study were on hydroxyurea therapy. References: 1) Niihara Y, Miller ST, Kanter J, et al. A phase 3 trial of L-glutamine in sickle cell disease. N Engl J Med. 2018;379:226-35. 2) Lin DY, Wei LJ, Yang I, Ying Z. Semiparametric regression for the mean and rate functions of recurrent events. J R Stat Soc Series B. 2000;62(4):711-30. Figure 1. Disclosures: Becerra: Emmaus Medical, Inc: Current Employment. Stark: Emmaus Medical, Inc: Current Employment. Niihara: Emmaus Lifesciences, Inc.: Current Employment.
Callaghan: Agios Pharmaceuticals: Current Employment; BioMarin: Consultancy; Chiesi: Consultancy; Forma: Consultancy; Global Blood Therapeutics: Consultancy, Speakers Bureau; Hema Biologics: Consultancy; Kedrion: Consultancy; Pfizer: Consultancy; Roche/Genentech: Consultancy, Speakers Bureau; Sanofi: Consultancy; Spark: Consultancy; Takeda: Consultancy, Speakers Bureau; uniQure: Consultancy.
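The two analyses described in the Methods can be sketched in R roughly as follows: a Poisson regression with a robust (sandwich) error variance for RBC units, and an LWYY proportional-rates model for transfusion episodes, which can be fit as an Andersen-Gill model with a robust variance. The data frames pt and cp and all column names are hypothetical placeholders, not the trial's actual dataset or analysis code.

```r
library(survival)
library(sandwich)
library(lmtest)

# Hypothetical per-patient data 'pt' for the units analysis:
#   units    RBC units transfused through Week 48
#   arm      L-glutamine vs. placebo
#   retic    baseline reticulocyte count
#   fu_years follow-up time in patient-years
pois <- glm(units ~ arm + retic + offset(log(fu_years)),
            family = poisson, data = pt)
coeftest(pois, vcov = vcovHC(pois, type = "HC0"))  # robust error variance

# LWYY proportional-rates model for transfusion episodes: an Andersen-Gill
# formulation with a robust (sandwich) variance obtained via cluster(id).
# 'cp' is hypothetical counting-process data, one row per at-risk interval.
lwyy <- coxph(Surv(tstart, tstop, episode) ~ arm + retic + cluster(id),
              data = cp)
summary(lwyy)
```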


2021
Author(s): Shunsuke Oyamada, Shih-Wei Chiu, Takuhiro Yamaguchi

Abstract Background: There are currently no methodological studies on the performance of statistical models for estimating intervention effects based on time-to-recurrent-event (TTRE) outcomes in a stepped wedge cluster randomised trial (SWCRT) using an open cohort design. This study aims to address this gap by evaluating the performance of these statistical models under an open cohort design with Monte Carlo simulation in various settings, and by applying them to an actual example. Methods: Using Monte Carlo simulations, we evaluated the performance of existing extended Cox proportional hazards models, i.e., the Andersen-Gill (AG), Prentice-Williams-Peterson Total-Time (PWP-TT), and Prentice-Williams-Peterson Gap-Time (PWP-GT) models, under several event-generation models and true intervention effects, with and without stratification by clusters. Unidirectional switching in the SWCRT was represented using time-dependent covariates. Results: Across the described settings, the PWP-GT model with stratification by clusters showed the best performance in most settings and reasonable performance in the others. The only situation in which the performance of the PWP-TT model with stratification by clusters was not inferior to that of the PWP-GT model with stratification by clusters was when there was a certain amount of follow-up period and the timing of trial entry was random within the trial period, including the follow-up period. The AG model performed well only in a specific setting. In the analysis of an actual example, almost all the statistical models suggested that the risk of events under the intervention condition may be somewhat higher than under the control, although the difference was not statistically significant. Conclusions: The PWP-GT model with stratification by clusters had the most reasonable performance for estimating intervention effects based on TTRE in an SWCRT with an open cohort design across various settings.
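A hedged sketch of the best-performing specification described above: a PWP gap-time model stratified by cluster (and event order) with a time-dependent intervention indicator for the unidirectional switch, fit with the survival package. The data frame dat and its columns are hypothetical, and the exact stratification and time-splitting used in the paper may differ.

```r
library(survival)

# Hypothetical gap-time counting-process data 'dat' for an open-cohort SWCRT:
#   id            subject identifier
#   clus          cluster identifier
#   enum          event order within subject
#   gstart, gstop interval endpoints on the gap-time scale, split at the
#                 cluster's crossover so 'intervention' can change over time
#   status        1 if the interval ends in an event
#   intervention  0 before the cluster switches, 1 afterwards

# PWP-GT with stratification by cluster (and event order), robust variance:
pwp_gt <- coxph(Surv(gstart, gstop, status) ~ intervention +
                  strata(clus, enum) + cluster(id),
                data = dat)
summary(pwp_gt)
```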

