Time-to-Event Data
Recently Published Documents


TOTAL DOCUMENTS

446
(FIVE YEARS 148)

H-INDEX

26
(FIVE YEARS 4)

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Peilin Meng ◽  
Jing Ye ◽  
Xiaomeng Chu ◽  
Bolun Cheng ◽  
Shiqiang Cheng ◽  
...  

Abstract It is well-accepted that both environmental and genetic factors contribute to the development of mental disorders (MD). However, few genetic studies have used time-to-event analysis to identify susceptibility variants associated with MD or to explore the role of environmental factors in these associations. To detect novel genetic loci associated with MD from time-to-event data and to identify the role of environmental factors, this study recruited 376,806 participants from the UK Biobank cohort. The MD outcomes (overall MD status, anxiety, depression and substance use disorders (SUD)) were defined from in-patient hospital, self-reported and death registry data collected in the UK Biobank. The SPACOX approach was used to identify susceptibility loci for MD using the time-to-event data of the cohort. We then estimated the associations between the identified candidate loci, fourteen environmental factors and MD through a phenome-wide association study and mediation analysis. SPACOX identified multiple candidate loci for overall MD status, depression and SUD, such as rs139813674 (P value = 8.39 × 10⁻⁹, ZNF684) for overall MD status, rs7231178 (DCC, P value = 2.11 × 10⁻⁹) for depression, and rs10228494 (FOXP2, P value = 6.58 × 10⁻¹⁰) for SUD. Multiple environmental factors, such as "confide in others" and "felt hated", could influence the associations between the identified loci and MD. Our study identified novel candidate loci for MD, highlighting the strength of genetic association studies based on time-to-event data, and observed that multiple environmental factors can influence the association between susceptibility loci and MD.
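The abstract does not detail the mediation step. As a hedged illustration (not the study's SPACOX pipeline), the classic product-of-coefficients approach can be sketched on simulated data; the variable roles (genotype → environmental factor → outcome) and all effect sizes below are hypothetical:

```python
import random

random.seed(1)

# hypothetical mediation chain: genotype X -> environmental factor M -> outcome Y
# true paths: a = 0.5 (X -> M), b = 0.7 (M -> Y), direct effect 0.2 (X -> Y)
data = []
for _ in range(20000):
    x = random.choice([0, 1, 2])                 # SNP dosage
    m = 0.5 * x + random.gauss(0, 1)             # mediator
    y = 0.2 * x + 0.7 * m + random.gauss(0, 1)   # outcome
    data.append((x, m, y))
xs, ms, ys = zip(*data)

def slope(u, v):
    # OLS slope of v on u
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

def slope_adjusted(u, w, v):
    # OLS coefficient of v on w, adjusting for u (2x2 normal equations)
    n = len(u)
    mu, mw, mv = sum(u) / n, sum(w) / n, sum(v) / n
    cu = [a - mu for a in u]; cw = [a - mw for a in w]; cv = [a - mv for a in v]
    suu = sum(a * a for a in cu); sww = sum(a * a for a in cw)
    suw = sum(a * b for a, b in zip(cu, cw))
    suv = sum(a * b for a, b in zip(cu, cv)); swv = sum(a * b for a, b in zip(cw, cv))
    return (suu * swv - suw * suv) / (suu * sww - suw * suw)

a = slope(xs, ms)               # genotype -> environmental factor
b = slope_adjusted(xs, ms, ys)  # environmental factor -> outcome, given genotype
indirect = a * b                # mediated (indirect) effect
```

The indirect effect a × b is the share of the genotype-outcome association carried through the environmental factor; the study's actual analysis additionally handles time-to-event outcomes and covariate adjustment.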


2022 ◽  
Author(s):  
Benjamin Hartley ◽  
Thomas Drury ◽  
Sally Lettis ◽  
Bhabita Mayer ◽  
Oliver N. Keene ◽  
...  

2021 ◽  
pp. 096228022110239
Author(s):  
Shaun R Seaman ◽  
Anne Presanis ◽  
Christopher Jackson

Time-to-event data are right-truncated if only individuals who have experienced the event by a certain time can be included in the sample. For example, we may be interested in estimating the distribution of time from onset of disease symptoms to death and only have data on individuals who have died. This may be the case, for example, at the beginning of an epidemic. Right truncation causes the distribution of times to event in the sample to be biased towards shorter times compared to the population distribution, and appropriate statistical methods should be used to account for this bias. This article is a review of such methods, particularly in the context of an infectious disease epidemic, like COVID-19. We consider methods for estimating the marginal time-to-event distribution, and compare their efficiencies. (Non-)identifiability of the distribution is an important issue with right-truncated data, particularly at the beginning of an epidemic, and this is discussed in detail. We also review methods for estimating the effects of covariates on the time to event. An illustration of the application of many of these methods is provided, using data on individuals who had died with coronavirus disease by 5 April 2020.
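To make the truncation bias concrete, here is a minimal sketch assuming an exponential onset-to-death distribution, with illustrative values for the rate, cutoff, and sample size (none taken from the paper). The naive mean of the truncated sample is biased short, while maximizing the conditional likelihood f(t)/F(bound) corrects for the truncation:

```python
import math, random

random.seed(2)
TRUE_RATE, CUTOFF = 0.1, 60.0   # illustrative: mean onset-to-death delay of 10 days

# keep only individuals whose death occurred by the cutoff (right truncation)
sample = []
for _ in range(8000):
    onset = random.uniform(0.0, CUTOFF)
    delay = random.expovariate(TRUE_RATE)
    if onset + delay <= CUTOFF:
        sample.append((delay, CUTOFF - onset))   # (observed delay, truncation bound)

# the naive sample mean is biased towards short delays
naive_mean = sum(t for t, _ in sample) / len(sample)

def loglik(rate):
    # conditional likelihood of each observation: f(t; rate) / F(bound; rate)
    return sum(math.log(rate) - rate * t - math.log1p(-math.exp(-rate * r))
               for t, r in sample)

# crude grid-search MLE for the rate, accounting for right truncation
grid = [0.02 + 0.002 * i for i in range(241)]
mle_rate = max(grid, key=loglik)
```

With these settings the naive mean should sit well below the true mean of 10 days, while 1/mle_rate should be close to it; the methods reviewed in the article also handle covariates and non-exponential distributions.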


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Jaclyn M. Beca ◽  
Kelvin K. W. Chan ◽  
David M. J. Naimark ◽  
Petros Pechlivanoglou

Abstract Introduction Extrapolation of time-to-event data from clinical trials is commonly used in decision models for health technology assessment (HTA). The objective of this study was to assess the performance of standard parametric survival analysis techniques for extrapolating time-to-event data for a single event from clinical trials with limited data due to small samples or short follow-up. Methods Simulated populations of 50,000 individuals were generated with an exponential hazard rate for the event of interest. A scenario consisted of 5000 repetitions with six sample-size groups (30–500 patients), each artificially censored after every 10% of events observed. Goodness-of-fit statistics (AIC, BIC) were used to select the best-fitting among standard parametric distributions (exponential, Weibull, log-normal, log-logistic, generalized gamma, Gompertz). Median survival, one-year survival probability, time horizon (1% survival time, or the 99th percentile of the survival distribution) and restricted mean survival time (RMST) were compared to population values to assess coverage and error (e.g., mean absolute percentage error). Results The true exponential distribution was correctly identified more frequently by BIC than by AIC (on average 92% vs 68%). Under-coverage and large errors were observed for all outcomes when distributions were selected by AIC, and for time horizon and RMST when selected by BIC. Errors in point estimates were strongly associated with sample size and completeness of follow-up: small samples produced larger average error, even with complete follow-up, than large samples with short follow-up. Correctly specifying the event distribution reduced the magnitude of error in larger samples but not in smaller samples. Conclusions Limited clinical data, from small samples or short follow-up of large samples, produce large errors in estimates relevant to HTA regardless of whether the correct distribution is specified. The associated uncertainty in estimated parameters may not capture the true population values, and decision models that base the lifetime time horizon on the model's extrapolated output are unlikely to reliably estimate mean survival or its uncertainty. For data with an exponential event distribution, BIC identified the true distribution more reliably than AIC. These findings have important implications for health decision modelling and HTA of novel therapies seeking approval with limited evidence.
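The distribution-selection step can be sketched as follows, comparing AIC and BIC for two candidate families fitted by maximum likelihood to simulated exponential data. This is a simplified illustration that ignores censoring, uses an ample sample (the paper's point is precisely that selection becomes unreliable with small, censored samples), and shows only two of the paper's six candidate distributions:

```python
import math, random

random.seed(3)
# simulated complete (uncensored) event times from the true exponential model;
# the sample is deliberately large so that selection succeeds
times = [random.expovariate(0.2) for _ in range(2000)]
n = len(times)

def exp_fit(ts):
    rate = len(ts) / sum(ts)                       # exponential MLE
    return sum(math.log(rate) - rate * t for t in ts), 1

def lognorm_fit(ts):
    logs = [math.log(t) for t in ts]
    mu = sum(logs) / len(logs)                     # log-normal MLE
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    ll = sum(-math.log(t) - 0.5 * math.log(2 * math.pi * var)
             - (math.log(t) - mu) ** 2 / (2 * var) for t in ts)
    return ll, 2

scores = {}
for name, fit in [("exponential", exp_fit), ("log-normal", lognorm_fit)]:
    ll, k = fit(times)          # k = number of fitted parameters
    scores[name] = {"AIC": 2 * k - 2 * ll, "BIC": k * math.log(n) - 2 * ll}

best_aic = min(scores, key=lambda m: scores[m]["AIC"])
best_bic = min(scores, key=lambda m: scores[m]["BIC"])
```

BIC's penalty per extra parameter, log n, exceeds AIC's penalty of 2 whenever n > e² ≈ 7.4, which is why BIC favours the simpler (here, true) exponential model more strongly.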


Circulation ◽  
2021 ◽  
Vol 144 (Suppl_2) ◽  
Author(s):  
Eirik Unneland ◽  
Anders Norvik ◽  
Shaun McGovern ◽  
David Buckler ◽  
Unai Irusta ◽  
...  

Background: Pulseless Electrical Activity (PEA) is common during in-hospital cardiac arrest. We investigated the development of four types of PEA: PEA as the presenting clinical state (primary PEA) and PEA secondary to transient return of spontaneous circulation (ROSC), ventricular fibrillation/tachycardia (VF/VT), or asystole (ASY). Methods: We analyzed 660 episodes of cardiac arrest at one Norwegian and three U.S. hospitals. ECG, chest compressions and ventilations were recorded by defibrillators during CPR, and clinical states were annotated using a graphical application. Using Aalen's additive model for time-to-event data, we quantified the transition intensities from PEA to ROSC (i.e. the immediate probability of a transition) and the observed half-lives of the four types of PEA (i.e. how quickly PEA develops into another clinical state). Results: The transition intensities to ROSC from primary PEA (n=386) and from secondary PEA after ASY (n=226) were about 0.08 per minute, peaking at 6 and 9 min, respectively (figure, left). Thus, an average patient in these types of PEA has about an 8% chance of achieving ROSC within one minute. Much higher transition intensities to ROSC, of about 0.20 per minute, were observed for secondary PEA after transient ROSC (n=209) or VF/VT (n=225), peaking at 10 and 5 min, respectively. Half-lives for the four types of PEA (figure, right) were 8.5 min, 6.8 min, 4.6 min and 1.6 min for primary PEA and for secondary PEA after ASY, transient ROSC and VF/VT, respectively. Discussion: The clinical development of PEA in terms of transition intensity, peak intensity and half-life during resuscitation differs substantially between the four types of PEA. The chance of obtaining ROSC is considerably lower in primary PEA or PEA after ASY than in PEA following transient ROSC or VF/VT. This may increase understanding of the nature of PEA and the process leading to ROSC, and allow simple prognostic assessments during a resuscitation attempt.
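The study used Aalen's additive model; as a simpler hedged illustration of estimating a transition intensity, the Nelson-Aalen estimator of the cumulative cause-specific hazard can be computed on hypothetical PEA episode data (all rates and sample sizes below are invented, not from the study):

```python
import random

random.seed(4)

# hypothetical PEA episodes: time (minutes) to ROSC, or censoring when the
# episode leaves PEA for another clinical state; all rates are invented
episodes = []
for _ in range(300):
    t_rosc = random.expovariate(0.08)    # ~8% per-minute transition to ROSC
    t_other = random.expovariate(0.05)   # transition to another state
    if t_rosc <= t_other:
        episodes.append((t_rosc, 1))     # ROSC observed
    else:
        episodes.append((t_other, 0))    # censored for the ROSC transition

def nelson_aalen(data):
    # cumulative cause-specific hazard: sum of 1 / n_at_risk over event times
    data = sorted(data)
    n_at_risk, cumhaz, curve = len(data), 0.0, []
    for t, observed in data:
        if observed:
            cumhaz += 1.0 / n_at_risk
            curve.append((t, cumhaz))
        n_at_risk -= 1
    return curve

curve = nelson_aalen(episodes)
```

The local slope of the cumulative-hazard curve approximates the per-minute transition intensity to ROSC; transitions to other clinical states are treated as censoring for this particular transition, whereas Aalen's additive model additionally lets the intensity depend on covariates.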


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Theodore C. Hirst ◽  
Emily S. Sena ◽  
Malcolm R. Macleod

Abstract Background Time-to-event data are frequently reported in both clinical and preclinical research. Systematic review and meta-analysis can help to identify pitfalls in preclinical research conduct and reporting and thereby improve translational efficacy. However, pooling studies using hazard ratios (HRs) is cumbersome, especially in preclinical meta-analyses that include large numbers of small studies. Median survival is a much simpler metric, although limitations that may not apply to preclinical data mean it is generally not used in survival meta-analysis. We aimed to appraise its performance relative to hazard ratio-based meta-analysis when pooling large numbers of small, imprecise studies. Methods We simulated a survival dataset with features representative of a typical preclinical survival meta-analysis, including the influence of a treatment and a number of covariates. We calculated individual patient data-based hazard ratios and median survival ratios (MSRs), comparing the summary statistics directly and their performance at random-effects meta-analysis. Finally, we compared their sensitivity to detect associations between treatment and influential covariates at meta-regression. Results There was an imperfect correlation between MSR and HR, although opposing directions of treatment effect between the two summary statistics appeared not to be a major issue. Precision was more conservative for HR than for MSR, so estimates of heterogeneity were lower. MSR had a slight sensitivity advantage at meta-analysis and meta-regression, although power was low in all circumstances. Conclusions We believe we have validated MSR as a summary statistic for meta-analysis of small, imprecise experimental survival studies, helping to increase confidence and efficiency in future reviews in this area. While assessment of study precision, and therefore weighting, is less reliable, MSR appears to perform favourably during meta-analysis. Sensitivity of meta-regression was low for this set of parameters, so pooling of treatments to increase sample size may be required to ensure confidence in preclinical survival meta-regressions.
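Random-effects pooling of log median survival ratios can be sketched with the standard DerSimonian-Laird estimator; the per-study summaries below are invented for illustration, not taken from the paper:

```python
import math

# invented per-study summaries: (median survival treated, control, SE of log ratio)
studies = [
    (16.0, 10.0, 0.15),
    (10.0, 11.0, 0.15),
    (18.0, 12.0, 0.15),
    (12.0, 8.0, 0.20),
]

ys = [math.log(mt / mc) for mt, mc, _ in studies]   # log median survival ratios
vs = [se ** 2 for _, _, se in studies]

# DerSimonian-Laird random-effects pooling of the log MSRs
w = [1.0 / v for v in vs]
y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))   # Cochran's Q
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(ys) - 1)) / c)            # between-study variance
w_re = [1.0 / (v + tau2) for v in vs]
pooled_log_msr = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
pooled_msr = math.exp(pooled_log_msr)               # pooled median survival ratio
```

The paper's observation that MSR-based weighting is less reliable enters through the SE of the log ratio, which is harder to estimate well from small studies than the SE of a log hazard ratio.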


2021 ◽  
pp. 207-228
Author(s):  
Gail Williams ◽  
Robert S. Ware

This chapter provides an introduction to statistical methods with illustrative examples from public health and epidemiological research. The chapter begins by distinguishing between a study sample and a target population. It goes on to outline different methods of sampling, including probability and non-probability sampling methods. In the following section, the distributions of epidemiological variables are considered, leading on to discussion of probability distributions and statistical inference. Methods for comparing data from two or more groups are then outlined, including methods for continuous and categorical variables. Analysis of time-to-event data to evaluate survival times is then outlined. The final section of the chapter discusses the application of multivariable models to epidemiological data, including extensions of basic models to more complex data distributions. The chapter concludes by cautioning that increasing ease of access to sophisticated statistical methods may increase the risk of erroneous application. There is little substitute for consulting a qualified statistician, particularly with complex designs.


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259121
Author(s):  
Matthieu Faron ◽  
Pierre Blanchard ◽  
Laureen Ribassin-Majed ◽  
Jean-Pierre Pignon ◽  
Stefan Michiels ◽  
...  

Introduction Individual patient data (IPD) offer particular advantages in network meta-analysis (NMA), because interactions may lead an aggregated data (AD)-based model to a wrong treatment effect (TE) estimate. However, less work has been done for IPD with time-to-event outcomes than with binary outcomes. We aimed to develop a general frequentist one-step model for evaluating the TE in the presence of interaction in a three-node NMA of time-to-event data. Methods One-step, frequentist, IPD-based Cox and Poisson generalized linear mixed models were proposed. We simulated a three-node network, with or without a closed loop, under (1) no interaction, (2) a covariate-treatment interaction, and (3) covariate distribution heterogeneity together with a covariate-treatment interaction. These models were applied to an NMA (Meta-analyses of Chemotherapy in Head and Neck Cancer [MACH-NC] and Radiotherapy in Carcinomas of Head and Neck [MARCH]) that compared the addition of chemotherapy or modified radiotherapy (mRT) to loco-regional treatment, with two direct comparisons. AD-based (contrast-based and meta-regression) models were used as reference. Results In the simulation study, no IPD model failed to converge. IPD-based models performed well, with small bias, in all scenarios and configurations, with little variation across scenarios. In contrast, AD-based models performed well when there was no interaction, but showed some bias when an interaction existed and a larger bias when the effect modifier was unevenly distributed. While meta-regression performed better than the contrast-based model alone, it showed large variability in the estimated TE. In the real-data example, the Cox and Poisson IPD-based models gave similar estimates of the model parameters, and the interaction decomposition permitted by IPD explained the ecological bias observed in the meta-regression. Conclusion The proposed general one-step frequentist Cox and Poisson models showed small bias in the evaluation of a three-node network with interactions. They performed as well as or better than AD-based models and should be used whenever possible.
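The core of the Poisson approach, estimating a covariate-treatment interaction from IPD, can be sketched for a single pairwise comparison. The paper's full model additionally carries the network structure and random effects; all coefficients and sample sizes below are illustrative:

```python
import math, random

random.seed(6)

def rpois(lam):
    # Knuth's algorithm for Poisson sampling
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# simulate event counts with a covariate-treatment interaction;
# coefficients (intercept, treatment, covariate, interaction) are illustrative
true_beta = [0.2, -0.5, 0.3, 0.4]
rows, ys = [], []
for _ in range(4000):
    z = random.randint(0, 1)        # treatment arm
    x = random.gauss(0.0, 1.0)      # effect-modifying covariate
    row = [1.0, z, x, z * x]
    rows.append(row)
    ys.append(rpois(math.exp(sum(b * v for b, v in zip(true_beta, row)))))

def solve(a, b):
    # Gauss-Jordan elimination with partial pivoting for the Newton step
    n = len(b)
    m = [a[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [mr - f * mc for mr, mc in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Newton-Raphson for the Poisson log-likelihood: beta += (X'WX)^-1 X'(y - mu)
beta = [0.0] * 4
for _ in range(12):
    mus = [math.exp(sum(b * v for b, v in zip(beta, row))) for row in rows]
    grad = [sum((y - mu) * row[j] for y, mu, row in zip(ys, mus, rows))
            for j in range(4)]
    hess = [[sum(mu * row[j] * row[k] for mu, row in zip(mus, rows))
             for k in range(4)] for j in range(4)]
    beta = [b + s for b, s in zip(beta, solve(hess, grad))]
```

Because IPD retain the covariate value of each patient, the interaction coefficient is estimated within studies, which is what allows the decomposition that explains ecological bias in AD meta-regression.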

