How to analyse seed germination data using statistical time-to-event analysis: non-parametric and semi-parametric methods

2012 ◽  
Vol 22 (2) ◽  
pp. 77-95 ◽  
Author(s):  
James N. McNair ◽  
Anusha Sunkara ◽  
Daniel Frobish

Abstract Seed germination experiments are conducted in a wide variety of biological disciplines. Numerous methods of analysing the resulting data have been proposed, most of which fall into three classes: intuition-based germination indexes, classical non-linear regression analysis and time-to-event analysis (also known as survival analysis, failure-time analysis and reliability analysis). This paper briefly reviews all three of these classes, and argues that time-to-event analysis has important advantages over the other methods but has been underutilized to date. It also reviews in detail the types of time-to-event analysis that are most useful in analysing seed germination data with standard statistical software. These include non-parametric methods (life-table and Kaplan–Meier estimators, and various methods for comparing two or more groups of seeds) and semi-parametric methods (Cox proportional hazards model, which permits inclusion of categorical and quantitative covariates, and fixed and random effects). Each method is illustrated by applying it to a set of real germination data. Sample code for conducting these analyses with two standard statistical programs is also provided in the supplementary material available online (at http://journals.cambridge.org/). The methods of time-to-event analysis reviewed here can be applied to many other types of biological data, such as seedling emergence times, flowering times, development times for eggs or embryos, and organism lifetimes.
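The non-parametric estimators the paper reviews are easy to state in code. Below is a minimal, illustrative Kaplan–Meier estimator in Python (the paper's supplementary code targets standard statistical packages; this sketch and its toy data are not from the paper). Seeds that have not germinated when the trial ends enter as right-censored observations rather than being discarded, which is exactly what germination indexes fail to do:

```python
# Minimal Kaplan-Meier estimator for germination times.
# Ungerminated seeds at the end of the trial enter as right-censored
# observations (event=False) instead of being dropped from the analysis.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct germination time t.

    times  : observed time for each seed (germination or end of trial)
    events : True if the seed germinated, False if right-censored
    """
    data = sorted(zip(times, events))
    n = len(data)
    s = 1.0                       # running survival (non-germination) estimate
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, e in data[i:] if tt == t and e)       # germinations at t
        c = sum(1 for tt, e in data[i:] if tt == t and not e)   # censored at t
        at_risk = n - i           # seeds still dormant just before t
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        i += d + c
    return curve

# 8 seeds: six germinate; two are still dormant when the trial ends on day 10
times  = [2, 3, 3, 5, 7, 9, 10, 10]
events = [True, True, True, True, True, True, False, False]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

The curve plateaus at 0.25 rather than reaching zero, reflecting the 25% of seeds that never germinated within the trial, information a cumulative-germination regression fit without censoring would distort.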

Plants ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 617
Author(s):  
Alessandro Romano ◽  
Piergiorgio Stevanato

Germination data are analyzed by several methods, which can be classified mainly as germination indexes and traditional regression techniques that fit non-linear parametric functions to the temporal sequence of cumulative germination. However, owing to the nature of germination data, which often differ from other biological data, these methods can have limitations, especially when ungerminated seeds remain at the end of an experiment. A class of methods that can address these issues is the so-called “time-to-event analysis”, better known in other scientific fields as “survival analysis” or “reliability analysis”. There is relatively little literature on the application of these methods to germination data, and existing reviews have dealt with only part of the possible approaches, covering either the non-parametric and semi-parametric methods or the parametric ones. The present study aims to contribute to knowledge about the reliability of these methods by applying all the main approaches to the same germination data, obtained from cohorts of sugar beet (Beta vulgaris L.) seeds. The results confirmed that, although the different approaches have advantages and disadvantages, they generally represent a valuable tool for analyzing germination data, providing parameters whose usefulness depends on the purpose of the research.


2020 ◽  
pp. 096228022095359
Author(s):  
Oliver Kuss ◽  
Annika Hoyer

Regression models for continuous, binary, nominal, and ordinal outcomes almost completely rely on parametric models, whereas time-to-event outcomes are mainly analyzed by Cox’s Proportional Hazards model, an essentially non-parametric method. This is done despite a long list of disadvantages that have been reported for the hazard ratio, and also for the odds ratio, another effect measure sometimes used for time-to-event modelling. In this paper, we propose a parametric proportional risk model for time-to-event outcomes in a two-group situation. Explicitly modelling a risk, instead of a hazard or an odds, solves the current interpretational and technical problems of the latter two effect measures. The model further allows for computing absolute effect measures such as risk differences or numbers needed to treat. As an additional benefit, results from the model can also be communicated on the original time scale, as an accelerated or a prolonged failure time, thus facilitating interpretation for a non-technical audience. Parameter estimation by maximum likelihood, while properly accounting for censoring, is straightforward and can be implemented in any statistical package that allows coding and maximizing a univariate likelihood function. We illustrate the model with an example from a randomized controlled trial on the efficacy of a new glucose-lowering drug for the treatment of type 2 diabetes mellitus and give the results of a small simulation study.
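The claim that such a censored likelihood is easy to code and maximize can be demonstrated with a deliberately simple case. The sketch below fits an exponential model (an illustrative choice, not the authors' proportional risk model) by maximizing a univariate log-likelihood in which events contribute the density f(t) = λe^(−λt) and censored observations contribute the survivor function S(t) = e^(−λt); all numbers are made up:

```python
import math

# Maximum likelihood with right censoring, exponential model:
# events contribute log f(t) = log(lam) - lam*t,
# censored observations contribute log S(t) = -lam*t.

def loglik(lam, times, events):
    ll = 0.0
    for t, e in zip(times, events):
        ll += (math.log(lam) if e else 0.0) - lam * t
    return ll

def fit_rate(times, events, lo=1e-6, hi=10.0):
    # ternary search on the strictly concave exponential log-likelihood
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if loglik(m1, times, events) < loglik(m2, times, events):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

times  = [1.0, 2.0, 2.5, 4.0, 6.0, 6.0]
events = [True, True, True, True, False, False]   # last two right-censored
lam_hat = fit_rate(times, events)
print(round(lam_hat, 4))   # matches the closed-form MLE d / total time = 4 / 21.5
```

The same pattern, swap in the density and survivor function of the chosen parametric family, extends directly to richer models; the closed-form check is available here only because the exponential is so simple.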


2005 ◽  
Vol 84 (1) ◽  
pp. 54-58 ◽  
Author(s):  
S.K. Chuang ◽  
T. Cai ◽  
C.W. Douglass ◽  
L.J. Wei ◽  
T.B. Dodson

Because dental implant failure patterns tend to cluster within subjects, we hypothesized that the risk of implant failure varies among subjects. To address this hypothesis in the setting of clustered, correlated observations, we conducted a retrospective cohort study of patients with at least one implant placed. The cohort comprised 677 patients who had 2349 implants placed. To test the hypothesis, we applied an innovative analytic method, the Cox proportional hazards model with frailty, to account for correlation within subjects and for heterogeneity of risk (i.e., frailty) among subjects for implant failure. Consistent with our hypothesis, the risk of implant failure varied among subjects to a statistically significant degree (p = 0.041). In addition, the risk of implant failure was significantly associated with several factors, including tobacco use, implant length, immediate implant placement, staging, well size, and proximity of adjacent implants or teeth.
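The frailty idea can be illustrated with a small simulation (purely illustrative; the parameters and data are not from the study). Each patient draws a gamma-distributed frailty with mean 1 that multiplies the failure risk of every implant in that patient, so failures cluster within patients and the between-patient variance of failure counts is inflated relative to a homogeneous-risk model:

```python
import random

# Shared gamma frailty: a per-patient multiplier z on the failure risk of
# all of that patient's implants.  When the frailty variance is 0, every
# patient has the same risk; when it is positive, failures cluster.

random.seed(42)

def simulate(frailty_var, patients=2000, implants=4, base_p=0.10):
    """Per-patient implant-failure counts under a shared gamma frailty."""
    shape = 1.0 / frailty_var if frailty_var > 0 else None
    counts = []
    for _ in range(patients):
        # gamma with mean 1 and variance frailty_var (z = 1 when var = 0)
        z = random.gammavariate(shape, frailty_var) if shape else 1.0
        p = min(1.0, base_p * z)          # per-implant failure probability
        counts.append(sum(random.random() < p for _ in range(implants)))
    return counts

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

homog = simulate(0.0)   # no heterogeneity among patients
heter = simulate(1.0)   # substantial frailty variance
# frailty inflates the variance of per-patient failure counts
print(round(var(homog), 2), round(var(heter), 2))
```

It is exactly this excess between-patient variability that the frailty term in the Cox model captures, and that the p = 0.041 test in the study detects.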


2020 ◽  
Author(s):  
Gabriele Del Castillo ◽  
Ambra Castrofino ◽  
Francesca Grosso ◽  
Giuseppe Marano ◽  
Patrizia Boracchi ◽  
...  

Abstract Objectives. To assess the time span from positive to negative SARS-CoV-2 RNA detection by RT-PCR, and to evaluate the reliability of test-based criteria as the required condition for the reintroduction of asymptomatic SARS-CoV-2-positive patients into the community. Methods. We used information concerning negativization and the respective times. Cumulative probabilities of negativization during follow-up were evaluated through Crude Cumulative Incidences (CCIs); non-parametric estimates of CCIs and their respective 95% confidence intervals were obtained. Results. We report only the results for 52,186 individuals. 33,486 subjects tested negative or potentially negative, with a CCI of 75.2% at 70 days from the first swab (95% CI: 74.8% to 75.7%). 11,000 subjects died before 14/05/2020 without a diagnosis of negative status (CCI 21.9%; 95% CI: 21.5% to 22.3%) at 56 days from the first swab (the maximum observed time to death). Conclusions. SARS-CoV-2 positivity is a condition that frequently lasts more than 30 days. Since isolation based only on positivity status could be excessive, more solid studies are required to determine a single internationally accepted policy regarding the discontinuation of quarantine and isolation.
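The crude cumulative incidence estimator used here treats death as a competing risk for negativization. A minimal, illustrative version (toy data, not the study's 52,186 records) accumulates, at each event time, the overall event-free probability just before that time multiplied by the conditional probability of the event of interest:

```python
# Nonparametric crude cumulative incidence with a competing risk:
# negativization (type 1) competes with death (type 2); subjects still
# positive at last follow-up are censored (type 0).

def cci(times, types, event=1):
    """Aalen-Johansen estimate of P(event of the given type by time t)."""
    data = sorted(zip(times, types))
    n = len(data)
    surv = 1.0      # probability of being event-free just before t
    inc = 0.0       # cumulative incidence of the requested event type
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i
        d_all = sum(1 for tt, k in data[i:] if tt == t and k != 0)
        d_evt = sum(1 for tt, k in data[i:] if tt == t and k == event)
        block = sum(1 for tt, k in data[i:] if tt == t)
        if d_evt:
            inc += surv * d_evt / at_risk   # uses S(t-), before the update
            curve.append((t, inc))
        surv *= 1 - d_all / at_risk         # both event types deplete the risk set
        i += block
    return curve

# days to event; 1 = negative swab, 2 = death, 0 = still positive (censored)
times = [10, 14, 14, 20, 25, 30, 30, 40]
types = [1, 1, 2, 1, 0, 1, 2, 0]
for t, p in cci(times, types):
    print(t, round(p, 3))
```

Unlike a naive Kaplan–Meier that censors deaths, this estimator never overstates the probability of negativization, because subjects who die can no longer test negative.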


2019 ◽  
Vol 63 (7) ◽  
Author(s):  
Mohammad H. Al-Shaer ◽  
Wael A. Alghamdi ◽  
Abdullah Alsultan ◽  
Guohua An ◽  
Shahriar Ahmed ◽  
...  

ABSTRACT Fluoroquinolones are group A drugs in tuberculosis guidelines. We aimed to compare culture conversion between new-generation (levofloxacin and moxifloxacin) and old-generation (ciprofloxacin and ofloxacin) fluoroquinolones, develop pharmacokinetic models, and calculate target attainment for levofloxacin and moxifloxacin. We included three U.S. tuberculosis centers. Patients admitted between 1984 and 2015, infected with drug-resistant tuberculosis, and who had received fluoroquinolones for ≥28 days were included. Demographics, sputum cultures and susceptibility, treatment regimens, and serum concentrations were collected. A time-to-event analysis was conducted, and a Cox proportional hazards model was used to compare time to culture conversion. Using additional data from ongoing studies, pharmacokinetic modelling and Monte Carlo simulations were performed to assess target attainment for different doses. Overall, 124 patients received fluoroquinolones. The median age was 40 years, and the median weight was 60 kg. Fifty-six patients (45%) received old-generation fluoroquinolones. New-generation fluoroquinolones showed a faster time to culture conversion (median 16 versus 40 weeks, P = 0.012). After adjusting for isoniazid and clofazimine treatment, patients treated with new-generation fluoroquinolones were more likely to have culture conversion (adjusted hazard ratio, 2.16 [95% confidence interval, 1.28 to 3.64]). We included 178 patients in the pharmacokinetic models. Levofloxacin and moxifloxacin were best described by a one-compartment model with first-order absorption and elimination. At least 1,500 to 1,750 mg of levofloxacin and 800 mg of moxifloxacin may be needed for maximum kill at the current epidemiologic cutoff values. In summary, new-generation fluoroquinolones showed a faster time to culture conversion than the old generation. For optimal target attainment at the current MIC values, higher doses of levofloxacin and moxifloxacin may be needed.
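Monte Carlo target attainment of the kind described can be sketched as follows: draw clearance from a log-normal distribution, compute a steady-state daily AUC as dose/CL, and count the fraction of simulated patients reaching an fAUC/MIC target. Every number below (clearance distribution, protein binding, target, MIC) is an illustrative assumption, not the paper's fitted model:

```python
import math
import random

# Illustrative Monte Carlo target attainment for a once-daily dose.
# Assumed values (not from the paper): mean clearance 7 L/h with 35% CV,
# unbound fraction 0.7, target fAUC/MIC >= 100, MIC in mg/L.

random.seed(1)

def target_attainment(dose_mg, mic, cl_mean=7.0, cl_cv=0.35,
                      fu=0.7, target=100.0, n=20000):
    sigma = math.sqrt(math.log(1 + cl_cv ** 2))
    mu = math.log(cl_mean) - sigma ** 2 / 2      # so E[CL] = cl_mean
    hits = 0
    for _ in range(n):
        cl = math.exp(random.gauss(mu, sigma))   # log-normal clearance, L/h
        auc = dose_mg / cl                        # daily AUC, mg*h/L
        if fu * auc / mic >= target:
            hits += 1
    return hits / n

for dose in (750, 1000, 1500):
    # probability of target attainment rises with dose at a fixed MIC
    print(dose, target_attainment(dose, mic=0.5))
```

Running this across a grid of MICs and comparing against the epidemiologic cutoff value is how dose recommendations like the paper's 1,500 to 1,750 mg figure are typically derived.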


2016 ◽  
Vol 27 (3) ◽  
pp. 955-965 ◽  
Author(s):  
Xiaonan Xue ◽  
Xianhong Xie ◽  
Howard D Strickler

The commonly used statistical model for studying time-to-event data, the Cox proportional hazards model, is limited by the assumption of a constant hazard ratio over time (i.e., proportionality) and by the fact that it models the hazard rate rather than the survival time directly. The censored quantile regression model, defined on the quantiles of time to event, provides an alternative that is more flexible and interpretable. However, the censored quantile regression model has not been widely adopted in clinical research, owing to the complexity involved in properly interpreting its results, the consequent difficulty of appreciating its advantages over the Cox proportional hazards model, and the absence of an adequate validation procedure. In this paper, we addressed these limitations by (1) using both simulated examples and data from National Wilms’ Tumor clinical trials to illustrate proper interpretation of the censored quantile regression model and the differences and advantages of the model compared to the Cox proportional hazards model; and (2) developing a validation procedure for the predictive censored quantile regression model. The performance of this procedure was examined using simulation studies. Overall, we recommend using the censored quantile regression model, which permits a more sensitive analysis of time-to-event data, together with the Cox proportional hazards model.


2016 ◽  
Vol 27 (4) ◽  
pp. 1258-1270 ◽  
Author(s):  
Huirong Zhu ◽  
Stacia M DeSantis ◽  
Sheng Luo

Longitudinal zero-inflated count data are encountered frequently in substance-use research when assessing the effects of covariates and risk factors on outcomes. Often, both the time to a terminal event such as death or dropout and repeated measure count responses are collected for each subject. In this setting, the longitudinal counts are censored by the terminal event, and the time to the terminal event may depend on the longitudinal outcomes. In the study described herein, we expand the class of joint models for longitudinal and survival data to accommodate zero-inflated counts and time-to-event data by using a Cox proportional hazards model with piecewise constant baseline hazard. We use a Bayesian framework via Markov chain Monte Carlo simulations implemented in the BUGS programming language. Via an extensive simulation study, we apply the joint model and obtain estimates that are more accurate than those of the corresponding independence model. We apply the proposed method to an alpha-tocopherol, beta-carotene lung cancer prevention study.


Author(s):  
Milind A. Phadnis

Aim: To propose an updated algorithm that adds an extra step to the Newton-type algorithm used in robust rank-based non-parametric regression, which minimizes the dispersion function associated with Wilcoxon scores, in order to account for the effect of covariates. Methodology: The proposed accelerated failure time approach incorporates right random censoring in survival data sets with low to moderate levels of censoring. The existing Newton algorithm is modified to account for the effect of one or more covariates: first, Mantel scores are applied to residuals obtained from a regression model; second, the dispersion function of these scored residuals is minimized. A diagnostic check of model fit is performed by inspecting the distribution of the residuals, and suitable Bent scores are considered in the case of skewed residuals. To demonstrate the efficacy of this method, a simulation study compares its power under three scenarios: non-proportional hazard, proportional and constant hazard, and proportional but non-constant hazard. Results: In most situations, this method yielded reasonable estimates of power for detecting an association of the covariate with the response, comparable to popular parametric and semi-parametric approaches. The estimates of the regression coefficient obtained from this method had low bias, low mean square error, and adequate coverage. In a real-life example from a pancreatic cancer study, the proposed method performed admirably well and provided a more realistic interpretation of the effect of covariates (age and Karnofsky score) than a standard parametric (lognormal) model.
Conclusion: In situations where there is no clear best parametric fit for time-to-event data with a moderate level of censoring, the proposed method provides a robust alternative for obtaining regression coefficients (both adjusted and unadjusted), with performance comparable to that of a proportional hazards model.
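The core of the rank-based machinery, minimizing the Jaeckel dispersion of Wilcoxon-scored residuals, can be sketched for a single covariate and uncensored data (the paper's method additionally applies Mantel scores to handle censored residuals; the data below are made up):

```python
import math

# Rank-based (Wilcoxon-score) regression for one covariate: minimize the
# Jaeckel dispersion D(b) = sum_i a(R(e_i)) * e_i over the slope b, where
# e_i = y_i - b*x_i and a(i) = sqrt(12) * (i/(n+1) - 1/2).
# This sketch handles only uncensored data.

def dispersion(b, x, y):
    e = [yi - b * xi for xi, yi in zip(x, y)]
    n = len(e)
    order = sorted(range(n), key=lambda i: e[i])
    d = 0.0
    for rank, i in enumerate(order, start=1):
        a = math.sqrt(12) * (rank / (n + 1) - 0.5)   # Wilcoxon score
        d += a * e[i]
    return d

def fit_slope(x, y, lo=-10.0, hi=10.0):
    # D(b) is convex and piecewise linear, so golden-section search suffices
    g = (math.sqrt(5) - 1) / 2
    for _ in range(200):
        m1 = hi - g * (hi - lo)
        m2 = lo + g * (hi - lo)
        if dispersion(m1, x, y) < dispersion(m2, x, y):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# responses rising with the covariate at slope ~2, plus one gross outlier
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0.1, 2.2, 4.1, 5.9, 8.3, 9.8, 12.1, 2.0]   # last point is the outlier
print(round(fit_slope(x, y), 2))   # stays near the underlying slope of 2
```

Least squares would be dragged well below 2 by the outlier; the rank-based fit changes little, which is the robustness the paper exploits, and the Bent scores it mentions replace the linear Wilcoxon score function when residuals are skewed.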

