Transformation model estimation of survival under dependent truncation and independent censoring

2018
Vol 28 (12)
pp. 3785-3798
Author(s):  
Sy Han Chiou
Matthew D Austin
Jing Qian
Rebecca A Betensky

Truncation is a mechanism that permits observation of selected subjects from a source population; subjects are excluded if their event times are not contained within subject-specific intervals. Standard survival analysis methods for estimating the distribution of the event time require quasi-independence of failure and truncation. When quasi-independence does not hold, alternative estimation procedures are required; currently, there is a copula model approach that makes strong modeling assumptions, and a transformation model approach that does not allow for right censoring. We extend the transformation model approach to accommodate right censoring. We propose a regression diagnostic for assessment of model fit. We evaluate the proposed transformation model in simulations and apply it to the National Alzheimer's Coordinating Center autopsy cohort study and to an AIDS incubation study. Our methods are publicly available in an R package, tranSurv.
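
As a quick illustration of how such an analysis might be set up, the sketch below simulates dependently truncated, independently censored data and fits the transformation model with tranSurv. The trSurvfit interface (truncation times, observed times, event indicator) reflects one reading of the package documentation and should be checked against ?trSurvfit; the simulated data are purely illustrative.

    # Minimal sketch: transformation-model fit under dependent truncation
    # and independent right censoring. The trSurvfit() call signature is
    # assumed from the tranSurv documentation; verify with ?trSurvfit.
    library(tranSurv)  # install.packages("tranSurv")

    set.seed(1)
    n <- 400
    trun  <- runif(n, 0, 4)                      # left-truncation times
    event <- rexp(n, rate = 0.5)                 # latent event times
    keep  <- event >= trun                       # only non-truncated subjects observed
    trun  <- trun[keep]; event <- event[keep]
    cens  <- trun + rexp(sum(keep), rate = 0.2)  # independent right censoring
    obs   <- pmin(event, cens)                   # observed follow-up time
    delta <- as.numeric(event <= cens)           # 1 = event, 0 = censored

    fit <- trSurvfit(trun = trun, obs = obs, delta = delta)
    fit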

2020
Vol 29 (10)
pp. 2830-2850
Author(s):  
Malka Gorfine
Matan Schlesinger
Li Hsu

This work presents novel and powerful tests for comparing non-proportional hazard functions, based on sample-space partitions. Right censoring introduces two major difficulties that make the existing sample-space partition tests for uncensored data inapplicable: (i) the actual event times of censored observations are unknown, and (ii) the standard permutation procedure is invalid when the censoring distributions of the groups are unequal. We overcome these two obstacles, introduce invariant tests, and prove their consistency. Extensive simulations reveal that under non-proportional alternatives, the proposed tests often have higher power than existing popular tests for non-proportional hazards. An efficient implementation of our tests is available in the R package KONPsurv, which can be freely downloaded from CRAN.
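
To make the intended usage concrete, here is a small sketch of calling the package on two simulated groups with crossing hazards; the konp_test(time, status, group, n_perm) signature is taken from the package help page and should be verified there.

    # Minimal sketch: KONP permutation tests on two simulated groups with
    # crossing hazards. Argument names are assumed from ?konp_test.
    library(KONPsurv)  # install.packages("KONPsurv")

    set.seed(2)
    n <- 100
    t1 <- rexp(n, rate = 1)                  # group 1: exponential
    t2 <- rweibull(n, shape = 3, scale = 1)  # group 2: hazard crosses group 1's
    cens   <- rexp(2 * n, rate = 0.3)        # independent censoring
    time   <- pmin(c(t1, t2), cens)
    status <- as.numeric(c(t1, t2) <= cens)  # 1 = event, 0 = censored
    group  <- rep(1:2, each = n)

    konp_test(time = time, status = status, group = group, n_perm = 999)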


2021
pp. 096228022110370
Author(s):  
Brice Ozenne
Esben Budtz-Jørgensen
Julien Péron

The benefit–risk balance is critical information when evaluating a new treatment. The Net Benefit has been proposed as a metric for benefit–risk assessment and has been applied in oncology to simultaneously consider gains in survival and possible side effects of chemotherapies. With complete data, one can construct a U-statistic estimator for the Net Benefit and obtain its asymptotic distribution using standard results of U-statistic theory. However, real data are often subject to right-censoring, e.g. patient drop-out in clinical trials. It is then possible to estimate the Net Benefit using a modified U-statistic involving the survival time, which can be seen as a nuisance parameter affecting the asymptotic distribution of the Net Benefit estimator. We present here how existing asymptotic results on U-statistics can be applied to estimate the distribution of the Net Benefit estimator, and we assess their validity in finite samples. The methodology generalizes to other statistics obtained using generalized pairwise comparisons, such as the win ratio. It is implemented in the R package BuyseTest (version 2.3.0 and later), available on the Comprehensive R Archive Network.
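
For orientation, the sketch below estimates the Net Benefit on simulated two-arm data using the package's formula interface; the tte() endpoint syntax and the statistic argument of summary() follow the BuyseTest vignette, but treat the exact arguments as assumptions to be checked against the package documentation.

    # Minimal sketch: Net Benefit (and win ratio) via generalized pairwise
    # comparisons on right-censored data. Endpoint syntax assumed from the
    # BuyseTest vignette; see ?BuyseTest.
    library(BuyseTest)  # install.packages("BuyseTest")

    set.seed(3)
    df <- data.frame(
      treatment = rep(c("control", "experimental"), each = 100),
      time      = rexp(200, rate = rep(c(0.20, 0.15), each = 100)),
      status    = rbinom(200, 1, 0.8)   # 0 = right-censored, 1 = event
    )

    BT <- BuyseTest(treatment ~ tte(time, status = status, threshold = 0.5),
                    data = df, trace = 0)
    summary(BT)                          # Net Benefit with asymptotic CI
    summary(BT, statistic = "winRatio")  # same pairwise comparisons, win ratio scale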


2021
Vol 39 (15_suppl)
pp. e13543-e13543
Author(s):  
Enrique Barrajon
Antonio López Jiménez
Laura Barrajon

Background: Independently of biases inherent in the design and execution of clinical trials, bias may result from patient censoring. A bias index (BI) was developed to detect right-censoring bias and tested on datasets available at Project Data Sphere, a data-sharing research platform maintained by the CEO Roundtable on Cancer, Inc.,* a nonprofit corporation working to improve outcomes for cancer patients by openly sharing deidentified data.

Methods: The Project Data Sphere platform was searched for comparative clinical trials with available experimental and comparator survival datasets: overall survival (OS) and event-free survival (EFS: disease-free survival, DFS, or progression-free survival, PFS). The R language and the RStudio integrated development environment were used to import and manage the datasets. BI was defined in the event-time domain as the adjusted proportion of censoring times below the mean event time. Comparisons of BI across datasets were made with the two-sided unpaired Wilcoxon test. A weighted regression model was applied to estimate the influence of bias on survival results as measured by the hazard ratio (HR).

Results: Out of 184 trials, 19 offered both comparator and experimental arms, 3 of them not based on survival analysis and 4 of them with 2 substudies, providing 72 datasets based on OS and/or EFS, for a total of 16,532 patients (90.8% of the 18,198 patients in the published trials). BI over the threshold was found in 24% of EFS datasets (versus 0% of OS datasets, Wilcoxon p = 0.0007), especially in PFS (35% vs 0% in DFS datasets, p = 0.00004). Nearly two thirds of the variance in the HR of EFS datasets was explained by the HR of OS datasets (adj.R2 = 0.638, p = 1.5e-5), approaching what was found in the corresponding publications (adj.R2 = 0.751, p = 7.81e-5). Though the sample of trials is small, introducing the BI of the control and experimental datasets into the model decreases the residual standard error (3.831 vs 3.958) and increases the correlation (adj.R2 = 0.99, p < 2.2e-16), resulting in the model: HR(EFS) = 0.985 HR(OS) + 0.36 BI(exper) - 0.42 BI(control).

Conclusions: This study is a proof of concept that right-censoring bias can be detected and estimated in clinical trials, especially in PFS datasets, and it opens the possibility of correcting biased survival estimates and increasing the precision of predicting OS from preliminary EFS. (*) This abstract is based on research using information obtained from ProjectDataSphere.org, which is maintained by Project Data Sphere LLC. Neither Project Data Sphere nor the owners of any information from the web site have contributed to, approved, or are in any way responsible for the contents of this abstract.
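
The abstract gives only a verbal definition of BI, so the following is an illustrative, unadjusted reading of it (the proportion of censoring times falling below the mean observed event time), not the authors' exact index; the adjustment they apply is not specified here.

    # Illustrative (unadjusted) bias index: the proportion of censoring
    # times that fall below the mean observed event time. The authors'
    # actual adjustment is not described in the abstract.
    bias_index <- function(time, status) {
      # status: 1 = event observed, 0 = right-censored
      mean_event <- mean(time[status == 1])
      cens_times <- time[status == 0]
      if (length(cens_times) == 0) return(0)
      mean(cens_times < mean_event)      # share of "early" censoring
    }

    set.seed(4)
    time   <- rexp(300, rate = 0.1)
    status <- rbinom(300, 1, 0.7)
    bias_index(time, status)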


2019
Vol 10 (2)
pp. 569-579
Author(s):  
Aurélien Cottin
Benjamin Penaud
Jean-Christophe Glaszmann
Nabila Yahiaoui
Mathieu Gautier

Hybridizations between species and subspecies represented major steps in the history of many crop species. Such events generally lead to genomes with mosaic patterns of chromosomal segments of various origins, which may be assessed by local ancestry inference methods. However, these methods have mainly been developed in the context of human population genetics, with implicit assumptions that may not always fit plant models. The purpose of this study was to evaluate the suitability of three state-of-the-art inference methods (SABER, ELAI and WINPOP) for local ancestry inference under scenarios that can be encountered in plant species. For this, we developed an R package to simulate genotyping data under such scenarios. The tested inference methods performed similarly well as long as representatives of the source populations were available. As expected, the higher the level of differentiation between ancestral source populations and the lower the number of generations since admixture, the more accurate the results. Interestingly, the accuracy of the methods was only marginally affected by i) the number of ancestries (up to six tested); ii) the sample design (i.e., unbalanced representation of source populations); and iii) the reproduction mode (e.g., selfing, vegetative propagation). If a source population was not represented in the data set, no bias was observed in inference accuracy for regions originating from represented sources, and regions from the missing source were assigned differently depending on the method. Overall, the selected ancestry inference methods may be used for crop plant analysis if all ancestral sources are known.
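
The simulation package itself is not named in the abstract, so the sketch below is only a toy illustration of the kind of data such a tool generates: a haploid genome assembled as a mosaic of two ancestral sources, with ancestry switches at breakpoints that accumulate with the number of generations since admixture.

    # Toy sketch (not the authors' package): simulate one admixed haplotype
    # as a mosaic of two source populations.
    set.seed(5)
    n_snp <- 1000
    g     <- 10                                  # generations since admixture
    freq1 <- runif(n_snp, 0.1, 0.9)              # allele frequencies, source 1
    freq2 <- pmin(pmax(freq1 + rnorm(n_snp, 0, 0.3), 0.01), 0.99)  # differentiated source 2

    n_break  <- rpois(1, g)                      # breakpoint count grows with g
    breaks   <- sort(sample(n_snp - 1, n_break))
    segment  <- findInterval(seq_len(n_snp), breaks + 1) + 1
    ancestry <- sample(1:2, n_break + 1, replace = TRUE)[segment]

    p <- ifelse(ancestry == 1, freq1, freq2)     # segment-specific frequencies
    haplotype <- rbinom(n_snp, 1, p)             # simulated allele calls
    table(ancestry)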


2019
Vol 62 (1)
pp. 9-18
Author(s):  
Wenting Wang
Shuiqing Yin
Yun Xie
Mark A. Nearing
...  

Minimum inter-event time (MIT) is an index used to delineate independent storms from sub-daily rainfall records. An individual storm is defined as a period of rainfall whose preceding and succeeding dry periods are at least as long as the MIT. The exponential method was used to determine an appropriate MITexp for the eastern monsoon region of China based on observed 1-min resolution rainfall data from 18 stations. Results showed that dry periods between storms greater than MITexp followed an exponential distribution. MITexp values varied from 7.6 h to 16.6 h using 1-min precipitation data, values not statistically different from those obtained using hourly data at p = 0.05. At least ten years of records were necessary to obtain a stable MIT. Storm properties are sensitive to the choice of MIT, especially when MIT values are small. Average precipitation depths across all stations were 45% greater, durations were 84% longer, maximum 30-min intensities were 27% greater, and average rainfall intensities were 20% less when using an MIT of 10 h (the average value of MITexp over the 18 stations) compared to 2 h. This indicates that more attention should be paid to the choice of MIT as it relates to storm properties.

Keywords: China, Exponential method, Minimum inter-event time, Storm, Storm property.
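
To illustrate the logic of the exponential method (independent storm arrivals imply exponentially distributed dry periods, whose coefficient of variation equals 1), here is a sketch that scans candidate MIT values and keeps the one whose exceeding gaps look most exponential; this is one common operationalization, and the paper's actual fitting procedure may differ in detail.

    # Sketch of the exponential method: for exponential gaps, CV = 1, so
    # pick the candidate MIT whose exceeding dry periods (shifted by the
    # MIT, using memorylessness) have CV closest to 1.
    find_mit_exp <- function(gaps_h, candidates = seq(0.5, 24, by = 0.5)) {
      cv <- sapply(candidates, function(mit) {
        g <- gaps_h[gaps_h > mit] - mit   # exceedances over candidate MIT
        if (length(g) < 30) return(NA)    # too few gaps to judge
        sd(g) / mean(g)
      })
      candidates[which.min(abs(cv - 1))]
    }

    set.seed(6)
    # toy mixture: short within-storm lulls plus long between-storm gaps (hours)
    gaps <- c(rexp(500, rate = 2), rexp(200, rate = 1 / 12))
    find_mit_exp(gaps)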


Biometrics
2019
Vol 75 (2)
pp. 439-451
Author(s):  
Nicole Barthel
Candida Geerdens
Claudia Czado
Paul Janssen

2010
Vol 20
pp. 684
Author(s):  
Marta Abrusan

This paper offers a predictive mechanism for deriving the presuppositions of verbs. The starting point is the intuition, dating back at least to Stalnaker (1974), that information conveyed by a sentence that is in some sense independent of its main point is presupposed. The contribution of this paper is to spell out a mechanism for deciding what will become the main point of the sentence and how to calculate independence. It is proposed that this can be calculated by making reference to event times. As a very rough approximation, the main point of an utterance is what (in a sense to be defined) has to be about the event time of the matrix predicate, and the information that the sentence conveys but that is not (or does not have to be) about the event time of the matrix predicate is presupposed. The notion of aboutness used to calculate independence is based on Demolombe and Farinas del Cerro (2000).


2017
Author(s):  
M. Umut Caglar
Ashley I. Teufel
Claus O Wilke

Sigmoidal and double-sigmoidal dynamics are commonly observed in many areas of biology. Here we present sicegar, an R package for the automated fitting and classification of sigmoidal and double-sigmoidal data. The package categorizes data into one of three categories, "no signal", "sigmoidal", or "double sigmoidal", by rigorously fitting a series of mathematical models to the data. Data are labeled as "ambiguous" if neither the sigmoidal nor the double-sigmoidal model fits them well. In addition to performing the classification, the package also reports a wealth of metrics as well as biologically meaningful parameters describing the sigmoidal or double-sigmoidal curves. In extensive simulations, we find that the package performs well, can recover the original dynamics even under fairly high noise levels, and will typically classify curves as "ambiguous" rather than misclassify them. The package is available on CRAN and comes with extensive documentation and usage examples.
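
A minimal usage sketch follows; the expected input (a data frame with time and intensity columns passed to fitAndCategorize()) and the location of the reported decision reflect one reading of the package vignette and should be verified against ?fitAndCategorize.

    # Minimal sketch: classify a noisy sigmoid with sicegar. The input
    # format and the summaryVector$decision field are assumed from the
    # package vignette.
    library(sicegar)  # install.packages("sicegar")

    set.seed(7)
    time      <- seq(0, 24, length.out = 60)
    intensity <- 10 / (1 + exp(-(time - 8))) + rnorm(60, sd = 0.3)
    dataInput <- data.frame(time = time, intensity = intensity)

    fitObj <- fitAndCategorize(dataInput)
    fitObj$summaryVector$decision   # e.g. "sigmoidal"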


2019
Author(s):  
Julian Karch

The amount of variance explained is widely reported for quantifying the fit of a multiple linear regression model. The default adjusted R-squared estimator has the disadvantage of being biased. The theoretically optimal Olkin-Pratt estimator is unbiased; despite this, it is not used, because it is difficult to compute. In this paper, I present an algorithm for the exact and fast computation of the Olkin-Pratt estimator, which makes its use practical. I compare the Olkin-Pratt estimator, the adjusted R-squared estimator, and 18 alternative estimators in a simulation study. The metrics I use for comparison closely resemble established theoretical optimality properties. Importantly, the exact Olkin-Pratt estimator is shown to be optimal under the standard metric, which considers an estimator optimal if it has the least mean squared error among all unbiased estimators. Under the important alternative metric, which simply seeks the estimator with the lowest mean squared error, no optimal estimator could be identified. Based on these results, I provide recommendations on when to use which estimator, which first and foremost depend on which metric is deemed most appropriate. If such a choice is infeasible, I recommend using the exact Olkin-Pratt estimator instead of the default adjusted R-squared estimator. To facilitate this, I provide the R package altR2, which implements the Olkin-Pratt estimator as well as all the other estimators.
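
The exact estimator has a closed form in terms of Gauss's hypergeometric function, rho2 = 1 - (N-3)/(N-p-1) * (1-R2) * 2F1(1, 1; (N-p+1)/2; 1-R2) (Olkin and Pratt, 1958), which the sketch below evaluates by direct series summation. In practice the altR2 package should be preferred; this only shows the underlying computation.

    # Sketch: exact Olkin-Pratt estimator via series evaluation of
    # 2F1(1, 1; c; x) = sum_k k! x^k / (c)_k. For a = b = 1, successive
    # terms satisfy t_{k+1} = t_k * (k + 1) * x / (c + k).
    hyp2f1_11 <- function(c_par, x, tol = 1e-12, max_iter = 10000) {
      term <- 1; total <- 1
      for (k in 0:max_iter) {
        term  <- term * (k + 1) * x / (c_par + k)
        total <- total + term
        if (abs(term) < tol) break
      }
      total
    }

    olkin_pratt <- function(R2, N, p) {
      1 - (N - 3) / (N - p - 1) * (1 - R2) *
        hyp2f1_11((N - p + 1) / 2, 1 - R2)
    }

    olkin_pratt(R2 = 0.30, N = 50, p = 5)  # compare with adjusted R-squared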


2019
Vol 19 (1)
Author(s):  
Audrey Béliveau
Devon J. Boyne
Justin Slater
Darren Brenner
Paul Arora

Background: Several reviews have noted shortcomings regarding the quality and reporting of network meta-analyses (NMAs). We suspect that this issue may be partially attributable to limitations in current NMA software, which does not readily produce all of the output needed to satisfy current guidelines.

Results: To better facilitate the conduct and reporting of NMAs, we have created an R package called "BUGSnet" (Bayesian inference Using Gibbs Sampling to conduct a Network meta-analysis). This R package relies on Just Another Gibbs Sampler (JAGS) to conduct Bayesian NMA using a generalized linear model. BUGSnet contains a suite of functions that can be used to describe the evidence network, estimate a model, assess the model fit and convergence, assess the presence of heterogeneity and inconsistency, and output the results in a variety of formats, including league tables and surface under the cumulative ranking curve (SUCRA) plots. We provide a demonstration of the functions contained within BUGSnet by recreating a Bayesian NMA found in the second technical support document composed by the National Institute for Health and Care Excellence Decision Support Unit (NICE-DSU). We have also mapped these functions to checklist items within current reporting and best-practice guidelines.

Conclusion: BUGSnet is a new R package that can be used to conduct a Bayesian NMA and produce all of the necessary output needed to satisfy current scientific and regulatory standards. We hope that this software will help to improve the conduct and reporting of NMAs.
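
To give a feel for the workflow, here is a compressed sketch following the paper's demonstration; the function names and arguments (data.prep, net.plot, nma.model, nma.run, nma.fit, nma.league, nma.rank) are taken from that demonstration and should be checked against the package documentation, and the arm-level data are invented for illustration.

    # Sketch of the BUGSnet workflow on toy arm-level binomial data.
    # Function and argument names follow the paper's demonstration.
    # remotes::install_github("audrey-b/BUGSnet")
    library(BUGSnet)

    arm_level_data <- data.frame(           # invented toy data, one row per arm
      study      = rep(c("S1", "S2", "S3"), each = 2),
      treatment  = c("Placebo", "A", "Placebo", "B", "A", "B"),
      events     = c(12, 8, 15, 9, 10, 7),
      sampleSize = c(100, 100, 120, 120, 90, 90)
    )

    dat <- data.prep(arm.data = arm_level_data,
                     varname.t = "treatment", varname.s = "study")
    net.plot(dat)                           # describe the evidence network

    model   <- nma.model(data = dat, outcome = "events", N = "sampleSize",
                         reference = "Placebo", family = "binomial",
                         link = "logit", effects = "random")
    results <- nma.run(model, n.adapt = 1000, n.burnin = 1000, n.iter = 10000)

    nma.fit(results)                        # model fit: leverage, DIC
    nma.league(results)                     # league table of comparisons
    nma.rank(results, largerbetter = FALSE) # SUCRA plot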

