parametric bootstrapping
Recently Published Documents

TOTAL DOCUMENTS: 46 (five years: 17)
H-INDEX: 9 (five years: 3)

Author(s): John J Cronin, Allan M Zarembski, Joseph W Palese

The railroad industry has historically used the 2-Parameter Weibull equation to determine the rate of rail fatigue defect occurrences and to forecast the fatigue life of railroad rail. However, the 2-Parameter Weibull equation has significant limitations, including an inability to analyze segments of track with a limited number of rail defects. These limitations are addressed by modifying the traditional 2-Parameter Weibull equation with a novel approach developed from parametric bootstrapping, yielding a Parametric Bootstrapping modified Weibull (PBW) forecasting approach. This methodology is applied to rail segments with too few defects for appropriate defect forecasting analysis. The PBW method thus provides reasonable estimates of the defect rate for track segments that have little or no prior defect history. This allows more track to be analyzed and forecasts the probability of rail defect occurrence as a function of key parameters such as cumulative traffic over the rail. A validation of the proposed methodology was performed. Comparison of the output results for over 300,000 track segments with over 200,000 rail defects showed a major improvement in the percentage of segments with reasonable Weibull parameters (alpha and beta): from 11% of segments using traditional Weibull analysis to 77% using the PBW approach. These results show that the PBW approach introduced here offers a more accurate and effective way to determine the probability of developing future rail defects, a benefit to railroads in planning maintenance of their expensive rail assets.
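The abstract does not spell out the PBW equations, so the following Python sketch illustrates only the underlying idea under stated assumptions: fit a 2-parameter Weibull to a small, entirely hypothetical set of defect tonnages, then draw parametric bootstrap replicates from the fitted distribution to gauge how stable the shape and scale estimates are. It is not the authors' PBW procedure.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)

# Hypothetical cumulative-traffic values (MGT) at which defects occurred on a
# short track segment that has only a handful of recorded defects.
defect_mgt = np.array([310.0, 420.0, 455.0, 510.0, 605.0])

# Fit a 2-parameter Weibull (location fixed at 0); shape and scale correspond
# to the alpha/beta parameters discussed in the abstract.
shape_hat, _, scale_hat = weibull_min.fit(defect_mgt, floc=0)

# Parametric bootstrap: resample from the fitted Weibull and refit each replicate.
n_boot = 2000
boot = np.empty((n_boot, 2))
for b in range(n_boot):
    sample = weibull_min.rvs(shape_hat, loc=0, scale=scale_hat,
                             size=defect_mgt.size, random_state=rng)
    c, _, s = weibull_min.fit(sample, floc=0)
    boot[b] = (c, s)

# The spread of the replicates shows how (un)stable the small-sample fit is,
# which is the kind of information a PBW-style procedure exploits.
shape_ci = np.percentile(boot[:, 0], [2.5, 97.5])
scale_ci = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"shape: {shape_hat:.2f}  95% bootstrap interval {shape_ci.round(2)}")
print(f"scale: {scale_hat:.0f}  95% bootstrap interval {scale_ci.round(0)}")
```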


2021, Vol 39 (15_suppl), pp. 6568-6568
Author(s): Robert J. Motzer, Toni K. Choueiri, Jessica May, Youngmin Kwon, Nifasha Rusibamayila, ...

6568 Background: After a minimum follow-up of 48 months (mos), the CheckMate 214 trial (phase 3, NCT02231749) continued to demonstrate a significant overall (OS) and progression-free (PFS) survival benefit for nivolumab plus ipilimumab (N+I) vs. sunitinib (SUN) in advanced renal cell carcinoma (aRCC) patients (pts) with intermediate (I) or poor (P) International Metastatic RCC Database Consortium (IMDC) risk factors (median OS: 48.1 vs. 26.6 mos, HR: 0.65, 95% confidence interval [95% CI]: 0.54, 0.78; 48-mos PFS: 32.7% vs. 12.3%, HR: 0.74, 95% CI: 0.62, 0.88) (Albiges et al. ESMO Open 2020). To further understand the clinical benefits and risks of N+I vs. SUN, we evaluated quality-adjusted time without symptoms or toxicity (Q-TWiST) over time using up to 57 mos of follow-up in CheckMate 214. Methods: OS was partitioned into 3 states: time with any grade 3 or 4 adverse events (TOX), time without symptoms of disease or toxicity (TWiST), and time after progression (REL). Q-TWiST is a metric that combines the quantity and quality (i.e., “utility”) of time spent in each of the 3 states TWiST, TOX, and REL. Prior research (Revicki et al., Qual Life Res, 2006) has established that relative gains in Q-TWiST (i.e., the Q-TWiST gain divided by OS in SUN) of ≥ 10% and ≥ 15% can be considered “clinically important” and “clearly clinically important”, respectively. Non-parametric bootstrapping was used to generate 95% CIs. To observe changes in quality-adjusted survival gains over time, absolute and relative Q-TWiST were calculated up to 57 mos at 12-mo intervals. Results: With 57-mos follow-up, compared to SUN pts, N+I pts (N = 847) had significantly longer time in the TWiST state (+7.1 mos [95% CI: 4.2, 10.4]). The between-group differences in the TOX state (0.3 mos [95% CI: -0.2, 0.8]) and REL state (-1.2 mos [95% CI: -4.1, 1.5]) were not statistically significant. The Q-TWiST gain in the N+I vs. SUN arms was 6.6 mos (95% CI: 4.1, 9.4), a 21.2% relative gain. Q-TWiST gains progressively increased over the follow-up period and exceeded the “clinically important” threshold around 27 mos (Table). These gains were driven by steady increases in TWiST gains, from 0.4 mos (after 12 mos) to 7.1 mos (after 57 mos). Conclusions: In CheckMate 214, N+I resulted in a statistically significant and “clearly clinically important” (≥ 15%) longer quality-adjusted survival vs. SUN, which increased with longer follow-up. Q-TWiST gains were primarily driven by time in “good” health (i.e., TWiST), which largely resulted from the long-term PFS benefits seen for N+I vs. SUN. Clinical trial information: NCT02231749. [Table: see text]
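Q-TWiST weights the mean time spent in each health state by a utility and sums the results; non-parametric bootstrapping (resampling patients with replacement) supplies the confidence intervals. The sketch below is a minimal Python illustration with synthetic patient-level durations and the common convention u(TWiST) = 1 plus illustrative u(TOX) = u(REL) = 0.5; the actual analysis derives state times from partitioned (restricted-mean) survival curves rather than complete per-patient durations.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_twist(tox, twist, rel, u_tox=0.5, u_twist=1.0, u_rel=0.5):
    """Utility-weighted sum of mean state durations (months)."""
    return u_tox * tox.mean() + u_twist * twist.mean() + u_rel * rel.mean()

# Synthetic patient-level state durations (months) for two arms; for
# illustration only -- not CheckMate 214 data.
n_a, n_b = 300, 300
arm_a = {"tox": rng.exponential(2.0, n_a),
         "twist": rng.exponential(14.0, n_a),
         "rel": rng.exponential(8.0, n_a)}
arm_b = {"tox": rng.exponential(1.8, n_b),
         "twist": rng.exponential(8.0, n_b),
         "rel": rng.exponential(9.0, n_b)}

observed_diff = q_twist(**arm_a) - q_twist(**arm_b)

# Non-parametric bootstrap: resample patients with replacement within each arm.
diffs = []
for _ in range(2000):
    ia = rng.integers(0, n_a, n_a)
    ib = rng.integers(0, n_b, n_b)
    qa = q_twist(arm_a["tox"][ia], arm_a["twist"][ia], arm_a["rel"][ia])
    qb = q_twist(arm_b["tox"][ib], arm_b["twist"][ib], arm_b["rel"][ib])
    diffs.append(qa - qb)

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"Q-TWiST difference: {observed_diff:.1f} months (95% CI {lo:.1f}, {hi:.1f})")
```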


2021, Vol 10 (1)
Author(s): Xiaofeng Steven Liu

Abstract. Objectives: We introduce a simple and unified methodology to estimate the bias of Pearson correlation coefficients, partial correlation coefficients, and semi-partial correlation coefficients. Methods: Our methodology features non-parametric bootstrapping and can accommodate small-sample data without making any distributional assumptions. Results: Two examples with R code are provided to illustrate the computation. Conclusions: The computation strategy is easy to implement and remains the same whether the target is a Pearson, partial, or semi-partial correlation.
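The paper's worked examples are in R; as a language-neutral illustration of the same non-parametric bootstrap bias estimate (bias ≈ mean of the bootstrap correlations minus the observed correlation), here is a short Python sketch on synthetic data. The same loop applies to partial or semi-partial correlations by swapping in the corresponding statistic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small synthetic sample; the sample size and effect are arbitrary choices.
n = 20
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)

r_hat = np.corrcoef(x, y)[0, 1]

# Non-parametric bootstrap: resample (x, y) pairs with replacement, recompute r.
boot_r = np.empty(5000)
for b in range(boot_r.size):
    idx = rng.integers(0, n, n)
    boot_r[b] = np.corrcoef(x[idx], y[idx])[0, 1]

bias_hat = boot_r.mean() - r_hat      # bootstrap estimate of the bias
r_corrected = r_hat - bias_hat        # equivalently 2*r_hat - mean(boot_r)
print(f"r = {r_hat:.3f}, estimated bias = {bias_hat:+.4f}, corrected r = {r_corrected:.3f}")
```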


2020, Vol 223 (22), pp. jeb233254
Author(s): Adriana P. Rebolledo, Carla M. Sgrò, Keyne Monro

ABSTRACT Understanding thermal performance at the life stages that limit persistence is necessary to predict responses to climate change, especially for ectotherms whose fitness (survival and reproduction) depends on environmental temperature. Ectotherms often undergo stage-specific changes in size, complexity and duration that are predicted to modify thermal performance. Yet performance is mostly explored for adults, while performance at the earlier stages that typically limit persistence remains poorly understood. Here, we experimentally isolate thermal performance curves at the fertilization, embryo development and larval development stages in an aquatic ectotherm whose early planktonic stages (gametes, embryos and larvae) govern adult abundances and dynamics. Unlike previous studies based on short-term exposures, on responses with unclear links to fitness, or on proxies in lieu of explicit curve descriptors (thermal optima, limits and breadth), we measured performance as successful completion of each stage after exposure throughout it, and at temperatures that explicitly capture the curve descriptors at all stages. Formal comparisons of descriptors using a combination of generalized linear mixed modelling and parametric bootstrapping reveal important differences among life stages. Thermal performance differs significantly from fertilization to embryo development (with the thermal optimum declining by ∼2°C, thermal limits shifting inwards by ∼8–10°C and thermal breadth narrowing by ∼10°C), while performance declines independently of temperature thereafter. Our comparisons show that thermal performance at one life stage can misrepresent performance at others, and point to gains in complexity during embryogenesis, rather than subsequent gains in size or duration of exposure, as a key driver of thermal sensitivity in early life.
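The study fits generalized linear mixed models; the sketch below drops the random effects and shows only the parametric-bootstrap step for a single curve descriptor, the thermal optimum of a quadratic-logit performance curve, using synthetic survival counts at hypothetical temperatures. New response data are simulated from the fitted model and refit to obtain an interval for the descriptor, the same logic that underlies comparing descriptors across stages.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic stage data: number completing the stage out of n_trials at each temperature.
temps = np.array([10., 14., 18., 22., 26., 30.])
n_trials = np.full(temps.size, 50)
true_p = 1 / (1 + np.exp(-(-20 + 2.2 * temps - 0.055 * temps**2)))
successes = rng.binomial(n_trials, true_p)

def fit_optimum(succ):
    """Fit a quadratic-logit binomial GLM and return the thermal optimum -b1/(2*b2)."""
    X = sm.add_constant(np.column_stack([temps, temps**2]))
    y = np.column_stack([succ, n_trials - succ])
    res = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    b0, b1, b2 = res.params
    return -b1 / (2 * b2), res

topt_hat, fitted = fit_optimum(successes)

# Parametric bootstrap: simulate new counts from the fitted curve and refit each time.
p_hat = fitted.predict(sm.add_constant(np.column_stack([temps, temps**2])))
boot_topt = np.array([fit_optimum(rng.binomial(n_trials, p_hat))[0]
                      for _ in range(1000)])
lo, hi = np.percentile(boot_topt, [2.5, 97.5])
print(f"Thermal optimum: {topt_hat:.1f} C (95% parametric bootstrap CI {lo:.1f}, {hi:.1f})")
```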


2020
Author(s): Jeffrey N Chiang, Ulzee An, Misagh Kordi, Brandon Jew, Clifford Kravit, ...

During the initial wave of the COVID-19 pandemic in the United States, hospitals took drastic action to ensure sufficient capacity, including canceling or postponing elective procedures, expanding the number of available intensive care beds and ventilators, and creating regional overflow hospital capacity. However, in most locations the actual number of patients did not reach the projected surge, leaving hospital capacity available but unused. As a result, patients may have delayed needed care and hospitals lost substantial revenue. These initial recommendations were based on observations and worst-case epidemiological projections, which generally assume that a fixed proportion of COVID-19 patients will require hospitalization and advanced resources. This assumption has led to an overestimate of resource demand as clinical protocols improve and testing becomes more widely available over the course of the pandemic. Here, we present a parametric bootstrap model for forecasting the resource demands of incoming patients in the near term, and apply it to the current pandemic. We validate our approach using observed cases at UCLA Health and simulate the effect of elective procedure cancellation against worst-case pandemic scenarios. Using our approach, we show that it is unnecessary to cancel elective procedures unless the actual census of COVID-19 patients approaches the hospital's maximum capacity. Instead, we propose a strategy of balancing the resource demands of elective procedures against projected patients, revisiting the projections regularly to maintain operating efficiency. This strategy has been in place at UCLA Health since mid-April.
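The preprint's model specification is not reproduced in the abstract; the sketch below shows only the generic shape of such a near-term forecast under stated assumptions: fit simple admission and resource parameters to recent counts (all numbers here are made up), then repeatedly simulate future admissions and downstream ICU demand from the fitted model and read planning bounds off the simulated distribution. A fuller parametric bootstrap would also propagate uncertainty in the fitted parameters themselves.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical recent daily COVID-19 admissions (last 14 days).
daily_admissions = np.array([12, 9, 15, 11, 10, 14, 13, 8, 12, 16, 11, 9, 13, 12])
lam_hat = daily_admissions.mean()   # fitted Poisson admission rate
p_icu_hat = 0.22                    # fitted fraction needing ICU (hypothetical)
mean_los = 9.0                      # mean ICU length of stay in days (hypothetical)

horizon, n_sims = 14, 5000
peak_icu_census = np.empty(n_sims)

# Simulate admissions and ICU demand from the fitted model, many times over.
for s in range(n_sims):
    admits = rng.poisson(lam_hat, horizon)
    icu_admits = rng.binomial(admits, p_icu_hat)
    census = np.zeros(horizon)
    for day, k in enumerate(icu_admits):
        stays = rng.exponential(mean_los, k).astype(int) + 1
        for stay in stays:
            census[day:day + stay] += 1   # occupy a bed for the sampled stay
    peak_icu_census[s] = census.max()

print("Median peak ICU census over 14 days:", np.median(peak_icu_census))
print("95th percentile (planning upper bound):", np.percentile(peak_icu_census, 95))
```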


2020, Vol 37 (11), pp. 3353-3362
Author(s): Peter B Chi, Westin M Kosater, David A Liberles

Abstract Methods for detecting positive selection have known limitations. A major one is that common methods cannot differentiate between positive selection and compensatory covariation. Further, the traditional method of calculating the ratio of nonsynonymous to synonymous substitutions (dN/dS) takes into account neither the 3D structure of biomacromolecules nor differences between amino acids. It also does not account for saturation of synonymous substitutions (dS) over long evolutionary times, which renders codon-based methods ineffective for older divergences. This work aims to address these shortcomings through the development of a statistical model that examines clusters of substitutions within regions of variable radii. Additionally, it uses a parametric bootstrapping approach to differentiate positive selection from compensatory processes. A previously reported case of positive selection in the leptin protein of primates was reexamined using this methodology.
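The paper's cluster statistic and null model are not given in the abstract, so the sketch below shows only the generic parametric-bootstrap test logic such an approach relies on: simulate datasets under a fitted null model (substitutions placed without clustering beyond chance), recompute the clustering statistic, and compare the observed value against that null distribution. The toy statistic here uses linear sequence windows, whereas the paper works with 3D structural neighbourhoods.

```python
import numpy as np

rng = np.random.default_rng(11)

def clustering_stat(positions, radius=3):
    """Toy statistic: maximum number of substitutions within any window of given radius."""
    return max(np.sum(np.abs(positions - p) <= radius) for p in positions)

# Observed substitution positions along a 200-residue protein (hypothetical).
observed = np.array([14, 15, 17, 18, 21, 90, 143])
protein_length = 200
t_obs = clustering_stat(observed)

# Parametric bootstrap under the null: the same number of substitutions,
# placed uniformly at random (no clustering beyond chance).
n_boot = 10000
t_null = np.array([
    clustering_stat(rng.choice(protein_length, size=observed.size, replace=False))
    for _ in range(n_boot)
])

p_value = (np.sum(t_null >= t_obs) + 1) / (n_boot + 1)
print(f"observed statistic = {t_obs}, parametric bootstrap p = {p_value:.4f}")
```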


2020, Vol 0 (0)
Author(s): F. Zertuche, A. Meza-Peñaloza

Abstract For more than 50 years, the Mean Measure of Divergence (MMD) has been one of the most prominent tools used in anthropology for the study of non-metric traits. However, a recurring problem in anthropology, and even more so in palaeoanthropology, is the lack of sufficiently large samples, or the existence of samples with too few measured traits. Since 1969, with the advent of bootstrapping techniques, this issue has been tackled successfully in many different ways. Here, we present a parametric bootstrap technique based on the fact that the transformed θ, obtained from the Anscombe transformation to stabilize the variance, nearly follows a normal distribution with standard deviation $\sigma = 1 / \sqrt{N + 1/2}$, where N is the sample size for the measured trait. When the probability distribution is known, parametric procedures offer more powerful results than non-parametric ones. We exploit the known distribution of θ to develop a parametric bootstrapping method, which we explain carefully with mathematical support. We give examples with both artificial and real data. Our results show that this parametric bootstrap procedure is a powerful tool for studying samples with scarce data.
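Taking the abstract's stated result at face value, that the Anscombe-transformed trait frequency θ is approximately normal with standard deviation 1/sqrt(N + 1/2), a parametric bootstrap for the MMD can draw replicate θ values from that normal distribution and recompute the divergence. The Python sketch below does exactly that on hypothetical trait counts; the transformation and the MMD correction term follow common textbook forms and may differ in detail from the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

def anscombe_theta(k, n):
    """Variance-stabilizing arcsine transform (one common Anscombe form; assumed, not from the paper)."""
    return np.arcsin(1 - 2 * (k + 3/8) / (n + 3/4))

def mmd(theta_a, n_a, theta_b, n_b):
    """Mean Measure of Divergence with a 1/(N + 1/2) variance correction (textbook form)."""
    return np.mean((theta_a - theta_b) ** 2 - 1 / (n_a + 0.5) - 1 / (n_b + 0.5))

# Hypothetical non-metric trait counts: k = individuals showing the trait, n = individuals scored.
k_a = np.array([12, 4, 20, 7]);  n_a = np.array([30, 25, 40, 22])
k_b = np.array([5, 10, 11, 3]);  n_b = np.array([18, 28, 35, 15])

theta_a, theta_b = anscombe_theta(k_a, n_a), anscombe_theta(k_b, n_b)
mmd_obs = mmd(theta_a, n_a, theta_b, n_b)

# Parametric bootstrap: theta ~ Normal(theta_hat, 1/sqrt(N + 1/2)), per trait and sample.
boot = np.array([
    mmd(rng.normal(theta_a, 1 / np.sqrt(n_a + 0.5)), n_a,
        rng.normal(theta_b, 1 / np.sqrt(n_b + 0.5)), n_b)
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MMD = {mmd_obs:.3f}, parametric bootstrap 95% interval ({lo:.3f}, {hi:.3f})")
```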


2020, Vol 36 (9), pp. 2907-2908
Author(s): Stilianos Louca

Abstract. Motivation: The birth-death (BD) model constitutes the theoretical backbone of most phylogenetic tools for reconstructing speciation/extinction dynamics over time. Performing simulations of reconstructed trees (linking extant taxa) under the BD model in backward time, conditioned on the number of species sampled at present day and, in some cases, on a specific time interval since the most recent common ancestor (MRCA), is needed for assessing the performance of reconstruction tools, for parametric bootstrapping and for detecting data outliers. The few simulation tools that exist scale poorly to large modern phylogenies, which can comprise thousands or even millions of tips (and rising). Results: Here I present efficient software for simulating reconstructed phylogenies under time-dependent BD models in backward time, conditioned on the number of sampled species and (optionally) on the time since the MRCA. On large trees, my software is 1000–10,000 times faster than existing tools. Availability and implementation: The presented software is incorporated into the R package ‘castor’, which is available on The Comprehensive R Archive Network (CRAN). Supplementary information: Supplementary data are available at Bioinformatics online.
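Rather than guess at the castor function names, here is a deliberately naive Python illustration of the computational problem the package addresses: conditioning a constant-rate birth-death simulation on a fixed number of extant species by forward (Gillespie) simulation plus rejection. The rejection rate grows rapidly with the target species count, which is why backward-time simulators conditioned directly on the sampled tip number are needed for large trees. All rates and targets below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_bd_extant_count(birth, death, t_max):
    """Gillespie simulation of the lineage count under a constant-rate birth-death process."""
    n, t = 1, 0.0
    while n > 0:
        rate = n * (birth + death)
        t += rng.exponential(1 / rate)
        if t >= t_max:
            return n
        n += 1 if rng.random() < birth / (birth + death) else -1
    return 0

def rejection_sample(birth, death, t_max, n_target, max_tries=100000):
    """Naive conditioning on the number of extant species via rejection sampling."""
    for tries in range(1, max_tries + 1):
        if forward_bd_extant_count(birth, death, t_max) == n_target:
            return tries                      # number of attempts needed
    return None

# Even for a modest target of 50 extant species many forward runs are rejected;
# for thousands or millions of tips this approach is hopeless.
print("attempts needed:", rejection_sample(birth=1.0, death=0.5, t_max=5.0, n_target=50))
```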


2020
Author(s): Sean Gosselin, Matthew S. Fullmer, Yutian Feng, Johann Peter Gogarten

Abstract Whole-genome comparisons based on Average Nucleotide Identity (ANI) and the Genome-to-Genome Distance Calculator have risen to prominence for rapidly classifying taxa from whole-genome sequences. Some implementations have even been proposed as a new standard in species classification and have become a common technique in papers describing newly sequenced genomes. However, attempts to apply whole-genome divergence data to the delineation of higher taxonomic units, and to phylogenetic inference, have had difficulty matching the results produced by more complex phylogenetic methods. We present a novel method for generating reliable and statistically supported phylogenies using established ANI techniques. For the test cases to which we applied the developed approach, we obtained accurate results up to at least the family level. The developed method uses non-parametric bootstrapping to gauge the reliability of inferred groups. This method offers the opportunity to make use of whole-genome comparison data that are already being generated to quickly produce accurate phylogenies. Additionally, the developed ANI methodology can assist classification of higher-order taxonomic groups.

Significance Statement: The average nucleotide identity (ANI) measure and its iterations have come to dominate in-silico species delimitation in the past decade. Yet the problem of gene content has not been fully resolved, and attempts to address it involve two separate metrics, which can make interpretation difficult. We provide a new single ANI-based metric created from the combination of genomic content and genomic identity measures. Our results show that this method can handle comparisons of genomes with divergent content or identity. Additionally, the metric can be used to create distance-based phylogenetic trees that are comparable to other tree-building methods, while also providing a tentative metric for categorizing organisms into higher-level taxonomic classifications.
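The preprint's single combined metric is not reproduced in the abstract, so the sketch below only illustrates the general pattern of attaching non-parametric bootstrap support to a distance-based tree built from per-gene identity values: resample genes with replacement, rebuild an average-linkage (UPGMA-style) tree, and count how often each original clade reappears. The gene-by-genome-pair identity table is entirely synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import squareform

rng = np.random.default_rng(9)
genomes = ["A", "B", "C", "D"]
n_genes = 400

# Synthetic per-gene identities for each genome pair; in practice these would
# come from the gene-by-gene comparisons underlying ANI.
base_ani = {("A", "B"): 0.96, ("A", "C"): 0.90, ("A", "D"): 0.82,
            ("B", "C"): 0.90, ("B", "D"): 0.82, ("C", "D"): 0.83}
gene_identity = {pair: np.clip(rng.normal(mu, 0.03, n_genes), 0, 1)
                 for pair, mu in base_ani.items()}

def upgma_from_gene_sample(idx):
    """Average the resampled gene identities into distances and cluster (average linkage)."""
    dist = np.zeros((len(genomes), len(genomes)))
    for (g1, g2), ident in gene_identity.items():
        i, j = genomes.index(g1), genomes.index(g2)
        dist[i, j] = dist[j, i] = 1 - ident[idx].mean()
    return linkage(squareform(dist), method="average")

def clades(tree_linkage):
    """Set of leaf-name frozensets, one per internal node of the dendrogram."""
    out = set()
    def walk(node):
        leaves = frozenset(genomes[i] for i in node.pre_order())
        if len(leaves) > 1:
            out.add(leaves)
        if not node.is_leaf():
            walk(node.get_left()); walk(node.get_right())
    walk(to_tree(tree_linkage))
    return out

observed = clades(upgma_from_gene_sample(np.arange(n_genes)))

# Non-parametric bootstrap: resample genes with replacement and rebuild the tree.
support = {c: 0 for c in observed}
n_boot = 200
for _ in range(n_boot):
    rep = clades(upgma_from_gene_sample(rng.integers(0, n_genes, n_genes)))
    for c in observed:
        support[c] += c in rep

for c in sorted(observed, key=len):
    print(sorted(c), f"{100 * support[c] / n_boot:.0f}% support")
```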

