length bias
Recently Published Documents

TOTAL DOCUMENTS: 73 (five years: 10)
H-INDEX: 16 (five years: 2)

Children ◽  
2022 ◽  
Vol 9 (1) ◽  
pp. 39
Author(s):  
Anke Hua ◽  
Jingyuan Bai ◽  
Yong Fan ◽  
Jian Wang

The study aimed to (1) investigate the reliability and usefulness of a proposed angular analysis during a modified sit-and-reach (MSR) test, and (2) compare the proposed MSR angular analysis with the commonly used MSR distance to assess the influence of anthropometric characteristics in preschoolers. A total of 194 preschoolers participated in the study. Before testing, anthropometric characteristics were collected. Each participant performed the MSR test twice. The MSR distance score was measured from the starting point to the reaching point, while the MSR angle score was calculated from the approximate hip flexion angle. Both the relative and absolute reliability of the angular analysis during an MSR test in preschoolers were good (ICC ranging from 0.82 to 0.91, CV% ranging from 8.21 to 9.40). The angular analysis demonstrated good usefulness, with a typical error lower than the smallest worthwhile change in the 3- and 5-year-old groups. The MSR angle scores eliminated the concern about the influence of anthropometric characteristics, whereas the MSR distance was weakly correlated with anthropometric characteristics (i.e., sitting height and arm length). In conclusion, the angular analysis of the MSR test is reliable and appears to eliminate the concern regarding limb length bias.
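The reliability statistics in the abstract above can be reproduced from two trials per child. A minimal sketch with made-up angle scores (not the study's data): the typical error is the standard deviation of inter-trial differences divided by sqrt(2), the smallest worthwhile change is 0.2 times the between-subject standard deviation, and "good usefulness" means the typical error falls below the smallest worthwhile change.

```python
import math

def typical_error(trial1, trial2):
    # Typical error of measurement: SD of the inter-trial
    # differences divided by sqrt(2).
    diffs = [a - b for a, b in zip(trial1, trial2)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return sd_d / math.sqrt(2)

def smallest_worthwhile_change(scores, factor=0.2):
    # SWC: 0.2 x between-subject SD (Cohen's small effect size).
    n = len(scores)
    mean_s = sum(scores) / n
    sd = math.sqrt(sum((s - mean_s) ** 2 for s in scores) / (n - 1))
    return factor * sd

# Hypothetical MSR angle scores (degrees) for two trials:
t1 = [95.0, 102.0, 88.0, 110.0, 99.0, 91.0, 105.0, 97.0]
t2 = [95.8, 101.2, 88.9, 109.0, 99.8, 90.4, 104.2, 97.6]

te = typical_error(t1, t2)
swc = smallest_worthwhile_change([(a + b) / 2 for a, b in zip(t1, t2)])
print(f"TE = {te:.2f} deg, SWC = {swc:.2f} deg, useful: {te < swc}")
```

With these illustrative numbers the typical error is well below the smallest worthwhile change, the pattern the abstract reports for the 3- and 5-year-old groups.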


2021 ◽  
Vol 2 ◽  
Author(s):  
Lawrence M. Murray ◽  
Sumeetpal S. Singh ◽  
Anthony Lee

Abstract A Monte Carlo algorithm simulates some prescribed number of samples, taking a random amount of real time to complete the necessary computations. This work considers the converse: to impose a real-time budget on the computation, so that the number of samples simulated is random. To complicate matters, the real time taken for each simulation may depend on the sample produced, so that the samples themselves are not independent of their number, and a length bias with respect to compute time arises. This is especially problematic when a Markov chain Monte Carlo (MCMC) algorithm is used and the final state of the Markov chain, rather than an average over all states, is required, as is the case in parallel tempering implementations of MCMC. The length bias does not diminish with the compute budget in this case. It also occurs in sequential Monte Carlo (SMC) algorithms, which are the focus of this paper. We propose an anytime framework to address the concern, using a continuous-time Markov jump process to study the progress of the computation in real time. We first show that for any MCMC algorithm, the length bias of the final state's distribution due to the imposed real-time computing budget can be eliminated by using a multiple chain construction. The utility of this construction is then demonstrated on a large-scale SMC $ {}^2 $ implementation, using four billion particles distributed across a cluster of 128 graphics processing units on the Amazon EC2 service. The anytime framework imposes a real-time budget on the MCMC move steps within the SMC $ {}^2 $ algorithm, ensuring that all processors are simultaneously ready for the resampling step, demonstrably reducing idleness due to waiting times and providing substantial control over the total compute budget.
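The length bias the abstract describes can be seen in a toy simulation, a sketch under assumed numbers rather than anything from the paper: a two-state chain targets the uniform distribution, but moves out of one state take three times as long to compute, so the state in progress when a real-time budget expires over-represents the slow state.

```python
import random

random.seed(1)

# Hypothetical two-state MCMC: the target is uniform on {0, 1}, but
# simulating a move out of state 1 costs 3 time units versus 1 for
# state 0. The state being computed when a real-time budget expires
# is length-biased toward the slow state.

COST = {0: 1.0, 1: 3.0}

def step(state):
    # A symmetric kernel whose stationary law is uniform on {0, 1}.
    return random.randint(0, 1)

def run_until(budget):
    # Return the state whose computation straddles the budget.
    state, clock = 0, 0.0
    while True:
        clock += COST[state]      # time spent producing the next move
        if clock >= budget:
            return state          # interrupted mid-computation
        state = step(state)

hits = sum(run_until(budget=100.0) for _ in range(20000))
frac_slow = hits / 20000
# Length bias: state 1 is observed in proportion to its compute time,
# roughly 3 / (1 + 3) = 0.75 instead of the target 0.5.
print(f"fraction in slow state at cutoff: {frac_slow:.3f}")
```

Roughly speaking, the paper's multiple chain construction exploits the fact that only the chain currently being computed at the deadline carries this bias, so interleaving several chains yields unbiased states from the others.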


BMC Genomics ◽  
2019 ◽  
Vol 20 (S12) ◽  
Author(s):  
Tong Liu ◽  
Zheng Wang

Abstract Background The genome architecture mapping (GAM) technique can capture genome-wide chromatin interactions. However, besides the known systematic biases in raw GAM data, we have found a new type of systematic bias. It is necessary to develop and evaluate effective normalization methods to remove all systematic biases from raw GAM data. Results We have detected a new type of systematic bias, the fragment length bias, in GAM data. It is significantly different from the bias of window detection frequency previously described in the paper introducing the GAM method, but is similar to the bias of distances between restriction sites in raw Hi-C data. We have found that the normalization method used in the GAM paper (a normalized variant of the linkage disequilibrium) cannot effectively eliminate the new fragment length bias at 1 Mb resolution (it performs slightly better at 30 kb resolution). We have developed an R package named normGAM for eliminating the new fragment length bias together with the three other biases in raw GAM data, which relate to window detection frequency, mappability, and GC content. Five normalization methods are implemented in the package: Knight-Ruiz 2-norm (KR2, newly designed by us), normalized linkage disequilibrium (NLD), vanilla coverage (VC), sequential component normalization (SCN), and iterative correction and eigenvector decomposition (ICE). Conclusions Based on our evaluations, the five normalization methods can eliminate the four biases in raw GAM data, with VC and KR2 performing better than the others. We have observed that KR2-normalized GAM data have a higher correlation with KR-normalized Hi-C data on the same cell samples, indicating that the KR-related methods are better than the others at keeping the GAM and Hi-C experiments consistent. Compared with the raw GAM data, the normalized GAM data are more consistent with the normalized distances from fluorescence in situ hybridization (FISH) experiments. The source code of normGAM can be freely downloaded from http://dna.cs.miami.edu/normGAM/.
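A minimal sketch of vanilla coverage (VC), one of the five normalization methods the abstract lists, applied to a tiny hypothetical contact matrix (not real GAM data): each entry of the symmetric matrix is divided by the product of its row and column sums, which cancels multiplicative per-window biases such as coverage, GC content, and fragment length.

```python
def vc_normalize(m):
    # Vanilla coverage: divide entry (i, j) by row_sum(i) * col_sum(j).
    # For a symmetric matrix, row and column sums coincide.
    n = len(m)
    sums = [sum(row) for row in m]
    return [[m[i][j] / (sums[i] * sums[j]) if sums[i] and sums[j] else 0.0
             for j in range(n)]
            for i in range(n)]

# Illustrative 3x3 symmetric contact matrix (made-up counts):
raw = [
    [10.0,  4.0, 2.0],
    [ 4.0, 20.0, 6.0],
    [ 2.0,  6.0, 8.0],
]
norm = vc_normalize(raw)
for row in norm:
    print(["%.5f" % v for v in row])
```

Dividing by both marginals preserves symmetry while removing any bias that acts as a per-window multiplicative factor; the KR-family methods go further and iterate to exactly equalized marginals.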


PLoS Biology ◽  
2019 ◽  
Vol 17 (11) ◽  
pp. e3000481 ◽  
Author(s):  
Shir Mandelboum ◽  
Zohar Manber ◽  
Orna Elroy-Stein ◽  
Ran Elkon

2019 ◽  
Vol 29 (2) ◽  
pp. 374-395 ◽  
Author(s):  
Linda Abrahamsson ◽  
Gabriel Isheden ◽  
Kamila Czene ◽  
Keith Humphreys

Comparisons of survival times between screen-detected and symptomatically detected breast cancer cases are subject to lead time and length biases. Whilst the existence of these biases is well known, the procedures for correcting them are not always clear, nor is their interpretation. In this paper we derive, based on a recently developed continuous tumour growth model, conditional lead time distributions using information on each individual's tumour size, screening history and percent mammographic density. We show how these distributions can be used to obtain an individual-based (conditional) procedure for correcting survival comparisons. In stratified analyses, our correction procedure works markedly better than a previously used unconditional lead time correction based on multi-state Markov modelling. In a study of postmenopausal invasive breast cancer patients, we estimate that, in large (>12 mm) tumours, the multi-state Markov model correction over-corrects five-year survival by 2–3 percentage points. The traditional view of length bias is that tumours present in a woman's breast for a long time, because they are slow-growing, have a greater chance of being screen-detected. This gives screen-detected cases a survival advantage that is not due to the earlier detection itself. We use simulated data to share the new insight that not only the tumour growth rate but also the symptomatic tumour size affects the sampling procedure, and thus contributes to the length bias through any link between tumour size and survival. We explain how this bears on the interpretation of observable breast cancer-specific survival curves. We also propose an approach for correcting survival comparisons for the length bias.
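The traditional length bias described above can be illustrated with a deliberately simple simulation (not the authors' continuous tumour growth model): each tumour spends a type-dependent sojourn time in the screen-detectable preclinical phase, and a single screen at a random time catches a tumour with probability proportional to that sojourn time, so screen-detected cases over-represent slow growers.

```python
import random

random.seed(7)

def simulate(n=100000, horizon=20.0):
    # Hypothetical numbers: half of all tumours are slow-growing and
    # spend 6 years in the detectable preclinical phase; fast tumours
    # spend 2 years. One screen occurs at a uniform random time.
    screen_caught_slow = total_caught = 0
    for _ in range(n):
        slow = random.random() < 0.5           # incidence is 50/50
        sojourn = 6.0 if slow else 2.0         # years detectable
        onset = random.uniform(0.0, horizon)   # preclinical phase begins
        screen = random.uniform(0.0, horizon)  # one screening round
        if onset <= screen < onset + sojourn:  # screen hits the window
            total_caught += 1
            screen_caught_slow += slow
    return screen_caught_slow / total_caught

frac_slow = simulate()
# Although slow tumours are half of incidence, they are close to
# 6 / (6 + 2) = 75% of screen-detected cases (a little less here
# because detection windows are truncated at the horizon).
print(f"fraction of screen-detected cases that are slow-growing: {frac_slow:.3f}")
```

If slow growth is associated with better survival, this sampling alone gives the screen-detected group a survival advantage; the abstract's point is that symptomatic tumour size enters the same sampling mechanism and must be accounted for as well.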

