Inevitability and containment of replication errors for eukaryotic genome lengths spanning megabase to gigabase

2016, Vol 113 (39), pp. E5765-E5774
Author(s): Mohammed Al Mamun, Luca Albergante, Alberto Moreno, James T. Carrington, J. Julian Blow, ...

The replication of DNA is initiated at particular sites on the genome called replication origins (ROs). Understanding the constraints that regulate the distribution of ROs across different organisms is fundamental for quantifying the degree of replication errors and their downstream consequences. Using a simple probabilistic model, we generate a set of predictions on the extreme sensitivity of error rates to the distribution of ROs, and how this distribution must therefore be tuned for genomes of vastly different sizes. As genome size changes from megabases to gigabases, we predict that regularity of RO spacing is lost, that large gaps between ROs dominate error rates but are heavily constrained by the mean stalling distance of replication forks, and that, for genomes spanning ∼100 megabases to ∼10 gigabases, errors become increasingly inevitable but their number remains very small (three or fewer). Our theory predicts that the number of errors becomes significantly higher for genome sizes greater than ∼10 gigabases. We test these predictions against datasets in yeast, Arabidopsis, Drosophila, and human, and also through direct experimentation on two different human cell lines. Agreement of theoretical predictions with experiment and datasets is found in all cases, resulting in a picture of great simplicity, whereby the density and positioning of ROs explain the replication error rates for the entire range of eukaryotes for which data are available. The theory highlights three domains of error rates: negligible (yeast), tolerable (metazoan), and high (some plants), with the human genome at the extreme end of the middle domain.
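As a rough illustration of the double-fork-stall picture this abstract describes, the Python sketch below places origins uniformly at random and counts inter-origin gaps that neither converging fork manages to cross before stalling. The genome sizes, origin counts, exponential stall-distance model, and the 600 kb mean stalling distance are illustrative assumptions, not the authors' fitted values.

```python
# Monte Carlo sketch (assumed model, not the authors' exact one): a gap
# between adjacent origins is an error if both converging forks stall
# before meeting somewhere inside it.
import numpy as np

rng = np.random.default_rng(0)

def mean_errors(genome_bp, n_origins, mean_stall_bp, n_trials=200):
    errors = np.zeros(n_trials)
    for t in range(n_trials):
        origins = np.sort(rng.uniform(0, genome_bp, n_origins))
        gaps = np.diff(origins)
        left = rng.exponential(mean_stall_bp, gaps.size)    # distance before left fork stalls
        right = rng.exponential(mean_stall_bp, gaps.size)   # distance before right fork stalls
        errors[t] = np.sum(left + right < gaps)             # both stalled short of meeting
    return errors.mean()

for G, n in [(12e6, 400), (3.2e9, 50_000)]:  # roughly yeast-scale vs human-scale
    print(f"genome {G:.0e} bp, {n} origins: ~{mean_errors(G, n, 600e3):.2f} errors/replication")
```

Under these toy parameters the expected error count stays small for the small genome and grows with genome size, mirroring the qualitative trend the theory predicts.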

2020
Author(s): Jeff Miller

Contrary to the warning of Miller (1988), Rousselet and Wilcox (2020) argued that it is better to summarize each participant’s single-trial reaction times (RTs) in a given condition with the median than with the mean when comparing the central tendencies of RT distributions across experimental conditions. They acknowledged that median RTs can produce inflated Type I error rates when conditions differ in the number of trials tested, consistent with Miller’s warning, but they showed that the bias responsible for this error rate inflation could be eliminated with a bootstrap bias correction technique. The present simulations extend their analysis by examining the power of bias-corrected medians to detect true experimental effects and by comparing this power with the power of analyses using means and regular medians. Unfortunately, although bias-corrected medians solve the problem of inflated Type I error rates, their power is lower than that of means or regular medians in many realistic situations. In addition, even when conditions do not differ in the number of trials tested, the power of tests (e.g., t-tests) is generally lower using medians rather than means as the summary measures. Thus, the present simulations demonstrate that summary means will often provide the most powerful test for differences between conditions, and they show what aspects of the RT distributions determine the size of the power advantage for means.
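For concreteness, here is a minimal Python sketch of the bootstrap bias correction for a single participant's median RT, assuming an ex-Gaussian-like RT distribution; the distribution parameters and trial count are invented for illustration, not taken from the simulations above.

```python
# Bootstrap bias correction of a sample median: estimate the bias as
# mean(bootstrap medians) - sample median, then subtract it.
import numpy as np

rng = np.random.default_rng(1)

def bias_corrected_median(rts, n_boot=1000):
    med = np.median(rts)
    boot = np.median(rng.choice(rts, size=(n_boot, rts.size)), axis=1)
    return 2 * med - boot.mean()   # med - (boot.mean() - med)

# Example: with few trials, the raw median of a right-skewed RT sample is biased upward.
rts = 300 + rng.normal(0, 20, 10) + rng.exponential(100, 10)
print("raw median:", np.median(rts), " bias-corrected:", bias_corrected_median(rts))
```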


2005, Vol 23 (3), pp. 827-830
Author(s): G. W. Prölss

Abstract. A prominent peak in the electron temperature of the topside ionosphere is observed beneath the magnetospheric cleft. The present study uses DE-2 data obtained in the Northern Winter Hemisphere to investigate this phenomenon. First, the dependence of the location and magnitude of the temperature peak on the magnetic activity is determined. Next, using a superposed epoch analysis, the mean latitudinal profile of the temperature enhancement is derived. The results of the present study are compared primarily with those obtained by Titheridge (1976), but also with more recent observations and theoretical predictions.
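A superposed epoch analysis of the kind described reduces to re-centring each satellite pass on a key point and averaging. The Python sketch below demonstrates this on synthetic temperature profiles; the data, the Gaussian peak model, and the choice of key index are hypothetical stand-ins for the DE-2 passes.

```python
# Superposed epoch analysis sketch: align each pass on its key index
# (e.g., the cleft crossing) and average the aligned segments.
import numpy as np

def superposed_epoch(profiles, key_idx, half_width):
    segments = []
    for prof, k in zip(profiles, key_idx):
        if k - half_width >= 0 and k + half_width < prof.size:
            segments.append(prof[k - half_width : k + half_width + 1])
    return np.mean(segments, axis=0)

# Toy usage: 50 synthetic passes, each with a temperature peak at a random location.
rng = np.random.default_rng(2)
lat = np.arange(200)
passes, keys = [], []
for _ in range(50):
    k = rng.integers(60, 140)
    passes.append(3000 + 2000 * np.exp(-((lat - k) / 5.0) ** 2) + rng.normal(0, 200, lat.size))
    keys.append(k)

mean_profile = superposed_epoch(passes, keys, half_width=30)
print(mean_profile.argmax(), mean_profile.max())  # peak recovered at the epoch centre
```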


1990, Vol 112 (1), pp. 114-120
Author(s): H. Ounis, G. Ahmadi

The equation of motion of a small spherical rigid particle in a turbulent flow field, including the Stokes drag, the Basset force, and the virtual mass effects, is considered. For an isotropic field, the lift force and the velocity gradient effects are neglected. Using the spectral method, responses of the resulting constant-coefficient stochastic integro-differential equation are studied. Analytical expressions relating the Lagrangian energy spectra of particle velocity to that of the fluid are developed, and the results are used to evaluate various response statistics. Variations of the mean-square particle velocity and particle diffusivity with size, density ratio, and response time are studied. The theoretical predictions are compared with the digital simulation results and the available data, and good agreement is observed.
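If the equation of motion is simplified to Stokes drag alone (dropping the Basset and virtual-mass terms considered above), the particle velocity spectrum follows from the fluid spectrum through a first-order transfer function, E_p(ω) = E_f(ω)/(1 + ω²τ²), and integrating E_p gives the mean-square particle velocity. The toy fluid spectrum and response times in this sketch are assumptions.

```python
# Spectral-method sketch, Stokes drag only: filter the fluid spectrum with
# a first-order response and integrate to get the mean-square velocity.
import numpy as np

def mean_square_velocity(tau, w, E_f):
    E_p = E_f / (1.0 + (w * tau) ** 2)   # first-order transfer function
    return (E_p * (w[1] - w[0])).sum()   # rectangle-rule integral over frequency

w = np.linspace(0.0, 200.0, 20_001)      # angular frequency grid (rad/s)
E_f = np.exp(-w / 20.0)                  # toy Lagrangian fluid energy spectrum
for tau in [0.001, 0.01, 0.1, 1.0]:      # particle response times (s)
    print(f"tau = {tau:g} s: <v'^2> = {mean_square_velocity(tau, w, E_f):.3f}")
```

Slower-responding (larger τ) particles filter out high-frequency fluid motion, so their mean-square velocity falls, consistent with the size and response-time trends studied above.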


SIMULATION
2020, Vol 97 (1), pp. 33-43
Author(s): Jack P C Kleijnen, Wen Shi

Because computers (except for parallel computers) generate simulation outputs sequentially, we recommend sequential probability ratio tests (SPRTs) for the statistical analysis of these outputs. However, until now simulation analysts have ignored SPRTs. To change this situation, we review SPRTs for the simplest case; namely, the case of choosing between two hypothesized values for the mean simulation output. For this case, the classic SPRT of Wald (Wald A. Sequential tests of statistical hypotheses. Ann Math Stat 1945; 16: 117–186) allows general types of distribution, including normal distributions with known variances. A modification permits unknown variances that are estimated. Hall (Hall WJ. Some sequential analogs of Stein’s two-stage test. Biometrika 1962; 49: 367–378) developed an SPRT that assumes normal distributions with unknown variances estimated from a pilot sample. A modification uses a fully sequential variance estimator. In this paper, we quantify the performance of these SPRTs in several Monte Carlo experiments. In experiment #1, simulation outputs are normal. Whereas Wald’s SPRT with estimated variance gives error rates that are too high, Hall’s original and modified SPRTs are “conservative”; that is, the actual error rates are smaller than the prespecified (nominal) rates. Furthermore, our experiments show that the most efficient SPRT is Hall’s modified SPRT. In experiment #2, we estimate the robustness of these SPRTs for non-normal output. For both experiments, we provide details on their design and analysis; these details may also be useful for simulation experiments in general.
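For reference, here is a minimal Python sketch of Wald's classic SPRT for the case the paper reviews: choosing between two hypothesized means of a normal simulation output with known variance. The error probabilities, hypothesized means, and the output stream are illustrative; the thresholds use Wald's standard overshoot-ignoring approximations.

```python
# Wald SPRT for H0: mean = mu0 vs H1: mean = mu1, normal output, known sigma.
import math
import random

def wald_sprt(stream, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # log-likelihood-ratio increment for a normal observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", n

random.seed(3)
outputs = (random.gauss(0.5, 1.0) for _ in range(10_000))  # true mean is 0.5
print(wald_sprt(outputs, mu0=0.0, mu1=0.5, sigma=1.0))
```

Because the test stops as soon as the accumulated evidence crosses a threshold, the expected sample size is typically far below that of a fixed-sample test with the same nominal error rates.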


2020, Vol 8 (3), pp. 232596712091009
Author(s): Jonathan Bourget-Murray, Ariana Frederick, Lisa Murphy, Jacqui French, Shane Barwood, ...

Background: The American Shoulder and Elbow Surgeons (ASES) score is a patient-reported outcome (PRO) questionnaire developed to facilitate communication among international investigators and to allow comparison of outcomes for patients with shoulder disabilities. Although this PRO measure has been deemed easy to read and understand, patients may make mistakes when completing the questionnaire. Purpose: To evaluate the frequency of potential mistakes made by patients completing the ASES score. Study Design: Cross-sectional study; Level of evidence, 3. Methods: A prospective cross-sectional study was performed on 600 ASES questionnaires completed by patients at their first visit to 1 of 2 clinic locations (Australian vs Canadian site). Two categories of potential errors were predefined, and differences in error rates were then compared by demographics (age, sex, and location). To determine whether these methods were reliable, an independent third reviewer evaluated a subset of questionnaires separately. Interrater reliability was evaluated with the Cohen kappa. Results: The mean patient age was 49.9 years, and 63% of patients were male. The Cohen kappa was high for both evaluation methods used, at 0.831 and 0.918. Overall, 17.9% of patients made at least 1 potential mistake, and an additional 10.4% of patients corrected their own mistakes. No differences in total error rate were found based on baseline demographics; Canadians and Australians had similar rates of error. Conclusion: To ensure the accuracy of the ASES score, completed questionnaires should be double-checked, as potential mistakes are made too frequently. This attentiveness will ensure that the ASES score remains a valid, reliable, and responsive tool for further shoulder research.
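The agreement statistic used above, Cohen's kappa for two raters, is simple to compute directly; the sketch below uses made-up binary flags (1 = reviewer judged the questionnaire to contain a potential error).

```python
# Cohen's kappa: chance-corrected agreement between two raters.
import numpy as np

def cohen_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                                        # observed agreement
    cats = np.union1d(r1, r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

rater_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # hypothetical error flags
rater_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]
print(f"kappa = {cohen_kappa(rater_a, rater_b):.3f}")
```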


2015, Vol 29 (06n07), pp. 1540020
Author(s): Dong Myung Lee, Tae Wan Kim, Yun-Hae Kim

In this paper, we propose a localization simulator based on the random walk/waypoint mobility model, together with a hybrid location-compensation algorithm using the Mean Shift/Kalman filter (MSKF), to enhance the precision of the estimated locations of mobile modules. Analysis of our experimental results shows that, across the three scenarios, the MSKF compensates for the two error rates, the average error rate per estimated distance moved by the mobile node (Err_Rate_DV) and the error rate per estimated trace value of the mobile node (Err_Rate_TV), better than the Mean Shift or Kalman filter alone, by up to 29% in a random mobility environment.
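The two metrics named here are easy to state in code once their definitions are fixed. The sketch below assumes plausible definitions, total position error normalized by the estimated distance moved for Err_Rate_DV and mean error per trace point for Err_Rate_TV, which may differ from the paper's exact formulas.

```python
# Assumed (not the paper's verified) definitions of the two error metrics.
import numpy as np

def err_rate_dv(true_xy, est_xy):
    err = np.linalg.norm(true_xy - est_xy, axis=1).sum()          # total position error
    dist = np.linalg.norm(np.diff(est_xy, axis=0), axis=1).sum()  # estimated distance moved
    return err / dist

def err_rate_tv(true_xy, est_xy):
    return np.linalg.norm(true_xy - est_xy, axis=1).mean()        # mean error per trace point

rng = np.random.default_rng(4)
true_xy = np.cumsum(rng.normal(0, 1, (100, 2)), axis=0)   # random-walk ground truth
est_xy = true_xy + rng.normal(0, 0.3, true_xy.shape)      # noisy location estimate
print(err_rate_dv(true_xy, est_xy), err_rate_tv(true_xy, est_xy))
```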


2017, Vol 42 (3), pp. 251-263
Author(s): Irina Grabovsky, Howard Wainer

In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard-setting decisions. The Cut-Score Operating Function shows the relation between the chosen cut-score and the consequent error rate. It allows error rates to be defined by multiple loss functions and shows the behavior of each. One strength of the Cut-Score Operating Function is that it shows how robust error rates are to the choice of cut-score and identifies the regions of extreme sensitivity to that choice.
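A minimal Python sketch of a Cut-Score Operating Function for a two-group (master/non-master) normal model: it maps each candidate cut-score to an expected loss combining false passes and false fails. The score distributions, base rate, and loss weights are invented for illustration.

```python
# Cut-Score Operating Function sketch under an assumed two-normal score model.
import numpy as np
from scipy.stats import norm

def cut_score_operating(cuts, w_false_pass=1.0, w_false_fail=1.0,
                        nonmaster=(60, 8), master=(75, 8), p_master=0.5):
    mu0, s0 = nonmaster
    mu1, s1 = master
    false_pass = (1 - p_master) * norm.sf(cuts, mu0, s0)   # non-masters scoring above the cut
    false_fail = p_master * norm.cdf(cuts, mu1, s1)        # masters scoring below the cut
    return w_false_pass * false_pass + w_false_fail * false_fail

cuts = np.linspace(50, 90, 401)
loss = cut_score_operating(cuts)
print("loss-minimizing cut-score:", cuts[loss.argmin()])
```

Plotting `loss` against `cuts` makes the robustness point visually: the curve is flat near its minimum but steep in the regions of extreme sensitivity, and reweighting the two loss terms shifts the optimal cut.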


2018, Vol 84 (3)
Author(s): Hongzhe Zhou, Eric G. Blackman, Luke Chamandy

Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.
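The averaging-scale trade-off can be illustrated numerically: averaging a field over too small a window leaves fluctuation noise in the "mean," while too large a window smears the mean's own variation, so an intermediate window minimizes the discrepancy. The 1D field below is a toy stand-in, not a dynamo solution.

```python
# Toy illustration of an optimal averaging scale: slowly varying mean plus noise.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 100, 10_000)
true_mean = np.sin(2 * np.pi * x / 50)                 # mesoscale "mean" variation
field = true_mean + 0.5 * rng.normal(size=x.size)      # mean plus small-scale fluctuations

def window_average(f, width):
    kernel = np.ones(width) / width
    return np.convolve(f, kernel, mode="same")         # running spatial average

for width in [11, 101, 501, 2001]:
    err = np.sqrt(np.mean((window_average(field, width) - true_mean) ** 2))
    print(f"window = {width:5d} samples: rms precision error = {err:.3f}")
```

The rms error falls and then rises again as the window grows, giving an intermediate optimum analogous to the optimal averaging scale discussed above.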

