Connections between translation, transcription and replication error-rates

Biochimie ◽  
1991 ◽  
Vol 73 (12) ◽  
pp. 1517-1523 ◽  
Author(s):  
J. Ninio
2016 ◽  
Vol 113 (39) ◽  
pp. E5765-E5774 ◽  
Author(s):  
Mohammed Al Mamun ◽  
Luca Albergante ◽  
Alberto Moreno ◽  
James T. Carrington ◽  
J. Julian Blow ◽  
...  

The replication of DNA is initiated at particular sites on the genome called replication origins (ROs). Understanding the constraints that regulate the distribution of ROs across different organisms is fundamental for quantifying the degree of replication errors and their downstream consequences. Using a simple probabilistic model, we generate a set of predictions on the extreme sensitivity of error rates to the distribution of ROs, and how this distribution must therefore be tuned for genomes of vastly different sizes. As genome size changes from megabases to gigabases, we predict that regularity of RO spacing is lost, that large gaps between ROs dominate error rates but are heavily constrained by the mean stalling distance of replication forks, and that, for genomes spanning ∼100 megabases to ∼10 gigabases, errors become increasingly inevitable but their number remains very small (three or fewer). Our theory predicts that the number of errors becomes significantly higher for genome sizes greater than ∼10 gigabases. We test these predictions against datasets in yeast, Arabidopsis, Drosophila, and human, and also through direct experimentation on two different human cell lines. Agreement of theoretical predictions with experiment and datasets is found in all cases, resulting in a picture of great simplicity, whereby the density and positioning of ROs explain the replication error rates for the entire range of eukaryotes for which data are available. The theory highlights three domains of error rates: negligible (yeast), tolerable (metazoan), and high (some plants), with the human genome at the extreme end of the middle domain.
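The abstract does not reproduce the model itself; as a rough sketch of the kind of calculation it describes, the following Monte Carlo simulation (all parameter values hypothetical, chosen purely for illustration) places ROs uniformly at random and counts a gap as failed when the two converging forks both stall before meeting:

```python
import random

def expected_failures(genome_size, n_origins, mean_stall, trials, rng=None):
    """Monte Carlo estimate of unreplicated gaps per genome.

    Origins are placed uniformly at random; each inter-origin gap is
    replicated by two converging forks, and each fork travels an
    Exponential(mean_stall) distance before stalling irreversibly.
    A gap of size g goes unreplicated if the two fork travel distances
    sum to less than g. Parameters are illustrative, not the paper's.
    """
    rng = rng or random.Random(0)
    failures = 0
    for _ in range(trials):
        origins = sorted(rng.uniform(0, genome_size) for _ in range(n_origins))
        for a, b in zip(origins, origins[1:]):
            gap = b - a
            if rng.expovariate(1 / mean_stall) + rng.expovariate(1 / mean_stall) < gap:
                failures += 1
    return failures / trials

# A small, origin-dense genome (yeast-like scale) versus a large genome
# with wider mean spacing: the larger genome accrues far more failed gaps.
small = expected_failures(12e6, 400, mean_stall=3e5, trials=400)
large = expected_failures(3e9, 30_000, mean_stall=3e5, trials=40)
```

Under this toy model the expected failures per genome are dominated by the right tail of the gap distribution, which is why the abstract's mean fork-stalling distance acts as the binding constraint on large gaps.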


2013 ◽  
Vol 94 (4) ◽  
pp. 817-830 ◽  
Author(s):  
Armando Arias ◽  
Ana Isabel de Ávila ◽  
Marta Sanz-Ramos ◽  
Rubén Agudo ◽  
Cristina Escarmís ◽  
...  

Low fidelity replication and the absence of error-repair activities in RNA viruses result in complex and adaptable ensembles of related genomes in the viral population, termed quasispecies, with important implications for natural infections. Theoretical predictions suggested that elevated replication error rates in RNA viruses might be near a maximum compatible with viral viability. This prediction encouraged the use of mutagenic nucleosides as a new antiviral strategy to induce viral extinction through increased replication error rates. Despite extensive evidence of lethal mutagenesis of RNA viruses by different mutagenic compounds, a detailed picture of the infectivity of individual genomes and its relationship with the mutations accumulated is lacking. Here, we report a molecular analysis of a foot-and-mouth disease virus population previously subjected to heavy mutagenesis to determine whether a correlation between increased mutagenesis and decreased fitness existed. Plaque-purified viruses isolated from a ribavirin-treated quasispecies showed up to 200-fold decreases in infectivity relative to clones from the reference population, associated with an overall eightfold increase in the mutation frequency. This observation suggests that individual infectious genomes of a quasispecies subjected to increased mutagenesis lose infectivity through continuous mutagenic ‘poisoning’. These results support the lethal defection model of virus extinction and the practical use of chemical mutagens as antiviral treatment. Even when extinction is not achieved, mutagenesis can decrease the infectivity of surviving viruses and facilitate their clearance by host immune responses or complementing antiviral approaches.
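The abstract reports only summary figures (an eightfold rise in mutation frequency alongside an up-to-200-fold infectivity loss). A deliberately simple multiplicative-fitness sketch, with a hypothetical per-mutation cost s and Poisson mutation loads, shows how numbers of that order can arise; none of the parameter values below come from the study:

```python
import math
import random

def _poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^-lam.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def mean_infectivity(mean_mutations, s, n=10_000, rng=None):
    """Toy model (not the paper's analysis): each genome carries
    Poisson(mean_mutations) mutations, and every mutation independently
    multiplies infectivity by (1 - s). The analytic mean is
    exp(-mean_mutations * s)."""
    rng = rng or random.Random(0)
    return sum((1 - s) ** _poisson(mean_mutations, rng) for _ in range(n)) / n

# Hypothetical loads: baseline 1.5 mutations per genome versus an
# eightfold increase under mutagenesis, with s = 0.5 per mutation.
untreated = mean_infectivity(1.5, s=0.5)
treated = mean_infectivity(12.0, s=0.5)
fold_drop = untreated / treated  # roughly 200-fold with these toy parameters
```

The point of the sketch is only that a modest fold-increase in mutation load translates, multiplicatively, into a much larger fold-loss in mean infectivity, consistent with the lethal defection picture.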


2019 ◽  
Vol 28 (4) ◽  
pp. 1411-1431 ◽  
Author(s):  
Lauren Bislick ◽  
William D. Hula

Purpose: This retrospective analysis examined group differences in error rate across 4 contextual variables (clusters vs. singletons, syllable position, number of syllables, and articulatory phonetic features) in adults with apraxia of speech (AOS) and adults with aphasia only. Group differences in the distribution of error type across contextual variables were also examined. Method: Ten individuals with acquired AOS and aphasia and 11 individuals with aphasia only participated in this study. In the context of a 2-group experimental design, the influence of the 4 contextual variables on error rate and error type distribution was examined via repetition of 29 multisyllabic words. Error rates were analyzed using Bayesian methods, whereas the distribution of error type was examined via descriptive statistics. Results: Four robust differences between the 2 groups were found, for syllable position, number of syllables, manner of articulation, and voicing. Group differences were less robust for clusters versus singletons and for place of articulation. Error type distributions showed a high proportion of distortion and substitution errors in speakers with AOS and a high proportion of substitution and omission errors in speakers with aphasia. Conclusion: These findings add to the continued effort to improve the understanding and assessment of AOS and aphasia. Several contextual variables influenced breakdown more consistently in participants with AOS than in participants with aphasia and should be considered during the diagnostic process. Supplemental Material: https://doi.org/10.23641/asha.9701690
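The abstract notes that error rates were analyzed with Bayesian methods but does not specify the model. As a generic illustration only (not the authors' actual analysis), a Beta-Binomial comparison of two groups' error probabilities, with entirely made-up error and trial counts, might look like:

```python
import random

def posterior_prob_greater(err_a, n_a, err_b, n_b, draws=20_000, rng=None):
    """Beta-Binomial sketch (not the study's actual model): with a
    uniform Beta(1, 1) prior, a group with err errors in n trials has
    a Beta(err + 1, n - err + 1) posterior over its error probability.
    Returns the posterior probability that group A's error rate
    exceeds group B's, estimated by Monte Carlo sampling."""
    rng = rng or random.Random(0)
    hits = sum(
        rng.betavariate(err_a + 1, n_a - err_a + 1)
        > rng.betavariate(err_b + 1, n_b - err_b + 1)
        for _ in range(draws)
    )
    return hits / draws

# Hypothetical counts: 120/290 errors (AOS group) vs. 60/319 (aphasia group).
p_aos_higher = posterior_prob_greater(120, 290, 60, 319)
```

A posterior probability near 1 under such a model is the Bayesian analogue of a "robust" group difference in error rate.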


2020 ◽  
Vol 36 (2) ◽  
pp. 296-302 ◽  
Author(s):  
Luke J. Hearne ◽  
Damian P. Birney ◽  
Luca Cocchi ◽  
Jason B. Mattingley

Abstract. The Latin Square Task (LST) is a relational reasoning paradigm developed by Birney, Halford, and Andrews (2006). Previous work has shown that the LST elicits typical reasoning complexity effects, such that increases in complexity are associated with decrements in task accuracy and increases in response times. Here we modified the LST for use in functional brain imaging experiments, in which presentation durations must be strictly controlled, and assessed its validity and reliability. Modifications included presenting the components within each trial serially, such that the reasoning and response periods were separated. In addition, the inspection time for each LST problem was constrained to five seconds. We replicated previous findings of higher error rates and slower response times with increasing relational complexity and observed relatively large effect sizes (ηp² > 0.70, r > .50). Moreover, measures of internal consistency and test-retest reliability confirmed the stability of the LST within and across separate testing sessions. Interestingly, we found that limiting the inspection time for individual problems in the LST had little effect on accuracy relative to the unconstrained times used in previous work, a finding that is important for future brain imaging experiments aimed at investigating the neural correlates of relational reasoning.


Author(s):  
Manuel Perea ◽  
Victoria Panadero

The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word’s overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords such as viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) for consistent-shape and inconsistent-shape pseudowords in both adult skilled readers and normally reading children, a finding consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors on viotín-like pseudowords than on viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word’s visual cues, presumably because of poor letter representations.


2010 ◽  
Author(s):  
Jennifer M. Chen ◽  
Raj M. Ratwani ◽  
J. Gregory Trafton

1975 ◽  
Vol 14 (01) ◽  
pp. 32-34
Author(s):  
Elisabeth Schach

Data reporting experience with an optical mark page reader (IBM 1231 N1) are presented. Information from 52,000 persons was gathered in seven countries, coded decentrally and processed centrally. Reader performance rates (i.e., sheets read per hour, sheet rejection rates, reading error rates) and costs (coding, verification, reading, etc.) are given.

