Controversies in modern evolutionary biology: the imperative for error detection and quality control

BMC Genomics ◽  
2012 ◽  
Vol 13 (1) ◽  
pp. 5 ◽  
Author(s):  
Francisco Prosdocimi ◽  
Benjamin Linard ◽  
Pierre Pontarotti ◽  
Olivier Poch ◽  
Julie D Thompson


1992 ◽ 
Vol 38 (3) ◽  
pp. 364-369 ◽  
Author(s):  
C A Parvin

Abstract This paper continues an investigation into the merits of an alternative approach to the statistical evaluation of quality-control rules. In this report, computer simulation is used to evaluate and compare quality-control rules designed to detect increases in within-run or between-run imprecision. When out-of-control conditions are evaluated in terms of their impact on total analytical imprecision, the error detection ability of a rule depends on the relative magnitudes of the between-run and within-run error components under stable operating conditions. A recently proposed rule based on the F-test, designed to detect increases in between-run imprecision, is shown to have relatively poor performance characteristics. Additionally, several issues are examined that have been difficult to address with the traditional evaluation approach.
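To make the simulation approach concrete, the minimal Python sketch below estimates rejection rates for a generic one-way-ANOVA-style F-test applied to simulated runs with within-run and between-run error components. The function name, parameter values, and the specific test are illustrative assumptions, not the rule Parvin evaluates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(sigma_w=1.0, sigma_b=0.5, b_factor=1.0,
                   n_runs=20, n_per_run=2, n_trials=10000, alpha=0.01):
    """Fraction of simulated QC data sets rejected by a one-way-ANOVA-style
    F-test (between-run mean square / within-run mean square).
    Illustrative assumption, not the specific rule evaluated in the paper."""
    f_crit = stats.f.ppf(1 - alpha, dfn=n_runs - 1, dfd=n_runs * (n_per_run - 1))
    rejections = 0
    for _ in range(n_trials):
        # between-run shifts (scaled by b_factor) plus within-run noise
        run_effects = rng.normal(0.0, b_factor * sigma_b, size=n_runs)
        data = run_effects[:, None] + rng.normal(0.0, sigma_w, size=(n_runs, n_per_run))
        ms_between = n_per_run * data.mean(axis=1).var(ddof=1)
        ms_within = data.var(axis=1, ddof=1).mean()
        rejections += ms_between / ms_within > f_crit
    return rejections / n_trials

print("stable operation      :", rejection_rate(b_factor=1.0))  # false rejection
print("doubled between-run SD:", rejection_rate(b_factor=2.0))  # error detection
```

Because the stable process in this sketch already contains a between-run component, the "false rejection" figure can exceed the nominal alpha, which echoes the paper's point that performance depends on the relative magnitudes of the two error components.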


1992 ◽  
Vol 38 (2) ◽  
pp. 204-210 ◽  
Author(s):  
Aristides T Hatjimihail

Abstract I have developed an interactive microcomputer simulation program for the design, comparison, and evaluation of alternative quality-control (QC) procedures. The program estimates the probabilities for rejection under different conditions of random and systematic error when these procedures are used and plots their power function graphs. It also estimates the probabilities for detection of critical errors, the defect rate, and the test yield. To allow a flexible definition of the QC procedures, it includes an interpreter. Various characteristics of the analytical process and the QC procedure can be user-defined. The program extends the concepts of the probability for error detection and of the power function to describe the results of the introduction of error between runs and within a run. The usefulness of this approach is illustrated with some examples.
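As a toy illustration of the kind of power function such a program plots, the sketch below estimates by simulation the probability that a simple 1_3s rule with two control observations per run rejects the run, as a function of the size of a systematic error. The rule and parameter choices are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_rejection_13s(se_shift, n_controls=2, n_runs=50000):
    """Estimated probability that at least one of n_controls control results in a
    run exceeds +/-3 SD when a systematic error of se_shift SD is present."""
    obs = rng.normal(se_shift, 1.0, size=(n_runs, n_controls))
    return np.mean(np.any(np.abs(obs) > 3.0, axis=1))

# power function: probability of run rejection versus size of systematic error
for shift in np.arange(0.0, 4.5, 0.5):
    print(f"SE = {shift:.1f} SD  ->  P(rejection) = {p_rejection_13s(shift):.3f}")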


Author(s):  
Eric S Kilpatrick

Background Even when a laboratory analyte testing process is in control, routine quality control testing will fail with a frequency that can be predicted from the number of quality control levels used, the run frequency, and the control rule employed. We explored whether simply counting the number of assay quality control run failures during a rolling week, and then objectively determining if there was an excess, could complement daily quality control processes in identifying an out-of-control assay. Methods Binomial statistics were used to determine the threshold number of quality control run failures in any rolling week which would statistically exceed that expected for a particular test. Power function graphs were used to establish error detection (Ped) and false rejection rates compared with popular control rules. Results Identifying quality control failures exceeding the weekly limit (QC FEWL) is a more powerful means of detecting smaller systematic (bias) errors than traditional daily control rules (12s, 13s or 13s/22s/R4s) and markedly superior in detecting smaller random (imprecision) errors while maintaining false identification rates below 2%. Error detection rates also exceeded those using a within- and between-run Westgard multirule (13s/22s/41s/10x). Conclusions Daily review of tests shown to statistically exceed their rolling-week limit of expected quality control run failures is more powerful than traditional quality control tools at identifying potential systematic and random test errors, and so offers a supplement to daily quality control practices that requires no complex data extraction or manipulation.
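A hedged sketch of the binomial calculation described in the Methods: given an expected per-run false rejection probability and a number of QC runs per rolling week, it finds the smallest weekly failure count whose chance probability falls below a chosen significance level. The example parameter values (21 runs per week, roughly 9% false rejection per run for a 1_2s rule on two control levels) are illustrative assumptions, not figures from the paper.

```python
from scipy.stats import binom

def weekly_failure_limit(runs_per_week, p_fail_per_run, alpha=0.05):
    """Smallest number of QC run failures in a rolling week whose probability of
    occurring by chance (with the assay in control) falls below alpha."""
    for k in range(runs_per_week + 1):
        # P(X >= k) for X ~ Binomial(runs_per_week, p_fail_per_run)
        if binom.sf(k - 1, runs_per_week, p_fail_per_run) < alpha:
            return k
    return None

# hypothetical example: 3 QC runs/day for 7 days, ~9% false rejection per run
print(weekly_failure_limit(runs_per_week=21, p_fail_per_run=0.09))
```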


Author(s):  
Joel D Smith ◽  
Tony Badrick ◽  
Francis Bowling

Background Patient-based real-time quality control (PBRTQC) techniques have been described in clinical chemistry for over 50 years. PBRTQC has a number of advantages over traditional quality control including commutability, cost and the opportunity for real-time monitoring. However, there are few systematic investigations assessing how different PBRTQC techniques perform head-to-head. Methods In this study, we compare moving averages with and without truncation and moving medians. For analytes with skewed distributions such as alanine aminotransferase and creatinine, we also investigate the effect of Box–Cox transformation of the data. We assess the ability of each technique to detect simulated analytical bias in real patient data for multiple analytes and to retrospectively detect a real analytical shift in a creatinine and urea assay. Results For analytes with symmetrical distributions, we show that error detection is similar for a moving average with and without four standard deviation truncation limits and for a moving median. In contrast to analytes with symmetrically distributed results, moving averages perform poorly for right skewed distributions such as alanine aminotransferase and creatinine and function only with a tight upper truncation limit. Box–Cox transformation of the data both improves the performance of moving averages and allows all data points to be used. This was also confirmed for retrospective detection of a real analytical shift in creatinine and urea. Conclusions Our study highlights the importance of careful assessment of the distribution of patient results for each analyte in a PBRTQC program with the optimal approaches dependent on whether the patient result distribution is symmetrical or skewed.
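A minimal Python sketch of the PBRTQC building blocks compared in the paper: an optional Box-Cox transform, optional truncation around the centre of the patient-result distribution, and a rolling mean. The window size, truncation limits, and use of the median and SD for centring are illustrative assumptions; the paper's optimal settings are analyte-specific.

```python
import numpy as np
from scipy import stats

def moving_average_qc(results, window=50, trunc_sd=4.0, use_boxcox=False):
    """PBRTQC sketch: optional Box-Cox transform (for skewed analytes such as ALT
    or creatinine), optional truncation around the centre, then a rolling mean."""
    x = np.asarray(results, dtype=float)
    if use_boxcox:
        x, _ = stats.boxcox(x)                            # requires strictly positive results
    centre, spread = np.median(x), np.std(x)
    if trunc_sd is not None:
        x = x[np.abs(x - centre) <= trunc_sd * spread]    # drop extreme results
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")           # rolling mean

# hypothetical skewed 'ALT-like' data; a shift would be flagged when the rolling
# mean drifts outside preset control limits
rng = np.random.default_rng(3)
alt = rng.lognormal(mean=3.0, sigma=0.5, size=2000)
print(moving_average_qc(alt, use_boxcox=True)[:5])
```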


Author(s):  
NIELS CHARLIER ◽  
GUY DE TRÉ ◽  
SIDHARTA GAUTAMA ◽  
RIK BELLENS

In this paper, a new method is proposed for automating quality control and updating of spatial information using image data. A new model for imperfect geographic information is explored for this purpose: a twofold fuzzy region model, which is an extension of both the fuzzy region model and the egg/yolk model. This model is used to interpret spatial classifications and their imperfections in a new way. By defining different operators on the model, an imprecise quality report can be developed for geographic databases that uses the imperfect spatial classification as reference information. Despite the large amount of imperfection in spatial classifications, the model makes it possible to use them for rather accurate error detection on geographic databases.
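One possible, assumption-laden reading of a "twofold" membership in code: each cell carries a lower (certain) and an upper (possible) membership degree, generalising the crisp egg/yolk boundary, and a conflict measure can be derived from those bounds. The operators shown are generic possibility-style choices, not the operators defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class TwofoldCell:
    lower: float  # degree to which the cell certainly belongs to the region
    upper: float  # degree to which the cell possibly belongs to the region (>= lower)

def intersect(a: TwofoldCell, b: TwofoldCell) -> TwofoldCell:
    """Pointwise min-intersection of two twofold fuzzy regions (a generic choice)."""
    return TwofoldCell(min(a.lower, b.lower), min(a.upper, b.upper))

def conflict_with_reference(reference: TwofoldCell, db_says_inside: bool) -> float:
    """Degree to which a database feature conflicts with the imperfect reference
    classification: high when the database claims membership the reference rules out."""
    return 1.0 - reference.upper if db_says_inside else reference.lower

print(conflict_with_reference(TwofoldCell(lower=0.2, upper=0.9), db_says_inside=True))
```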


Author(s):  
Pradeep Reddy Raamana ◽  
Athena Theyers ◽  
Tharushan Selliah ◽  
Piali Bhati ◽  
Stephen R. Arnott ◽  
...  

Abstract Quality control of morphometric neuroimaging data is essential to improve reproducibility. Owing to the complexity of neuroimaging data, and consequently of the interpretation of their results, visual inspection by trained raters is the most reliable way to perform quality control. Here, we present a protocol for visual quality control of the anatomical accuracy of FreeSurfer parcellations, based on an easy-to-use open-source tool called VisualQC. We comprehensively evaluate its utility in terms of error detection rate and inter-rater reliability on two large multi-site datasets, and discuss site differences in error patterns. This evaluation shows that VisualQC is a practically viable protocol for community adoption.
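For orientation, the sketch below computes the two evaluation quantities the protocol reports, an error detection rate and pairwise inter-rater agreement (Cohen's kappa), from hypothetical pass/fail ratings; it does not use or reproduce VisualQC's own tooling.

```python
import numpy as np

def cohen_kappa(rater_a, rater_b):
    """Pairwise Cohen's kappa between two raters' pass/fail labels."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.unique(np.concatenate([a, b]))
    p_obs = np.mean(a == b)
    p_exp = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

def error_detection_rate(ratings, has_error):
    """Fraction of scans with a known parcellation error that the rater failed."""
    r, e = np.asarray(ratings), np.asarray(has_error, dtype=bool)
    return np.mean(r[e] == "fail")

# hypothetical ratings of six parcellations by two raters
r1 = ["pass", "fail", "pass", "fail", "pass", "pass"]
r2 = ["pass", "fail", "fail", "fail", "pass", "pass"]
errors = [False, True, True, True, False, False]
print("kappa:", cohen_kappa(r1, r2))
print("detection rate (rater 1):", error_detection_rate(r1, errors))
```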


2018 ◽  
Vol 23 ◽  
pp. 00028 ◽  
Author(s):  
Irena Otop ◽  
Jan Szturc ◽  
Katarzyna Ośródka ◽  
Piotr Djaków

An automatic procedure for real-time quality control (QC) of telemetric rain gauge measurements (G) has been developed to produce quantitative precipitation estimates, mainly for the needs of operational hydrology. The QC procedure consists of several tests: gross error detection, a range check, a spatial consistency check, a temporal consistency check, and a radar and satellite conformity check. The output of the procedure applied in real time is a quality index, QI(G), that quantitatively characterises the quality of each individual measurement. The QC procedure has been in operational use at the Institute of Meteorology and Water Management since 2016. However, some elements of the procedure are still under development and can be improved based on the results and experience collected over about two years of real-time work on the network of telemetric rain gauges.
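A toy sketch of how per-check outcomes might be folded into a quality index QI(G) for a single gauge measurement; the factors, their values, and the multiplicative combination are illustrative assumptions, not the weighting used in the operational procedure.

```python
def quality_index(range_ok, spatial_ok, temporal_ok, radar_agreement):
    """Toy QI(G) for a single rain-gauge measurement: each check contributes a
    factor in [0, 1] and the overall index is their product."""
    factors = [
        1.0 if range_ok else 0.0,        # gross error / range check
        1.0 if spatial_ok else 0.5,      # spatial consistency with neighbouring gauges
        1.0 if temporal_ok else 0.5,     # temporal consistency with preceding time steps
        radar_agreement,                 # conformity with radar/satellite estimates
    ]
    qi = 1.0
    for f in factors:
        qi *= f
    return qi

print(quality_index(range_ok=True, spatial_ok=True, temporal_ok=False,
                    radar_agreement=0.8))   # -> 0.4
```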


1985 ◽  
Vol 31 (2) ◽  
pp. 206-212 ◽  
Author(s):  
A S Blum

Abstract I describe a program for definitive comparison of different quality-control statistical procedures. A microcomputer simulates quality-control results generated by repetitive analytical runs. It applies various statistical rules to each result, tabulating rule breaks to evaluate the rules as routinely applied by the analyst. The process repeats with increasing amounts of random and systematic error. Rates of false rejection and true error detection for currently popular statistical procedures were evaluated comparatively, together with a new multirule procedure described here. The nature of the analyst's response to out-of-control signals was also evaluated. A single-rule protocol that is as effective as the multirule protocol of Westgard et al. (Clin Chem 27:493, 1981) is reported.
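To illustrate the kind of head-to-head comparison described, the following sketch simulates control observations for many runs and tabulates rejection rates for a single rule (1_2.5s) and a Westgard-style multirule (1_3s / 2_2s / R_4s), both in control and under a 2 SD systematic error. The rules and settings are illustrative and not Blum's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_flags(shift=0.0, n_controls=2, n_runs=100000):
    """Per-run rejection rates for a single rule (1_2.5s) and a Westgard-style
    multirule (1_3s / 2_2s / R_4s) on simulated control observations."""
    z = rng.normal(shift, 1.0, size=(n_runs, n_controls))
    single = np.any(np.abs(z) > 2.5, axis=1)                       # 1_2.5s
    multi = (np.any(np.abs(z) > 3.0, axis=1)                       # 1_3s
             | np.all(z > 2.0, axis=1) | np.all(z < -2.0, axis=1)  # 2_2s
             | (z.max(axis=1) - z.min(axis=1) > 4.0))              # R_4s
    return single.mean(), multi.mean()

fr_single, fr_multi = run_flags(shift=0.0)   # in control: false rejection
pd_single, pd_multi = run_flags(shift=2.0)   # 2 SD systematic error: detection
print(f"false rejection  single={fr_single:.3f}  multirule={fr_multi:.3f}")
print(f"error detection  single={pd_single:.3f}  multirule={pd_multi:.3f}")
```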

