Quality control failures exceeding the weekly limit (QC FEWL): a simple tool to improve assay error detection

Author(s):  
Eric S Kilpatrick

Background: Even when a laboratory analyte testing process is in control, routine quality control testing will fail with a frequency that can be predicted from the number of quality control levels used, the run frequency and the control rule employed. We explored whether simply counting the number of assay quality control run failures during a rolling week, and then objectively determining whether there was an excess, could complement daily quality control processes in identifying an out-of-control assay. Methods: Binomial statistics were used to determine the threshold number of quality control run failures in any rolling week which would statistically exceed that expected for a particular test. Power function graphs were used to establish error detection (Ped) and false rejection rates compared with popular control rules. Results: Identifying quality control failures exceeding the weekly limit (QC FEWL) is a more powerful means of detecting smaller systematic (bias) errors than traditional daily control rules (12s, 13s or 13s/22s/R4s) and markedly superior in detecting smaller random (imprecision) errors, while maintaining false identification rates below 2%. Error detection rates also exceeded those using a within- and between-run Westgard multirule (13s/22s/41s/10x). Conclusions: Daily review of tests shown to statistically exceed their rolling-week limit of expected quality control run failures is more powerful than traditional quality control tools at identifying potential systematic and random test errors, and so offers a supplement to daily quality control practices with no requirement for complex data extraction or manipulation.
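The binomial threshold described in this abstract can be sketched in a few lines: find the smallest weekly failure count whose upper-tail probability, under the expected per-run failure rate, falls below a chosen significance level. The run count and failure rate below are illustrative assumptions, not figures from the paper.

```python
from math import comb

def weekly_failure_limit(n_runs: int, p_fail: float, alpha: float = 0.05) -> int:
    """Smallest number of QC run failures k such that P(X >= k) < alpha
    under X ~ Binomial(n_runs, p_fail), i.e. a weekly failure count that
    would be statistically unexpected for an in-control assay."""
    def upper_tail(k: int) -> float:  # P(X >= k)
        return sum(comb(n_runs, i) * p_fail**i * (1 - p_fail)**(n_runs - i)
                   for i in range(k, n_runs + 1))
    for k in range(n_runs + 1):
        if upper_tail(k) < alpha:
            return k
    return n_runs + 1  # threshold never reached

# Assumed example: 21 QC runs per rolling week (3/day) and a per-run
# failure probability of 0.09 for the control rule in use.
limit = weekly_failure_limit(n_runs=21, p_fail=0.09)  # 5 failures flag an excess
```

Any observed count at or above `limit` in a rolling week would then be reviewed as a possible out-of-control assay, which is the daily check the paper proposes.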

1992 ◽  
Vol 38 (3) ◽  
pp. 364-369 ◽  
Author(s):  
C A Parvin

Abstract This paper continues an investigation into the merits of an alternative approach to the statistical evaluation of quality-control rules. In this report, computer simulation is used to evaluate and compare quality-control rules designed to detect increases in within-run or between-run imprecision. When out-of-control conditions are evaluated in terms of their impact on total analytical imprecision, the error detection ability of a rule depends on the relative magnitudes of the between-run and within-run error components under stable operating conditions. A recently proposed rule based on the F-test, designed to detect increases in between-run imprecision, is shown to have relatively poor performance characteristics. Additionally, several issues are examined that have been difficult to address with the traditional evaluation approach.


1977 ◽  
Vol 23 (10) ◽  
pp. 1857-1867 ◽  
Author(s):  
J O Westgard ◽  
T Groth ◽  
T Aronsson ◽  
H Falk ◽  
C H de Verdier

Abstract When assessing the performance of an internal quality control system, it is useful to determine the probability for false rejections (pfr) and the probability for error detection (ped). These performance characteristics are estimated here by use of a computer simulation procedure. The control rules studied include those commonly employed with Shewhart-type control charts, a cumulative sum rule, and rules applicable when a series of control measurements are treated as a single control observation. The error situations studied include an increase in random error, a systematic shift, a systematic drift, and mixtures of these. The probability for error detection is very dependent on the number of control observations and the choice of control rules. No one rule is best for detecting all errors, thus combinations of rules are desirable. Some appropriate combinations are suggested and their performance characteristics are presented.
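The simulation approach in this abstract can be illustrated with a minimal Monte Carlo sketch: draw control observations as z-scores, apply a small Westgard-style rule combination (here 1-3s plus 2-2s, one of many possible combinations, not necessarily the authors' exact set), and estimate pfr from a stable process and ped from a shifted one.

```python
import random

def reject(obs, limit_13s=3.0, limit_22s=2.0):
    """Multirule check on one run's control observations (as z-scores):
    reject if any value exceeds +/-3s (1-3s rule), or if two consecutive
    values exceed +/-2s on the same side (2-2s rule)."""
    if any(abs(z) > limit_13s for z in obs):
        return True
    for a, b in zip(obs, obs[1:]):
        if (a > limit_22s and b > limit_22s) or (a < -limit_22s and b < -limit_22s):
            return True
    return False

def rejection_rate(shift=0.0, sd=1.0, n_obs=2, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a run is rejected when
    the process has a systematic shift (in s units) and random-error
    inflation factor sd."""
    rng = random.Random(seed)
    hits = sum(reject([rng.gauss(shift, sd) for _ in range(n_obs)])
               for _ in range(trials))
    return hits / trials

pfr = rejection_rate(shift=0.0)  # false rejection rate, stable process
ped = rejection_rate(shift=2.0)  # error detection rate for a 2s shift
```

Sweeping `shift` (or `sd`) and plotting the rejection rate reproduces the power-function curves the paper uses to compare rule combinations.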


1992 ◽  
Vol 38 (3) ◽  
pp. 358-363 ◽  
Author(s):  
C A Parvin

Abstract A simulation approach that allows direct estimation of the power of a quality-control rule to detect error that persists until detection is used to compare and evaluate the error detection capabilities of a group of quality-control rules. Two persistent error situations are considered: a constant shift and a linear trend in the quality-control mean. A recently proposed "moving slope" quality-control test for the detection of linear trends is shown to have poor error detection characteristics. A multimean quality-control rule is introduced to illustrate the strategy underlying multirule procedures, which is to increase power without sacrificing response rate. This strategy is shown to provide superior error detection capability when compared with other rules evaluated under both error situations.


1991 ◽  
Vol 37 (10) ◽  
pp. 1720-1724 ◽  
Author(s):  
C A Parvin

Abstract The concepts of the power function for a quality-control rule, the error detection rate, and the false rejection rate were major advances in evaluating the performance characteristics of quality-control procedures. Most early articles published in this area evaluated the performance characteristics of quality-control rules with the assumption that an intermittent error condition occurred only within the current run, as opposed to a persistent error that continued until detection. Difficulties occur when current simulation methods are applied to the persistent error case. Here, I examine these difficulties and propose an alternative method that handles persistent error conditions effectively when evaluating and quantifying the performance characteristics of a quality-control rule.


1979 ◽  
Vol 25 (3) ◽  
pp. 394-400 ◽  
Author(s):  
J O Westgard ◽  
H Falk ◽  
T Groth

Abstract A computer-simulation study has been performed to determine how the performance characteristics of quality-control rules are affected by the presence of a between-run component of variation, the choice of control limits (calculated from within-run vs. total standard deviations), and the shape of the error distribution. When a between-run standard deviation (Sb) exists and control limits are calculated from the total standard deviation (St, which includes Sb as well as the within-run standard deviation, Sw), there is generally a loss in ability to detect analytical disturbances or errors. With control limits calculated from Sw, there is generally an increase in the level of false rejections. The presence of a non-gaussian error distribution appears to have considerably less effect. It can be recommended that random error be controlled by use of a chi-square or range-control rule, with control limits calculated from Sw. Optimal control of systematic errors is difficult when Sb exists. An effort should be made to reduce Sb, and this will lead to increased ability to detect analytical errors. When Sb is tolerated or accepted as part of the baseline state of operation for the analytical method, then further increases in the number of control observations will be necessary to achieve a given probability for error detection.


2018 ◽  
Vol 151 (4) ◽  
pp. 364-370 ◽  
Author(s):  
James O Westgard ◽  
Sten A Westgard

Abstract Objectives: To establish an objective, scientific, evidence-based process for planning statistical quality control (SQC) procedures based on the quality required for a test, the precision and bias observed for a measurement procedure, the probabilities of error detection and false rejection for different control rules and numbers of control measurements, and the frequency of QC events (or run size) needed to minimize patient risk. Methods: A Sigma-Metric Run Size Nomogram and Power Function Graphs were used to guide the selection of control rules, numbers of control measurements, and frequency of QC events (or patient run size). Results: A tabular summary is provided by a Sigma-Metric Run Size Matrix, with a graphical summary of Westgard Sigma Rules with Run Sizes. Conclusions: Medical laboratories can plan evidence-based SQC practices using simple tools that relate the Sigma-Metric of a testing process to the control rules, number of control measurements, and run size (or frequency of QC events).
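The Sigma-Metric that anchors this planning process is a simple function of the allowable total error and the observed bias and imprecision; the numbers below are illustrative, not from the paper.

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-Metric of a testing process: (TEa - |bias|) / CV,
    with allowable total error (TEa), bias, and CV all in % units."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Assumed example: allowable total error 10%, observed bias 1%, CV 1.5%
sigma = sigma_metric(10.0, 1.0, 1.5)  # 6.0, i.e. a "six sigma" process
```

High-sigma processes get by with simple rules, few control measurements, and large run sizes; low-sigma processes need multirules, more controls, and more frequent QC events, which is what the nomogram and run-size matrix encode.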


1993 ◽  
Vol 39 (3) ◽  
pp. 440-447 ◽  
Author(s):  
C A Parvin

Abstract The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.


2013 ◽  
Vol 141 (2) ◽  
pp. 798-808 ◽  
Author(s):  
Zhifang Xu ◽  
Yi Wang ◽  
Guangzhou Fan

Abstract The relatively smooth terrain embedded in a numerical model differs in elevation from the actual terrain, which makes quality control of 2-m temperature difficult when forecast or analysis fields are used in the process. In this paper, a two-stage quality control method for 2-m temperature is proposed, using biweight means and a progressive EOF analysis. The method is applied to observed 2-m temperatures collected over China and its neighboring areas, based on the 6-h T639 analysis from December 2009 to February 2010. Results show that the proposed two-stage method performs better than a regular EOF quality control process; in particular, it is able to remove data contaminated by consecutive errors that show only small fluctuations. Meanwhile, compared with the temperature lapse-rate method, the biweight mean method removes the systematic bias generated by the model. These methods make the distributions of observation increments (the difference between observation and background) more Gaussian, which ensures data quality after quality control.
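The biweight mean at the heart of the first stage is a standard robust location estimate; a minimal one-step sketch is below (the tuning constant `c=7.5` is a common choice in the QC literature, assumed here rather than taken from the paper).

```python
from statistics import median

def biweight_mean(x, c=7.5):
    """One-step Tukey biweight mean: a robust location estimate that
    down-weights values far from the median, so outliers barely shift
    the reference against which suspect observations are flagged."""
    m = median(x)
    mad = median(abs(v - m) for v in x)  # median absolute deviation
    if mad == 0:
        return m  # no spread: the median is the estimate
    num = den = 0.0
    for v in x:
        u = (v - m) / (c * mad)
        if abs(u) < 1.0:            # values with |u| >= 1 get weight 0
            w = (1.0 - u * u) ** 2  # biweight (bisquare) weight
            num += w * (v - m)
            den += w
    return m + num / den if den else m
```

For example, `biweight_mean([1, 2, 3, 4, 100])` stays near the bulk of the data because the outlier at 100 receives zero weight, whereas the ordinary mean would be pulled above 20.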


Genetics ◽  
2003 ◽  
Vol 164 (3) ◽  
pp. 1161-1173 ◽  
Author(s):  
Guohua Zou ◽  
Deyun Pan ◽  
Hongyu Zhao

Abstract The identification of genotyping errors is an important issue in mapping complex disease genes. Although it is common practice to genotype multiple markers in a candidate region in genetic studies, the potential benefit of jointly analyzing multiple markers to detect genotyping errors has not been investigated. In this article, we discuss genotyping error detection for a set of tightly linked markers in nuclear families, with the objective of identifying families likely to have genotyping errors at one or more markers. We make use of the fact that recombination is a very unlikely event among these markers. We first show that, with family trios, no extra information can be gained by jointly analyzing markers if no phase information is available, and error detection rates are usually low if Mendelian consistency is used as the only standard for checking errors. However, for nuclear families with more than one child, error detection rates can be greatly increased by considering more markers. Error detection rates also increase with the number of children in each family. Because families displaying Mendelian consistency may still have genotyping errors, we calculate the probability that a family displaying Mendelian consistency has correct genotypes. These probabilities can help identify families that, although showing Mendelian consistency, may have genotyping errors. In addition, we examine the benefit of haplotype frequencies available for the general population on genotyping error detection. We show that both error detection rates and the probability that an observed family displaying Mendelian consistency has correct genotypes can be greatly increased when such additional information is available.
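The Mendelian-consistency baseline the paper starts from can be sketched for a single marker in a trio: each of the child's two alleles must be transmittable by a different parent. This is only the per-marker check; the paper's contribution is the joint analysis across linked markers and children, which this sketch does not attempt.

```python
def mendelian_consistent(child, mother, father):
    """Check whether a child's genotype at one marker is consistent with
    the parents': one child allele must come from each parent. Genotypes
    are unordered allele pairs, e.g. ("A", "a")."""
    a, b = child
    return (a in mother and b in father) or (b in mother and a in father)

ok = mendelian_consistent(("A", "a"), mother=("A", "A"), father=("a", "a"))   # True
bad = mendelian_consistent(("a", "a"), mother=("A", "A"), father=("a", "a"))  # False
```

As the abstract notes, a family can pass this check at every marker and still carry a genotyping error, which motivates computing P(genotypes correct | Mendelian consistency) rather than relying on consistency alone.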

