Comparing the Power of Quality-Control Rules to Detect Persistent Systematic Error

1992 ◽  
Vol 38 (3) ◽  
pp. 358-363 ◽  
Author(s):  
C A Parvin

Abstract A simulation approach that allows direct estimation of a quality-control rule's power to detect error that persists until detection is used to compare and evaluate the error-detection capabilities of a group of quality-control rules. Two persistent-error situations are considered: a constant shift and a linear trend in the quality-control mean. A recently proposed "moving slope" quality-control test for the detection of linear trends is shown to have poor error-detection characteristics. A multimean quality-control rule is introduced to illustrate the strategy underlying multirule procedures, which is to increase power without sacrificing response rate. This strategy is shown to provide superior error-detection capability compared with the other rules evaluated under both error situations.
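
For readers who want to experiment with the idea, here is a minimal sketch of this kind of persistent-error simulation, assuming a simple 1-3s rule, two control observations per run, and a constant shift expressed in multiples of the analytical SD; the rule, run count, and shift sizes are illustrative choices, not the configurations evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def power_persistent_shift(shift_sd, n_runs=10, n_obs=2, limit=3.0, n_sims=20000):
    """Estimate the probability that a 1-3s rule flags a persistent shift
    within n_runs analytical runs (the error remains until it is detected)."""
    detected = 0
    for _ in range(n_sims):
        for _run in range(n_runs):
            z = rng.normal(loc=shift_sd, scale=1.0, size=n_obs)
            if np.any(np.abs(z) > limit):   # any control outside +/-3 SD rejects the run
                detected += 1
                break                       # persistent error: stop counting once detected
    return detected / n_sims

for shift in (0.0, 1.0, 2.0):
    print(f"shift = {shift} SD -> P(detect within 10 runs) ~ {power_persistent_shift(shift):.3f}")
```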

1991 ◽  
Vol 37 (10) ◽  
pp. 1720-1724 ◽  
Author(s):  
C A Parvin

Abstract The concepts of the power function for a quality-control rule, the error detection rate, and the false rejection rate were major advances in evaluating the performance characteristics of quality-control procedures. Most early articles published in this area evaluated the performance characteristics of quality-control rules with the assumption that an intermittent error condition occurred only within the current run, as opposed to a persistent error that continued until detection. Difficulties occur when current simulation methods are applied to the persistent error case. Here, I examine these difficulties and propose an alternative method that handles persistent error conditions effectively when evaluating and quantifying the performance characteristics of a quality-control rule.
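
One way to see the distinction: with a persistent error, every subsequent run is another opportunity to detect it, so the relevant quantities are the cumulative probability of detection by run n and the expected number of runs to detection, not a single-run power. Below is a minimal sketch under the simplifying assumption that runs are independent and the per-run rejection probability is fixed; the paper's point is precisely that evaluating the persistent-error case needs care, so treat this as an illustration of the concept rather than the proposed method.

```python
def cumulative_detection(p_per_run, n):
    """P(error detected by run n) for a persistent error, assuming each run is
    an independent chance to detect it with per-run probability p_per_run."""
    return 1.0 - (1.0 - p_per_run) ** n

def average_run_length(p_per_run):
    """Expected number of runs until detection (geometric waiting time)."""
    return 1.0 / p_per_run

p = 0.20  # illustrative per-run detection probability
print([round(cumulative_detection(p, n), 3) for n in range(1, 6)])
print("average run length to detection:", average_run_length(p))
```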


1993 ◽  
Vol 39 (3) ◽  
pp. 440-447 ◽  
Author(s):  
C A Parvin

Abstract The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
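
As an illustration of how such within-run results follow from theory alone, the exact power of a mean rule applied to two control observations under a systematic shift can be computed directly from the normal distribution; the false-rejection rate and shift sizes below are illustrative, not values from the paper.

```python
from math import sqrt
from scipy.stats import norm

def mean_rule_power(shift_sd, n_obs=2, false_reject=0.01):
    """Exact probability that a mean rule rejects a run when the process has
    shifted by shift_sd analytical SDs; the control limit is set so that the
    false-rejection rate under stable operation equals false_reject."""
    c = norm.ppf(1.0 - false_reject / 2.0)   # two-sided critical value for the standardized mean
    noncentral = shift_sd * sqrt(n_obs)      # shift of the standardized mean under error
    return 1.0 - (norm.cdf(c - noncentral) - norm.cdf(-c - noncentral))

for shift in (0.0, 1.0, 2.0, 3.0):
    print(f"shift = {shift} SD -> P(reject) = {mean_rule_power(shift):.3f}")
```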


1992 ◽  
Vol 38 (3) ◽  
pp. 364-369 ◽  
Author(s):  
C A Parvin

Abstract This paper continues an investigation into the merits of an alternative approach to the statistical evaluation of quality-control rules. In this report, computer simulation is used to evaluate and compare quality-control rules designed to detect increases in within-run or between-run imprecision. When out-of-control conditions are evaluated in terms of their impact on total analytical imprecision, the error detection ability of a rule depends on the relative magnitudes of the between-run and within-run error components under stable operating conditions. A recently proposed rule based on the F-test, designed to detect increases in between-run imprecision, is shown to have relatively poor performance characteristics. Additionally, several issues are examined that have been difficult to address with the traditional evaluation approach.
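
The abstract does not give the exact form of the F-based rule, so the sketch below only illustrates the kind of simulation involved: control results are generated with separate within-run and between-run components, and a one-way ANOVA F statistic (between-run mean square over within-run mean square) is used as a stand-in test for between-run variation in excess of within-run scatter. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

def simulate_runs(n_runs=20, n_within=2, s_within=1.0, s_between=0.5):
    """Control results with separate within-run and between-run error components;
    total analytical SD is sqrt(s_within**2 + s_between**2)."""
    run_means = rng.normal(0.0, s_between, size=n_runs)
    return [rng.normal(mu, s_within, size=n_within) for mu in run_means]

def f_rule_rejects(runs, alpha=0.01):
    """Flag excess between-run variation with a one-way ANOVA F test
    (a stand-in for the F-based rule discussed in the abstract)."""
    stat, p_value = f_oneway(*runs)
    return p_value < alpha

# Out-of-control condition: between-run SD inflated from 0.5 to 1.5.
rejections = sum(f_rule_rejects(simulate_runs(s_between=1.5)) for _ in range(2000))
print("approx. detection rate:", rejections / 2000)
```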


Author(s):  
Eric S Kilpatrick

Background: Even when a laboratory analyte testing process is in control, routine quality control testing will fail with a frequency that can be predicted from the number of quality control levels used, the run frequency and the control rule employed. We explored whether simply counting the number of assay quality control run failures during a rolling week, and then objectively determining if there was an excess, could complement daily quality control processes in identifying an out-of-control assay. Methods: Binomial statistics were used to determine the threshold number of quality control run failures in any rolling week which would statistically exceed that expected for a particular test. Power function graphs were used to establish error detection (Ped) and false rejection rates compared with popular control rules. Results: Identifying quality control failures exceeding the weekly limit (QC FEWL) is a more powerful means of detecting smaller systematic (bias) errors than traditional daily control rules (1-2s, 1-3s or 1-3s/2-2s/R-4s) and markedly superior in detecting smaller random (imprecision) errors, while maintaining false identification rates below 2%. Error detection rates also exceeded those of a within- and between-run Westgard multirule (1-3s/2-2s/4-1s/10-x). Conclusions: Daily review of tests shown to statistically exceed their rolling-week limit of expected quality control run failures is more powerful than traditional quality control tools at identifying potential systematic and random test errors, and so offers a supplement to daily quality control practices with no requirement for complex data extraction or manipulation.
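
A small sketch of the binomial calculation described in the Methods, assuming a known per-run false-rejection probability and a fixed number of QC runs in a rolling week; the 9% per-run failure probability (roughly what a 1-2s rule with two control levels gives by chance) and the 21 runs per week are illustrative figures, not values from the paper.

```python
from scipy.stats import binom

def weekly_failure_limit(runs_per_week, p_fail_per_run, alpha=0.05):
    """Smallest number of QC run failures k in a rolling week such that
    P(X >= k) < alpha when the assay is in control, with X ~ Binomial(n, p)."""
    k = 0
    while binom.sf(k - 1, runs_per_week, p_fail_per_run) >= alpha:  # sf(k-1) = P(X >= k)
        k += 1
    return k

# Example: ~9% chance that a run fails by chance (1-2s rule, two control levels);
# with 21 QC runs in a rolling week, how many failures are statistically "too many"?
print(weekly_failure_limit(runs_per_week=21, p_fail_per_run=0.09))
```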


1977 ◽  
Vol 23 (10) ◽  
pp. 1857-1867 ◽  
Author(s):  
J O Westgard ◽  
T Groth ◽  
T Aronsson ◽  
H Falk ◽  
C H de Verdier

Abstract When assessing the performance of an internal quality control system, it is useful to determine the probability for false rejections (pfr) and the probability for error detection (ped). These performance characteristics are estimated here by use of a computer simulation procedure. The control rules studied include those commonly employed with Shewhart-type control charts, a cumulative sum rule, and rules applicable when a series of control measurements are treated as a single control observation. The error situations studied include an increase in random error, a systematic shift, a systematic drift, and mixtures of these. The probability for error detection is very dependent on the number of control observations and the choice of control rules. No one rule is best for detecting all errors, so combinations of rules are desirable. Some appropriate combinations are suggested and their performance characteristics are presented.
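
A minimal sketch of how pfr and ped can be estimated by simulation for a single Shewhart-type rule, using a 1-3s rule, four control observations per run, and a 2-SD systematic shift as illustrative choices; the paper evaluates a much wider set of rules and error conditions.

```python
import numpy as np

rng = np.random.default_rng(7)

def one_3s(run):
    """Classic 1-3s Shewhart rule: reject the run if any control exceeds +/-3 SD."""
    return np.any(np.abs(run) > 3.0)

def rejection_rate(rule, shift=0.0, sd_factor=1.0, n_obs=4, n_sims=50000):
    """Fraction of simulated runs rejected by `rule`, with an optional systematic
    shift and/or inflated random error applied to the control z-scores."""
    z = rng.normal(shift, sd_factor, size=(n_sims, n_obs))
    return np.mean([rule(run) for run in z])

p_fr = rejection_rate(one_3s)             # stable operation: false rejections
p_ed = rejection_rate(one_3s, shift=2.0)  # 2-SD systematic shift: error detection
print(f"pfr ~ {p_fr:.4f}, ped ~ {p_ed:.3f}")
```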


1979 ◽  
Vol 25 (3) ◽  
pp. 394-400 ◽  
Author(s):  
J O Westgard ◽  
H Falk ◽  
T Groth

Abstract A computer-simulation study has been performed to determine how the performance characteristics of quality-control rules are affected by the presence of a between-run component of variation, the choice of control limits (calculated from within-run vs. total standard deviations), and the shape of the error distribution. When a between-run standard deviation (Sb) exists and control limits are calculated from the total standard deviation (St, which includes Sb as well as the within-run standard deviation, Sw), there is generally a loss in the ability to detect analytical disturbances or errors. With control limits calculated from Sw, there is generally an increase in the level of false rejections. The presence of a non-Gaussian error distribution appears to have considerably less effect. It is recommended that random error be controlled by use of a chi-square or range-control rule, with control limits calculated from Sw. Optimal control of systematic errors is difficult when Sb exists. An effort should be made to reduce Sb, because this will lead to an increased ability to detect analytical errors. When Sb is tolerated or accepted as part of the baseline state of operation for the analytical method, further increases in the number of control observations will be necessary to achieve a given probability for error detection.
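
A sketch of the comparison described, simulating controls with a between-run component and applying 3-SD limits calculated either from the within-run SD (Sw) or from the total SD (St); the variance components and the number of controls per run are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)

def false_rejection_rate(limit_sd, s_within=1.0, s_between=0.5,
                         n_obs=2, n_runs=100000):
    """Fraction of in-control runs with any control beyond +/- 3 * limit_sd,
    when each run shares a between-run offset on top of within-run scatter."""
    run_effect = rng.normal(0.0, s_between, size=(n_runs, 1))
    controls = run_effect + rng.normal(0.0, s_within, size=(n_runs, n_obs))
    return np.mean(np.any(np.abs(controls) > 3.0 * limit_sd, axis=1))

s_w, s_b = 1.0, 0.5
s_t = np.sqrt(s_w**2 + s_b**2)                         # St includes both components
print("limits from Sw:", false_rejection_rate(s_w))    # more false rejections
print("limits from St:", false_rejection_rate(s_t))    # fewer rejections, but wider limits cost error detection
```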


2018 ◽  
Vol 151 (4) ◽  
pp. 364-370 ◽  
Author(s):  
James O Westgard ◽  
Sten A Westgard

Abstract Objectives: To establish an objective, scientific, evidence-based process for planning statistical quality control (SQC) procedures based on quality required for a test, precision and bias observed for a measurement procedure, probabilities of error detection and false rejection for different control rules and numbers of control measurements, and frequency of QC events (or run size) to minimize patient risk. Methods: A Sigma-Metric Run Size Nomogram and Power Function Graphs have been used to guide the selection of control rules, numbers of control measurements, and frequency of QC events (or patient run size). Results: A tabular summary is provided by a Sigma-Metric Run Size Matrix, with a graphical summary of Westgard Sigma Rules with Run Sizes. Conclusion: Medical laboratories can plan evidence-based SQC practices using simple tools that relate the Sigma-Metric of a testing process to the control rules, number of control measurements, and run size (or frequency of QC events).
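
As context for the nomogram, the sigma-metric of a testing process is commonly computed as the allowable total error minus the observed bias, expressed in multiples of the observed imprecision; a small sketch with illustrative values follows (the specific numbers are not from the paper).

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric of a testing process: allowable total error minus observed
    bias, in multiples of the observed imprecision (all quantities in %)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Example: 10% allowable total error, 1.5% bias, 2% CV -> 4.25 sigma; in the
# Westgard Sigma Rules scheme a lower-sigma test calls for more QC and smaller
# run sizes than a 6-sigma test.
print(sigma_metric(10.0, 1.5, 2.0))
```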


BMC Genomics ◽  
2012 ◽  
Vol 13 (1) ◽  
pp. 5 ◽  
Author(s):  
Francisco Prosdocimi ◽  
Benjamin Linard ◽  
Pierre Pontarotti ◽  
Olivier Poch ◽  
Julie D Thompson

Nature ◽  
2018 ◽  
Vol 553 (7687) ◽  
pp. 155-155 ◽  
Author(s):  
Steven N. Goodman
