Performance characteristics of rules for internal quality control: probabilities for false rejection and error detection.

1977 ◽  
Vol 23 (10) ◽  
pp. 1857-1867 ◽  
Author(s):  
J O Westgard ◽  
T Groth ◽  
T Aronsson ◽  
H Falk ◽  
C H de Verdier

Abstract When assessing the performance of an internal quality control system, it is useful to determine the probability for false rejection (pfr) and the probability for error detection (ped). These performance characteristics are estimated here by use of a computer simulation procedure. The control rules studied include those commonly employed with Shewhart-type control charts, a cumulative sum rule, and rules applicable when a series of control measurements is treated as a single control observation. The error situations studied include an increase in random error, a systematic shift, a systematic drift, and mixtures of these. The probability for error detection depends strongly on the number of control observations and the choice of control rules. No single rule is best for detecting all errors; thus, combinations of rules are desirable. Some appropriate combinations are suggested and their performance characteristics are presented.
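The estimation approach can be illustrated with a minimal Monte Carlo sketch (not the authors' original program): generate gaussian control observations under stable and error conditions, apply a simple Shewhart-type 1_3s rule, and count rejections. All parameter values and the rule/N choice here are illustrative assumptions.

```python
# Illustrative Monte Carlo estimate of pfr and ped for a 1_3s Shewhart-type rule,
# assuming gaussian control observations expressed in SD units.
import numpy as np

rng = np.random.default_rng(0)

def estimate_p(shift=0.0, sd_factor=1.0, n_controls=2, limit=3.0, n_runs=100_000):
    """Fraction of runs rejected for a given systematic shift (in SD units)
    and random-error increase (sd_factor); reject if any control exceeds +/- limit."""
    values = rng.normal(shift, sd_factor, size=(n_runs, n_controls))
    return np.mean(np.any(np.abs(values) > limit, axis=1))

pfr = estimate_p(shift=0.0, sd_factor=1.0)         # stable operation -> false rejection
ped_shift = estimate_p(shift=2.0, sd_factor=1.0)   # 2 SD systematic shift
ped_random = estimate_p(shift=0.0, sd_factor=2.0)  # doubled random error
print(f"1_3s, N=2: pfr={pfr:.3f}, ped(2 SD shift)={ped_shift:.2f}, ped(2x SD)={ped_random:.2f}")
```

Repeating the same loop with different rules, numbers of controls and error conditions reproduces the kind of power comparison the abstract describes.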

1991 ◽  
Vol 37 (10) ◽  
pp. 1720-1724 ◽  
Author(s):  
C A Parvin

Abstract The concepts of the power function for a quality-control rule, the error detection rate, and the false rejection rate were major advances in evaluating the performance characteristics of quality-control procedures. Most early articles published in this area evaluated the performance characteristics of quality-control rules with the assumption that an intermittent error condition occurred only within the current run, as opposed to a persistent error that continued until detection. Difficulties occur when current simulation methods are applied to the persistent error case. Here, I examine these difficulties and propose an alternative method that handles persistent error conditions effectively when evaluating and quantifying the performance characteristics of a quality-control rule.
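The intermittent versus persistent distinction can be sketched numerically. Assuming a single 1_3s rule with two gaussian controls per run (an illustrative choice, not the article's specific method), a persistent error remains until some run rejects, so detection accumulates across runs as a geometric process:

```python
# Hedged sketch: per-run detection probability p, then intermittent vs. persistent behaviour.
import numpy as np

rng = np.random.default_rng(1)

def p_reject_single_run(shift, n_controls=2, limit=3.0, n_trials=200_000):
    """Probability that one run with a systematic shift (SD units) is rejected by 1_3s."""
    vals = rng.normal(shift, 1.0, size=(n_trials, n_controls))
    return np.mean(np.any(np.abs(vals) > limit, axis=1))

p = p_reject_single_run(shift=1.5)
# Intermittent error: only the affected run can signal, so ped is just p.
ped_intermittent = p
# Persistent error: detection within k runs is 1 - (1 - p)^k, and the expected
# number of runs to detection is 1/p (geometric distribution).
ped_within_3_runs = 1 - (1 - p) ** 3
expected_runs = 1 / p
print(f"per-run ped={p:.2f}, ped within 3 runs={ped_within_3_runs:.2f}, "
      f"expected runs to detection={expected_runs:.1f}")
```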


1979 ◽  
Vol 25 (3) ◽  
pp. 394-400 ◽  
Author(s):  
J O Westgard ◽  
H Falk ◽  
T Groth

Abstract A computer-simulation study has been performed to determine how the performance characteristics of quality-control rules are affected by the presence of a between-run component of variation, the choice of control limits (calculated from within-run vs. total standard deviations), and the shape of the error distribution. When a between-run standard deviation (Sb) exists and control limits are calculated from the total standard deviation (St, which includes Sb as well as the within-run standard deviation, Sw), there is generally a loss in ability to detect analytical disturbances or errors. With control limits calculated from Sw, there is generally an increase in the level of false rejections. The presence of a non-gaussian error distribution appears to have considerably less effect. It is recommended that random error be controlled by use of a chi-square or range-control rule, with control limits calculated from Sw. Optimal control of systematic errors is difficult when Sb exists. An effort should be made to reduce Sb, as this will increase the ability to detect analytical errors. When Sb is tolerated or accepted as part of the baseline state of operation for the analytical method, further increases in the number of control observations will be necessary to achieve a given probability for error detection.
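The effect of the control-limit choice can be reproduced with a small simulation sketch (illustrative, not the paper's program), using St = sqrt(Sw^2 + Sb^2) and a shared run mean drawn from the between-run distribution:

```python
# False-rejection rate with 3 SD limits based on Sw versus St when a between-run
# component Sb exists. The SD values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
sw, sb = 1.0, 1.0                         # assumed within-run and between-run SDs
st = np.sqrt(sw**2 + sb**2)               # total SD
n_runs, n_controls = 100_000, 2

run_means = rng.normal(0.0, sb, size=(n_runs, 1))                 # shared run shift
controls = run_means + rng.normal(0.0, sw, size=(n_runs, n_controls))

def rejection_rate(limit_sd):
    """Fraction of stable runs with any control beyond +/- 3 * limit_sd."""
    return np.mean(np.any(np.abs(controls) > 3 * limit_sd, axis=1))

print(f"false rejection, limits from Sw: {rejection_rate(sw):.3f}")  # clearly inflated
print(f"false rejection, limits from St: {rejection_rate(st):.3f}")  # near nominal
```

The converse experiment, adding a systematic shift and using St-based limits, shows the corresponding loss of error detection.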


2013 ◽  
Vol 66 (12) ◽  
pp. 1027-1032 ◽  
Author(s):  
Helen Kinns ◽  
Sarah Pitkin ◽  
David Housley ◽  
Danielle B Freedman

There is wide variation in laboratory practice with regard to implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios, from validation of incorrect patient results to over-investigation of falsely rejected analytical runs. This article provides a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance vary between analytes, as does the definition of a clinically significant error. Unfortunately, many laboratories currently decide what is clinically significant at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire, as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
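One common way to group assays by performance before assigning rules is the sigma metric, sigma = (TEa − |bias|) / CV. The thresholds, rule groups and analyte figures below are illustrative assumptions, not the article's exact recommendations:

```python
# Hedged sketch: assign an IQC rule group from an assay's sigma metric
# (allowable total error TEa, bias and CV all expressed in %).
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

def rule_group(sigma):
    if sigma >= 6:
        return "1_3s only, N=2"                 # excellent performance, simple rule
    if sigma >= 4:
        return "1_3s / 2_2s / R_4s, N=2"        # multirule
    return "1_3s / 2_2s / R_4s / 4_1s, N=4"     # maximum error detection needed

for analyte, tea, bias, cv in [("sodium", 2.7, 0.3, 0.8),
                               ("glucose", 6.9, 1.0, 1.5),
                               ("ALT", 15.0, 2.0, 3.0)]:
    s = sigma_metric(tea, bias, cv)
    print(f"{analyte}: sigma = {s:.1f} -> {rule_group(s)}")
```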


1989 ◽  
Vol 35 (7) ◽  
pp. 1416-1422 ◽  
Author(s):  
K Linnet

Abstract Design of control charts for the mean, the within-run component of variance, and the ratio of between-run to within-run components of variance is outlined. The between-run component of variation is the main source of imprecision for analytes determined by an enzyme immunoassay or radioimmunoassay principle; accordingly, explicit control of this component is especially relevant for these types of analytes. Power curves for typical situations are presented. I also show that a between-run component of variation puts an upper limit on the achievable power towards systematic errors. Therefore, when the between-run component of variation exceeds the within-run component, use of no more than about four controls per run is reasonable at a given concentration.
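The ceiling on power follows from the run mean having SD = sqrt(Sb^2 + Sw^2/n), which cannot fall below Sb however many controls are run. A small numeric sketch (the simple mean rule and values here are illustrative assumptions, not the paper's exact charts):

```python
# Power of a two-sided mean rule against a fixed systematic shift, as a function
# of the number of controls per run, when a between-run component Sb exists.
from math import sqrt
from statistics import NormalDist

def power_vs_shift(shift, sw, sb, n, alpha=0.01):
    """Power at false-rejection probability alpha for a shift given in Sw units."""
    se = sqrt(sb**2 + sw**2 / n)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    nd = NormalDist()
    return nd.cdf(-z + shift / se) + nd.cdf(-z - shift / se)

sw, sb, shift = 1.0, 1.0, 2.0            # within-run SD, between-run SD, shift
for n in (1, 2, 4, 8, 100):
    print(f"n={n:3d}: power = {power_vs_shift(shift, sw, sb, n):.2f}")
# Power flattens once Sw^2/n is small relative to Sb^2, which is why more than
# about four controls per run adds little when Sb exceeds Sw.
```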


1996 ◽  
Vol 42 (4) ◽  
pp. 593-597 ◽  
Author(s):  
W A Sadler ◽  
L M Murray ◽  
J G Turner

Abstract A relatively slow transition has occurred from so-called 1st-generation thyrotropin (TSH) assays (e.g., RIAs) through 2nd-generation assays (e.g., IRMAs) to 3rd-generation assays (e.g., immunochemiluminometric assays). Analysis of data from a modified internal quality-control design, followed up by a computer simulation, showed that specimen carryover has minimal effect on 2nd-generation TSH assays. However, extension of the simulation to a 3rd-generation assay showed the possibility of substantial effects in the subnormal region. Carryover of 1:1250 (0.08%), for example, may reduce the theoretical 10-fold precision improvement claimed for 3rd-generation assays to nearer fourfold. Simulation results suggest maximum allowable specimen carryover of approximately 1:10,000 (approximately 0.01%) for 3rd-generation TSH assays. We suggest that when automated specimen handling is used in a TSH assay, a well-designed carryover experiment should become a routine part of reports that claim 3rd-generation (or better) performance characteristics.
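The carryover effect in the subnormal region can be sketched with a simple model in which each measured result receives a fixed fraction of the preceding specimen's concentration. The model, concentration distribution and assay CV below are illustrative assumptions, not the authors' simulation:

```python
# Hedged sketch: observed CV at a low TSH concentration under specimen carryover.
import numpy as np

rng = np.random.default_rng(3)

def simulated_cv(true_conc, carryover, n=200_000, assay_cv=0.04, pred_median=2.0):
    """Observed CV at true_conc (mIU/L) when preceding specimens are lognormal."""
    predecessors = rng.lognormal(mean=np.log(pred_median), sigma=1.0, size=n)
    observed = true_conc + carryover * predecessors        # carryover contribution
    observed *= rng.normal(1.0, assay_cv, size=n)          # multiplicative assay imprecision
    return observed.std() / observed.mean()

for c in (0.0, 1 / 10_000, 1 / 1_250):                     # none, ~0.01%, 0.08%
    print(f"carryover {c:.4%}: CV at 0.01 mIU/L = {simulated_cv(0.01, c):.1%}")
```

Even with these rough assumptions, carryover near 0.08% visibly dominates the imprecision at 0.01 mIU/L, while 0.01% leaves it close to the intrinsic assay CV.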


Author(s):  
James O Westgard

The first essential in setting up internal quality control (IQC) of a test procedure in the clinical laboratory is to select the proper IQC procedure to implement, i.e., choosing the statistical criteria or control rules and the number of control measurements according to the quality required for the test and the observed performance of the method. Then the right IQC procedure must be properly implemented. This review focuses on strategies for planning and implementing IQC procedures in order to improve the quality of the IQC. A quantitative planning process is described that can be implemented with graphical tools such as power-function or critical-error graphs and charts of operating specifications. Finally, a total QC strategy is formulated to minimize cost and maximize quality. A general strategy for IQC implementation is recommended that employs a three-stage design, in which the first stage provides high error detection, the second stage provides low false rejection, and the third stage prescribes the length of the analytical run, making use of an algorithm involving the average of normal patients' data.
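The quantitative planning step can be summarised with the usual critical-error calculation, Delta SEcrit = (TEa − |bias|)/CV − 1.65; the resulting value is then located on power-function or critical-error graphs to pick a rule/N combination. The example figures and power comments below are illustrative placeholders, not values read from actual graphs:

```python
# Hedged sketch of the planning calculation that precedes rule selection.
def critical_systematic_error(tea_pct, bias_pct, cv_pct):
    """Systematic shift (in SD units) that must be detected so that the defect
    rate stays at or below 5% (the 1.65 term) within allowable total error TEa."""
    return (tea_pct - abs(bias_pct)) / cv_pct - 1.65

dse = critical_systematic_error(tea_pct=10.0, bias_pct=1.0, cv_pct=2.0)
print(f"critical systematic error: {dse:.2f} SD")   # 2.85 SD for these assumed figures
# In the planning process this value is compared against candidate rules' power
# curves to find a combination with high ped (e.g. >0.90) and low pfr (e.g. <0.05)
# before moving on to the three-stage implementation design.
```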

