Comparison of quality-control rules used in clinical chemistry laboratories

1993 ◽  
Vol 39 (8) ◽  
pp. 1638-1649 ◽  
Author(s):  
J Bishop ◽  
A B Nix

Abstract Numerous papers have been written to show which combinations of Shewhart-type quality-control charts are optimal for detecting systematic shifts in the mean response of a process, increases in the random error of a process, and linear drift effects in the mean response across the assay batch. One paper by Westgard et al. (Clin Chem 1977;23:1857-67) especially seems to have attracted the attention of users. Here we derive detailed results that enable the characteristics of the various Shewhart-type control schemes, including the multirule scheme (Clin Chem 1981;27:493-501), to be calculated and show that a fundamental formula proposed by Westgard et al. in the earlier paper is in error, although their derived results are not seriously wrong. We also show that, from a practical point of view, a suitably chosen Cusum scheme is near optimal for all the types and combinations of errors discussed, thereby removing the selection problem for the user.
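The Cusum scheme the authors single out as near optimal can be sketched as a tabular Cusum on standardized control values (a minimal sketch with common textbook parameters k = 0.5 and h = 4.0 in SD units, not the authors' tuned scheme):

```python
# Minimal tabular CUSUM on standardized control values (z-scores).
# k is the reference value (allowance), h the decision interval, in SD units.
def cusum_signals(z_values, k=0.5, h=4.0):
    s_hi = s_lo = 0.0
    signals = []
    for i, z in enumerate(z_values):
        s_hi = max(0.0, s_hi + z - k)   # accumulates upward shifts
        s_lo = min(0.0, s_lo + z + k)   # accumulates downward shifts
        if s_hi > h or s_lo < -h:
            signals.append(i)
            s_hi = s_lo = 0.0           # restart after a signal
    return signals

in_control = [0.2, -0.3, 0.1, 0.0, -0.1, 0.3, -0.2, 0.1]
shifted = in_control + [1.0] * 10       # sustained +1 SD systematic shift
print(cusum_signals(in_control))        # → []
print(cusum_signals(shifted))           # → [16]
```

Unlike a Shewhart chart, the Cusum accumulates small deviations, which is why one scheme can cover shifts, drift, and mixtures of errors reasonably well.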

1988 ◽  
Vol 34 (7) ◽  
pp. 1396-1406 ◽  
Author(s):  
L C Alwan ◽  
M G Bissell

Abstract Autocorrelation of clinical chemistry quality-control (Q/C) measurements causes one of the basic assumptions underlying the use of Levey-Jennings control charts to be violated and performance to be degraded. This is the requirement that the observations be statistically independent. We present a proposal for a new approach to statistical quality control that removes this difficulty. We propose to replace the current single control chart of raw Q/C data with two charts: (a) a common cause chart, representing a Box-Jenkins ARIMA time-series model of any underlying persisting nonrandomness in the process, and (b) a special cause chart of the residuals from the above model, which, being free of such persisting nonrandomness, fulfills the criteria for use of the standard Levey-Jennings plotting format and standard control rules. We provide a comparison of the performance of our proposed approach with that of current practice.
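In outline, the two-chart proposal fits a time-series model to the autocorrelated Q/C data and applies Levey-Jennings limits to the residuals. A minimal sketch, substituting a least-squares AR(1) fit for a full Box-Jenkins ARIMA analysis:

```python
import numpy as np

# Simplified stand-in for the Box-Jenkins step: fit an AR(1) "common cause"
# model by least squares, then chart the residuals ("special cause" chart)
# with standard Levey-Jennings +/- 3 SD limits.
rng = np.random.default_rng(0)

# Simulated autocorrelated Q/C series: x[t] = 0.7 * x[t-1] + noise
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.7 * x[t - 1] + rng.normal()

# Least-squares AR(1) coefficient: regress x[t] on x[t-1]
phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

residuals = x[1:] - phi * x[:-1]       # special-cause series
limit = 3 * residuals.std(ddof=1)      # Levey-Jennings 3 SD limits
out_of_control = np.flatnonzero(np.abs(residuals) > limit)

print(round(float(phi), 2))            # close to the true 0.7
print(out_of_control)                  # few or no points when in control
```

The residuals are approximately independent, so the standard plotting format and control rules apply to them even though they do not apply to the raw series.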


1979 ◽  
Vol 25 (5) ◽  
pp. 675-677 ◽  
Author(s):  
M J Healy

Abstract Results submitted to large-scale quality-control schemes are commonly judged against the mean and standard deviation (SD) of the results from other laboratories. It is desirable to ignore outlying values in estimating this mean and standard deviation, and results more than 2.5 or 3 SD from the mean are commonly rejected. I show that an outlier can so inflate the estimated SD that its presence is not detected by this method. Alternative estimators that are less influenced by outliers are described, and their application to quality-control data is discussed.
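Healy's point can be illustrated with a small example; the median and the scaled median absolute deviation (MAD) stand in here for the outlier-resistant estimators (the specific estimators in the paper may differ):

```python
import statistics

# Nine laboratories agree closely; the tenth is a gross outlier.
results = [5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 5.1, 5.0, 4.9, 9.0]

mean = statistics.mean(results)     # pulled toward the outlier
sd = statistics.stdev(results)      # inflated by the outlier
z_outlier = (9.0 - mean) / sd
print(round(z_outlier, 2))  # below 3, so a "reject beyond 3 SD" rule misses it

# Outlier-resistant alternatives: median, and the MAD scaled to estimate SD.
median = statistics.median(results)
mad = statistics.median(abs(x - median) for x in results)
robust_sd = 1.4826 * mad            # consistent for a normal-distribution SD
robust_z = (9.0 - median) / robust_sd
print(round(robust_z, 1))           # far beyond 3, so the outlier is flagged
```

The single outlier inflates the classical SD enough to hide itself, which is exactly the failure mode the paper describes.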


2007 ◽  
Vol 26 (3) ◽  
pp. 245-247 ◽  

Author(s):  
Petros Karkalousos

The Schemes of External Quality Control in Laboratory Medicine in the Balkans

Abstract There are many differences between the national External Quality Control Schemes across Europe, but the most pronounced are those between the countries of the Balkan region. These differences stem from the countries' differing political and financial development, as well as from their traditions and from how far clinical chemistry has developed in each of them. As a result, some Balkan countries have highly developed EQAS, while others have no such scheme at all. Undoubtedly, the scientific community in these countries wants to develop EQAS despite the financial and other difficulties.


1972 ◽  
Vol 18 (3) ◽  
pp. 250-257 ◽  
Author(s):  
J H Riddick ◽  
Roger Flora ◽  
Quentin L Van Meter

Abstract A system of computer-based quality-control data analysis is described, in which two-way analysis of variance is used to partition laboratory error into day-to-day, within-day, between-pools, and additivity variation. The additivity partition is described in detail, with its advantages and applications. In addition, control charts based on the two-way analysis-of-variance computations are prepared each month by computer. The program is designed to run on the IBM 1800 or 1130 computers, or on any computer with a Fortran IV compiler. Examples are presented of the use of the control charts and of the analysis-of-variance tables.
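The partition can be sketched for the simplified case of one measurement per day-pool cell (with no replicates, the residual plays the role of the additivity term, and estimating within-day error would require replicate measurements; the data are made up):

```python
import numpy as np

# Simplified two-way ANOVA partition: rows = days, columns = control pools,
# one measurement per cell. With no replicates the residual sum of squares
# plays the role of the additivity (day x pool interaction) component.
data = np.array([
    [100.0, 151.0, 200.0],
    [102.0, 149.0, 203.0],
    [ 98.0, 150.0, 198.0],
    [101.0, 152.0, 201.0],
])
n_days, n_pools = data.shape
grand = data.mean()

ss_days = n_pools * ((data.mean(axis=1) - grand) ** 2).sum()   # day-to-day
ss_pools = n_days * ((data.mean(axis=0) - grand) ** 2).sum()   # between-pools
ss_total = ((data - grand) ** 2).sum()
ss_additivity = ss_total - ss_days - ss_pools                  # additivity

print(round(float(ss_days), 2), round(float(ss_additivity), 2))
```

The sums of squares add up to the total by construction, which is what lets one monthly computation partition all the error sources at once.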


2017 ◽  
Vol 32 (1) ◽  
Author(s):  
Azamsadat Iziy ◽  
Bahram Sadeghpour Gildeh ◽  
Ehsan Monabbati

Abstract Control charts have been established as major tools for quality control and improvement in industry. It is therefore always necessary to consider an appropriate design of a control chart from an economic point of view before using the chart. The economic design of a control chart refers to the determination of three optimal parameters: the sample size, the sampling interval, and the control-limit coefficient. In this article, the double sampling (DS)
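The economic-design idea, choosing the sample size n, sampling interval h, and control-limit coefficient k to minimize expected cost, can be sketched with a toy grid search (the cost model and all constants below are hypothetical illustrations, not the DS-chart model of the article):

```python
import math

# Toy economic design: pick (n, h, k) to minimize an hourly cost made of
# sampling cost, false-alarm cost, and out-of-control detection delay.
def hourly_cost(n, h, k, shift=1.0, cost_per_sample=1.0,
                cost_false_alarm=50.0, cost_shifted_hour=10.0):
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    p_false = 2.0 * (1.0 - phi(k))                    # false alarm per run
    power = 1.0 - phi(k - shift * math.sqrt(n)) + phi(-k - shift * math.sqrt(n))
    runs_to_detect = 1.0 / max(power, 1e-12)          # mean runs until caught
    return (cost_per_sample * n / h                   # sampling cost rate
            + cost_false_alarm * p_false / h          # false-alarm cost rate
            + cost_shifted_hour * runs_to_detect * h / 100.0)  # delay cost

grid = [(n, h, k) for n in range(1, 9)
        for h in (0.5, 1.0, 2.0, 4.0)
        for k in (2.0, 2.5, 3.0, 3.5)]
best = min(grid, key=lambda p: hourly_cost(*p))
print(best)
```

The three design parameters trade off against each other (larger n detects faster but samples more; wider k reduces false alarms but delays detection), which is why the design is posed as an optimization.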


After pointing out the importance of the hygrometer, both in a scientific and a practical point of view, the author goes into the question of the advantages and disadvantages attending the use of Daniell’s hygrometer, and the relative merits of this instrument and the dry and wet-bulb thermometers. Although satisfied of the accuracy of Mr. Glaisher’s Tables (founded on the Greenwich Observations), which show at once the relation of the temperature of evaporation to that of the dew-point, he was unwilling to abandon the use of Daniell’s apparatus for that of the wet and dry-bulb thermometers, slight as is the trouble of observing them, without personal experience of the correctness of the tables from which the dew-point was to be deduced. He therefore instituted a series of perfectly comparable observations by the two methods, and in this communication gives the results obtained from them during a period of twenty months. From a comparison of the dew-points determined by the two methods, he concludes that the results show in a striking manner the extreme accuracy of Mr. Glaisher’s Tables, and afford additional testimony to the value of the Greenwich Hygrometrical Observations, and the resulting formula on which those tables are founded. The author then refers to the subject of evaporation, and gives the results of his own observations at Whitehaven during six years, viz. from 1843 to 1848 inclusive. From these he states that the mean annual amount of evaporation is 30·011 inches; and the mean quantity of rain for the same period being 45·255 inches, the depth of the water precipitated exceeds that taken up by evaporation, on the coast in latitude 54½°, by 15·244 inches.


1997 ◽  
Vol 43 (4) ◽  
pp. 594-601 ◽  
Author(s):  
Aljoscha Steffen Neubauer

Abstract A quality-control chart based on exponentially weighted moving averages (EWMA) has, in the past few years, become a popular tool for controlling inaccuracy in industrial quality control. In this paper, I explain the principles of this technique, present some numerical examples, and by computer simulation compare EWMA with other control charts currently used in clinical chemistry. The EWMA chart offers a flexible instrument for visualizing imprecision and inaccuracy and is a good alternative to other charts for detecting inaccuracy, especially where small shifts are of interest. Detection of imprecision with EWMA charts, however, requires special modification.
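The EWMA statistic and its exact time-varying control limits can be sketched as follows (lam = 0.2 and L = 3 are common textbook choices, not necessarily the paper's settings):

```python
import math

# EWMA chart sketch: each point is a weighted average of the new observation
# and the previous EWMA value, compared against exact time-varying limits.
def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    z = mu0
    signals = []
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z
        # Exact (time-varying) variance of the EWMA statistic
        var = sigma**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1)))
        if abs(z - mu0) > L * math.sqrt(var):
            signals.append(i)
    return signals

in_control = [0.3, -0.5, 0.2, 0.1, -0.4, 0.6, -0.2, 0.0, 0.4, -0.3]
shifted = in_control + [1.5] * 10   # sustained 1.5 SD systematic shift
print(ewma_chart(in_control))       # → []
print(ewma_chart(shifted))          # first signal at observation index 14
```

Because the weighted average pools information across runs, the chart responds to small sustained shifts that a single-point Shewhart rule would miss.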


1989 ◽  
Vol 35 (7) ◽  
pp. 1416-1422 ◽  
Author(s):  
K Linnet

Abstract Design of control charts for the mean, the within-run component of variance, and the ratio of between-run to within-run components of variance is outlined. The between-run component of variation is the main source of imprecision for analytes determined by an enzymo- or radioimmunoassay principle; accordingly, explicit control of this component is especially relevant for these types of analytes. Power curves for typical situations are presented. I also show that a between-run component of variation puts an upper limit on the achievable power towards systematic errors. Therefore, when the between-run component of variation exceeds the within-run component, use of no more than about four controls per run is reasonable at a given concentration.
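The upper limit on power follows from the fact that only the within-run component shrinks with the number of controls n. A small illustration with made-up variance components (sigma_b = sigma_w = 1, a 2-SD systematic shift, 1.96-SD control limits; not values from the article):

```python
import math

# SD of a run mean of n controls when a between-run component is present:
# sqrt(sigma_b^2 + sigma_w^2 / n). Only the within-run part shrinks with n.
phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF

def power(n, shift=2.0, sigma_b=1.0, sigma_w=1.0, k=1.96):
    s = math.sqrt(sigma_b**2 + sigma_w**2 / n)   # SD of the run mean
    return 1.0 - phi(k - shift / s) + phi(-k - shift / s)

for n in (1, 2, 4, 8, 100):
    print(n, round(power(n), 3))
# The gain flattens quickly: beyond about four controls per run, extra
# controls buy almost no power once the between-run component dominates.
```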


1977 ◽  
Vol 23 (10) ◽  
pp. 1857-1867 ◽  
Author(s):  
J O Westgard ◽  
T Groth ◽  
T Aronsson ◽  
H Falk ◽  
C H de Verdier

Abstract When assessing the performance of an internal quality control system, it is useful to determine the probability for false rejections (pfr) and the probability for error detection (ped). These performance characteristics are estimated here by use of a computer simulation procedure. The control rules studied include those commonly employed with Shewhart-type control charts, a cumulative sum rule, and rules applicable when a series of control measurements are treated as a single control observation. The error situations studied include an increase in random error, a systematic shift, a systematic drift, and mixtures of these. The probability for error detection is very dependent on the number of control observations and the choice of control rules. No one rule is best for detecting all errors, so combinations of rules are desirable. Some appropriate combinations are suggested and their performance characteristics are presented.
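The simulation approach can be sketched for a single simple rule, rejecting a run if any of a handful of control observations falls beyond ±2 SD (an illustration, not the authors' full rule set):

```python
import random

# Monte Carlo estimate of the probability of false rejection (pfr) and of
# error detection (ped) for one rule: reject the run if any of n_controls
# observations falls beyond +/- 2 SD.
def rejection_probability(shift_sd, n_controls=2, limit=2.0,
                          trials=50_000, seed=42):
    rng = random.Random(seed)
    rejected = 0
    for _ in range(trials):
        if any(abs(rng.gauss(shift_sd, 1.0)) > limit
               for _ in range(n_controls)):
            rejected += 1
    return rejected / trials

pfr = rejection_probability(shift_sd=0.0)   # no error present
ped = rejection_probability(shift_sd=2.0)   # 2 SD systematic shift
print(round(pfr, 3), round(ped, 3))         # pfr near 0.09, ped near 0.75
```

Repeating this for each rule and each error situation yields the pfr/ped tables from which rule combinations can be compared.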


2017 ◽  
Vol 63 (8) ◽  
pp. 1377-1387 ◽  
Author(s):  
Andreas Bietenbeck ◽  
Markus A Thaler ◽  
Peter B Luppa ◽  
Frank Klawonn

Abstract BACKGROUND In clinical chemistry, quality control (QC) often relies on measurements of control samples, but limitations, such as a lack of commutability, compromise the ability of such measurements to detect out-of-control situations. Medians of patient results have also been used for QC purposes, but it may be difficult to distinguish changes observed in the patient population from analytical errors. This study aims to combine traditional control measurements and patient medians for facilitating detection of biases.
METHODS The software package “rSimLab” was developed to simulate measurements of 5 analytes. Internal QC measurements and patient medians were assessed for detecting impermissible biases. Various control rules combined these parameters. A Westgard-like algorithm was evaluated and new rules that aggregate Z-values of QC parameters were proposed.
RESULTS Mathematical approximations estimated the required sample size for calculating meaningful patient medians. The appropriate number was highly dependent on the ratio of the spread of sample values to their center. Instead of applying a threshold to each QC parameter separately like the Westgard algorithm, the proposed aggregation of Z-values averaged these parameters. This behavior was found beneficial, as a bias could affect QC parameters unequally, resulting in differences between their Z-transformed values. In our simulations, control rules tended to outperform the simple QC parameters they combined. The inclusion of patient medians substantially improved bias detection for some analytes.
CONCLUSIONS Patient result medians can supplement traditional QC, and aggregations of Z-values are novel and beneficial tools for QC strategies to detect biases.
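The contrast between per-parameter thresholds and Z-value aggregation can be sketched as follows (the reference means, SDs, and thresholds are made-up illustrations, not values from the study):

```python
import statistics

# Each QC parameter (a control measurement and a patient-result median) is
# transformed to a Z-value against its in-control distribution.
def z_value(observed, in_control_mean, in_control_sd):
    return (observed - in_control_mean) / in_control_sd

control_z = z_value(107.0, 100.0, 4.0)   # control sample: z = 1.75
median_z = z_value(5.9, 5.5, 0.2)        # patient median: z near 2.0

# Westgard-like rule: flag only if some single parameter exceeds 2.5
westgard_flag = any(abs(z) > 2.5 for z in (control_z, median_z))

# Aggregated rule: average the Z-values, flag against a lower threshold
mean_z = statistics.mean((control_z, median_z))
aggregate_flag = abs(mean_z) > 1.8

# A moderate bias spread across both parameters trips only the aggregate
print(westgard_flag, aggregate_flag)     # → False True
```

A bias that affects the parameters unequally can leave every individual Z below its threshold while the average still reveals it, which is the benefit the study reports.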

