Matrix effects and the performance and selection of quality-control procedures to monitor PO2 measurements

1996 ◽  
Vol 42 (3) ◽  
pp. 392-396 ◽  
Author(s):  
E Olafsdottir ◽  
J O Westgard ◽  
S S Ehrmeyer ◽  
K D Fallon

Abstract We have assessed how variation in the matrix of control materials would affect the error-detection and false-rejection characteristics of quality-control (QC) procedures used to monitor PO2 in blood gas measurements. To determine the expected QC performance, we generated power curves for s_mat/s_meas ratios of 0.0 to 4.0. These curves were used to estimate the probabilities of rejecting analytical runs having medically important errors, calculated from the quality required by the CLIA '88 proficiency testing criterion and the precision and accuracy expected for a typical analytical system. When s_mat/s_meas ratios are low, the effects of matrix on QC performance are not serious, permitting selection of QC procedures based on simple power curves for a single component of variation. As s_mat/s_meas ratios increase, single-rule procedures generally show a loss in error detection, whereas multirule procedures, including the 3_1s control rule, show an increase in false rejections. An optimized QC design is presented.
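
As a rough illustration of the power-curve calculation behind this study, the sketch below estimates the probability that a simple 1_2s rule rejects a run when the control material carries an added matrix component of variation. It assumes, as is conventional, that control limits are set from the total observed SD of the control material, sqrt(s_meas^2 + s_mat^2), while the medically important shift is expressed in multiples of s_meas; the rule, ratios, and shift are illustrative, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(42)

def p_reject_12s(se_shift, ratio, n_runs=100_000, n=1):
    """Estimate P(rejection) for a 1_2s rule.

    se_shift -- systematic error, in multiples of the measurement SD
    ratio    -- s_mat / s_meas, the matrix-to-measurement SD ratio
    Limits are +/- 2 * total SD, where the total SD includes the matrix
    component: sqrt(s_meas^2 + s_mat^2), with s_meas = 1 here.
    """
    s_total = np.sqrt(1.0 + ratio**2)
    x = rng.normal(se_shift, s_total, size=(n_runs, n))
    return np.mean(np.any(np.abs(x) > 2 * s_total, axis=1))

# Error detection for a 3-SD shift degrades as the matrix ratio grows:
for ratio in (0.0, 1.0, 2.0, 4.0):
    print(f"s_mat/s_meas = {ratio}: P(detect) = {p_reject_12s(3.0, ratio):.3f}")
```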

1990 ◽  
Vol 36 (2) ◽  
pp. 230-233 ◽  
Author(s):  
D D Koch ◽  
J J Oryall ◽  
E F Quam ◽  
D H Feldbruegge ◽  
D E Dowd ◽  
...  

Abstract Quality-control (QC) procedures (i.e., the decision rules used and the number of control measurements collected per run, N) have been selected for the individual tests of a multitest analyzer, to ensure that clinical or "medical usefulness" requirements for quality are met. The approach for designing appropriate QC procedures includes the following steps: (a) defining requirements for quality in the form of the "total allowable analytical error" for each test, (b) determining the imprecision of each measurement procedure, (c) calculating the medically important systematic and random errors for each test, and (d) assessing the probabilities for error detection and false rejection for candidate control procedures. In applying this approach to the Hitachi 737 analyzer, a design objective of 90% (or greater) detection of systematic errors was met for most tests (sodium, potassium, glucose, urea nitrogen, creatinine, phosphorus, uric acid, cholesterol, total protein, total bilirubin, gamma-glutamyltransferase, alkaline phosphatase, aspartate aminotransferase, lactate dehydrogenase) by use of 3.5s control limits with two control measurements per run (N = 2). For the remaining tests (albumin, chloride, total CO2, calcium), requirements for QC procedures were more stringent, and 2.5s limits (with N = 2) were selected.
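
The numeric core of steps (a) through (d) is compact enough to sketch. The values below are purely illustrative (not the Hitachi 737 data), and the z = 1.65 term is the usual allowance that caps the maximum defect rate at 5%:

```python
import numpy as np
from scipy.stats import norm

tea, bias, s = 10.0, 1.0, 2.0            # (a) quality requirement TEa and
                                         # (b) observed bias and imprecision, in %
dse_c = (tea - abs(bias)) / s - 1.65     # (c) critical systematic error, in s units

z0, n = 3.5, 2                           # (d) candidate: 3.5s limits, N = 2
p_one = norm.sf(z0 - dse_c) + norm.cdf(-z0 - dse_c)  # P(one control exceeds limits)
p_run = 1 - (1 - p_one) ** n                         # P(at least one of N exceeds)
print(f"critical SE = {dse_c:.2f} s, P(detect in one run) = {p_run:.2f}")
```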


1976 ◽  
Vol 22 (8) ◽  
pp. 1399-1401 ◽  
Author(s):  
Z L Komjathy ◽  
J C Mathies ◽  
J A Parker ◽  
H A Schreiber

Abstract We describe an evaluation of the in-use stability and short-term precision of a three-level ampuled quality-control system for monitoring pH, pCO2, and pO2 measurements on clinical blood-gas analyzers. In three hospital laboratories, 324 such ampuls were opened and allowed to stand with their contents exposed to atmospheric conditions for accurately timed intervals up to 240 s. Contents were then analyzed for pH, pCO2, and pO2. Student's t-test was used to evaluate the significance of differences observed in recoveries after timed exposure. At a significance level of P ≤ 0.05, the only significant changes observed throughout the first minute of exposure were average pO2 increases of 180 Pa (1.4 mmHg, +1.4%) and 230 Pa (1.7 mmHg, +2.9%) at levels of 13.4 and 7.7 kPa (101 and 58 mmHg), respectively. The ampuled system was found to be stable, precise, convenient, and suitable for use in the routine laboratory.
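
The statistical comparison is a plain Student's t-test on recoveries at timed exposures. A minimal sketch, with made-up numbers standing in for the ampule data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
po2_t0 = rng.normal(13.4, 0.15, size=30)           # kPa, analyzed immediately
po2_t60 = rng.normal(13.4 + 0.18, 0.15, size=30)   # kPa, after 60 s of exposure

t, p = stats.ttest_ind(po2_t0, po2_t60)
verdict = "significant" if p <= 0.05 else "not significant"
print(f"t = {t:.2f}, P = {p:.4f} -> {verdict} at P <= 0.05")
```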


1992 ◽  
Vol 38 (2) ◽  
pp. 204-210 ◽  
Author(s):  
Aristides T Hatjimihail

Abstract I have developed an interactive microcomputer simulation program for the design, comparison, and evaluation of alternative quality-control (QC) procedures. The program estimates the probabilities for rejection under different conditions of random and systematic error when these procedures are used and plots their power function graphs. It also estimates the probabilities for detection of critical errors, the defect rate, and the test yield. To allow a flexible definition of the QC procedures, it includes an interpreter. Various characteristics of the analytical process and the QC procedure can be user-defined. The program extends the concepts of the probability for error detection and of the power function to describe the results of the introduction of error between runs and within a run. The usefulness of this approach is illustrated with some examples.
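
The idea of defining QC procedures as Boolean propositions over control rules can be illustrated in a few lines. This is a toy analogue (not the author's interpreter), using the common 1_3s and 2_2s rules combined by OR:

```python
import numpy as np

def rule_1_3s(x):
    """Reject if any single control value exceeds +/- 3 SD."""
    return bool(np.any(np.abs(x) > 3))

def rule_2_2s(x):
    """Reject if two consecutive values exceed the same 2 SD limit."""
    return bool(np.any((x[:-1] > 2) & (x[1:] > 2)) or
                np.any((x[:-1] < -2) & (x[1:] < -2)))

def procedure(x):
    """The QC procedure as a Boolean proposition over its rules."""
    return rule_1_3s(x) or rule_2_2s(x)

rng = np.random.default_rng(1)
runs = rng.normal(0, 1, size=(100_000, 2))   # stable process, 2 controls per run
print("false-rejection rate:", np.mean([procedure(r) for r in runs]))
```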


Author(s):  
Z. Lari ◽  
K. Al-Durgham ◽  
A. Habib

Over the past few years, laser scanning systems have been acknowledged as the leading tools for collecting high-density 3D point clouds over physical surfaces for many different applications. However, no interpretation or scene classification is performed during the acquisition of these datasets; the collected data must therefore be processed to extract the required information. Segmentation is usually considered the fundamental step in information extraction from laser scanning data, and various approaches have been developed for segmenting 3D laser scanning data. None of them, however, is free of possible anomalies, whether from disregarding the internal characteristics of laser scanning data, improper selection of segmentation thresholds, or other problems arising during the segmentation procedure. Therefore, quality control procedures are required to evaluate the segmentation outcome and report the frequency of instances of expected problems. The few quality control techniques so far proposed for evaluating laser scanning segmentation usually require reference data and user intervention to assess segmentation results. To resolve these problems, a new quality control procedure is introduced in this paper. This procedure makes hypotheses regarding potential problems that might take place in the segmentation process, detects instances of such problems, quantifies the frequency of these problems, and suggests possible actions to remedy them. The feasibility of the proposed approach is verified through quantitative evaluation of planar and linear/cylindrical segmentation outcomes from two recently developed parameter-domain and spatial-domain segmentation techniques.
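
As a concrete (if much simplified) example of hypothesis-driven checking, the sketch below flags one common segmentation problem, over-segmentation of a single plane into multiple segments, by testing whether two planar segments have nearly coincident fitted planes. The tolerances and the least-squares fit are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]            # normal = direction of least variance

def coincident(seg_a, seg_b, ang_tol=np.deg2rad(2.0), dist_tol=0.05):
    """Hypothesis check: do two segments lie on the same plane?"""
    ca, na = fit_plane(seg_a)
    cb, nb = fit_plane(seg_b)
    angle = np.arccos(np.clip(abs(na @ nb), -1.0, 1.0))  # normals parallel?
    offset = abs((cb - ca) @ na)                          # centroids coplanar?
    return angle < ang_tol and offset < dist_tol

# Two patches sampled from the same noisy plane should be flagged:
rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, size=(200, 2))
plane = np.c_[xy, 0.01 * rng.standard_normal(200)]
print(coincident(plane[:100], plane[100:]))   # -> True (over-segmentation suspect)
```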


1981 ◽  
Vol 27 (9) ◽  
pp. 1536-1545 ◽  
Author(s):  
J O Westgard ◽  
T Groth

Abstract A computer simulation program has been developed to aid in designing and evaluating statistical control procedures. This "QC Simulator" (quality control) program permits the user to study the effects of different factors on the performance of quality-control procedures. These factors may be properties of the analytical procedure, characteristics of the instrument system, or conditions for the quality-control procedure. The performance of a control procedure is characterized by its probability for rejection, as estimated at several different magnitudes of random and systematic error. These performance characteristics are presented graphically by power functions: plots of the probability for rejection versus the size of the analytical errors. The utility of this simulation tool is illustrated by application to multirule single-value procedures, mean and range procedures, and a trend analysis procedure. Careful choice of control rules is necessary to minimize false rejections and to optimize error detection with multirule procedures. Control limits must be carefully calculated for optimum performance of mean and range procedures. The level of significance for testing control must be carefully selected for the trend analysis procedure.
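
For rules whose rejection regions are simple, the power function can even be written in closed form rather than simulated. A small sketch for a mean rule with N control observations and limits at +/- z0 standard errors (values illustrative):

```python
import numpy as np
from scipy.stats import norm

def power_mean_rule(delta_se, n=4, z0=2.58):
    """P(rejection) of a mean rule for a systematic shift delta_se (in s units).

    The mean of n controls is distributed N(delta_se * s, s / sqrt(n)),
    so the shift measured in standard-error units is delta_se * sqrt(n).
    """
    shift = delta_se * np.sqrt(n)
    return norm.sf(z0 - shift) + norm.cdf(-z0 - shift)

for d in (0.0, 1.0, 2.0, 3.0):
    print(f"shift = {d} s: P(reject) = {power_mean_rule(d):.3f}")
```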


2013 ◽  
Vol 66 (12) ◽  
pp. 1027-1032 ◽  
Author(s):  
Helen Kinns ◽  
Sarah Pitkin ◽  
David Housley ◽  
Danielle B Freedman

There is wide variation in laboratory practice with regard to the implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios, from validation of incorrect patient results to over-investigation of falsely rejected analytical runs. This article provides a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance vary between analytes, as does the definition of a clinically significant error. Unfortunately, many laboratories currently decide what is clinically significant only at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire, as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
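
One widely used way to put assays into such performance groups is the sigma metric, sigma = (TEa - |bias|) / CV, with a small set of rule assignments per band. The bands and rule choices below are illustrative conventions, not the article's exact scheme:

```python
def sigma_metric(tea, bias, cv):
    """Sigma metric from the allowable total error, bias and CV (all in %)."""
    return (tea - abs(bias)) / cv

def assign_rules(sigma):
    if sigma >= 6:
        return "1_3s, N = 2"                        # high-performing assay
    if sigma >= 4:
        return "1_3s / 2_2s / R_4s multirule, N = 2"
    return "full multirule, N = 4, plus tighter run-length review"

for name, tea, bias, cv in [("sodium", 4.0, 0.5, 0.8),
                            ("glucose", 10.0, 1.0, 2.0),
                            ("calcium", 2.4, 0.5, 1.0)]:
    s = sigma_metric(tea, bias, cv)
    print(f"{name}: sigma = {s:.1f} -> {assign_rules(s)}")
```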


2017 ◽  
Author(s):  
Aristeidis T. Chatzimichail

This doctoral thesis describes a series of tools that have been developed for the design, evaluation, and selection of optimal quality control procedures in a clinical chemistry laboratory setting. These tools include: 1) A simulation program for the design, evaluation, and comparison of alternative quality control procedures. The program allows (a) the definition of a very large number of quality control rules, and (b) the definition of quality control procedures as Boolean propositions of any degree of complexity. The thesis elucidates the ways error can be introduced into the measurements and describes the respective methods of simulation; the program thus allows study of the performance of quality control procedures when (a) there is error in all the measurements, (b) the error is introduced between two consecutive analytical runs, and (c) the error is introduced within an analytical run, between two consecutive control samples. 2) A library of fifty alternative quality control procedures. 3) A library of the power function graphs of these procedures. 4) A program for the selection of the optimal quality control procedure from the library, given an analytical process. The optimal quality control procedure is defined as the one that detects the critical errors with stated probabilities and has the minimum probability for false rejection. A new general system of equations is proposed for the calculation of the critical errors.
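
The selection criterion, meeting the stated detection probabilities at the lowest false-rejection probability, can be sketched for single-rule candidates in a few lines. This is a toy version (not the thesis software), using normal-theory probabilities and an illustrative critical error:

```python
import numpy as np
from scipy.stats import norm

def p_reject(z0, n, shift=0.0):
    """P(any of n controls falls outside +/- z0 s) under a shift (in s units)."""
    p1 = norm.sf(z0 - shift) + norm.cdf(-z0 - shift)
    return 1 - (1 - p1) ** n

dse_c, target = 3.0, 0.90                # critical systematic error, detection goal
candidates = [(z0, n) for z0 in (2.0, 2.5, 3.0, 3.5) for n in (1, 2, 4)]
feasible = [(p_reject(z0, n), z0, n) for z0, n in candidates
            if p_reject(z0, n, dse_c) >= target]
p_fr, z0, n = min(feasible)              # minimize false rejection among feasible
print(f"selected: 1_{z0}s with N = {n} (P_fr = {p_fr:.3f})")
```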


1991 ◽  
Vol 37 (10) ◽  
pp. 1720-1724 ◽  
Author(s):  
C A Parvin

Abstract The concepts of the power function for a quality-control rule, the error detection rate, and the false rejection rate were major advances in evaluating the performance characteristics of quality-control procedures. Most early articles published in this area evaluated the performance characteristics of quality-control rules with the assumption that an intermittent error condition occurred only within the current run, as opposed to a persistent error that continued until detection. Difficulties occur when current simulation methods are applied to the persistent error case. Here, I examine these difficulties and propose an alternative method that handles persistent error conditions effectively when evaluating and quantifying the performance characteristics of a quality-control rule.
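
The distinction can be made concrete with a short simulation (a sketch in the spirit of the article, not its method): under a persistent error, the shift remains in place run after run until some run is rejected, so the natural summary is the expected number of runs to detection rather than a single-run rejection probability.

```python
import numpy as np

rng = np.random.default_rng(3)

def runs_to_detection(shift, z0=3.0, n=2, max_runs=1000):
    """Number of runs until a 1_3s rule (n controls per run) rejects."""
    for run in range(1, max_runs + 1):
        x = rng.normal(shift, 1.0, size=n)
        if np.any(np.abs(x) > z0):
            return run
    return max_runs

trials = [runs_to_detection(shift=2.0) for _ in range(20_000)]
print("average run of detection:", round(np.mean(trials), 2))
```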

