Patient-based quality control for glucometers using the moving sum of positive patient results and moving average

2020, Vol 30 (2), pp. 296-306
Author(s): Chun Yee Lim, Tony Badrick, Tze Ping Loh

Introduction: The capability of glucometer internal quality control (QC) to detect systematic errors (bias) of varying magnitude was evaluated, together with the moving sum of positive results (MovSum) and moving average (MA) techniques as potential alternatives. Materials and methods: The probability of error detection using routine QC and the manufacturer's control limits was investigated using historical data. MovSum and MA algorithms were developed and optimized before being evaluated through numerical simulation for false positive rate and probability of error detection. Results: When the manufacturer's default control limits (which are several times wider than the running standard deviation (SD) of the glucometer) were used, they had a 0-75% probability of detecting small errors of up to 0.8 mmol/L. The error detection capability improved to 20-100% when the running SD of the glucometer was used. At a binarization threshold of 6.2 mmol/L and block sizes of 200 to 400, MovSum had a 100% probability of detecting a bias greater than 0.5 mmol/L. Compared with MovSum, the MA technique had a lower probability of bias detection, especially for smaller bias magnitudes, and a higher false positive rate. Conclusions: The MovSum technique is suited to detecting small but clinically significant biases. Point-of-care QC should follow conventional practice by setting control limits according to the running mean and SD to allow proper error detection. Glucometer manufacturers have an active role to play in liberalizing QC settings and in enhancing middleware to facilitate patient-based QC practices.
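As a rough illustration of the MovSum technique described above, the Python sketch below binarizes each glucose result at the 6.2 mmol/L threshold and keeps a rolling sum over a fixed block, raising an alarm when the sum leaves its control limits. The function name, the simulated data and the numeric limits are illustrative assumptions, not the authors' implementation; in practice the limits would be derived from error-free historical data.

```python
import numpy as np

def movsum_alarm(results, upper, lower, threshold=6.2, block=200):
    """Moving sum of positive patient results (MovSum), minimal sketch.

    Each result is binarized at `threshold` (mmol/L); the sum of positives
    over the last `block` results is compared against control limits
    supplied by the caller (assumed values here).
    """
    flags = (np.asarray(results) > threshold).astype(int)
    csum = np.concatenate(([0], np.cumsum(flags)))
    movsum = csum[block:] - csum[:-block]            # rolling-window sum
    alarm_idx = np.where((movsum > upper) | (movsum < lower))[0] + block - 1
    return movsum, alarm_idx

# Example: a +0.6 mmol/L shift introduced after 5,000 results (illustrative)
rng = np.random.default_rng(0)
glucose = rng.normal(5.5, 1.2, 10_000)
glucose[5_000:] += 0.6
movsum, alarms = movsum_alarm(glucose, upper=80, lower=30)
print(alarms[alarms >= 5_000][:1])   # first alarm after the shift begins
```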

Author(s):  
Jiakai Liu ◽  
Chin Hon Tan ◽  
Tony Badrick ◽  
Tze Ping Loh

Abstract Background: Recently, the total prostate-specific antigen (PSA) assay used in a laboratory had a positive bias of 0.03 μg/L, which went undetected. Consequently, a number of post-prostatectomy patients with previously undetectable PSA concentrations (defined as <0.03 μg/L in that laboratory) were reported as having detectable PSA, which suggested poorer prognosis according to clinical guidelines. Methods: Through numerical simulations, we explored (1) how a small bias may evade detection by routine quality control (QC) procedures, with specific reference to the concentration of the QC material, (2) whether an 'average of normals' approach may detect such a small bias, and (3) the use of a moving sum of the number of patient results with detectable PSA as an adjunct QC procedure. Results: The lowest QC level (0.86 μg/L) available from a commercial kit had a poor probability (<10%) of detecting a bias of 0.03 μg/L regardless of the QC rule (i.e. 1:2S, 2:2S, 1:3S, 4:1S) used. The average number of patient results affected before error detection (ANPed) was high when using the average of normals approach, owing to the relatively wide control limits. By contrast, the ANPed was significantly lower for the moving sum of the number of patient results with detectable PSA. Conclusions: Laboratory practitioners should ensure their QC strategy can detect small but critical biases, which may require supplementing commercial kits with in-house preparations at the ultra-low QC levels that the kits do not cover. The moving sum of the number of patient results with a detectable result is a helpful adjunct QC tool.
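The ANPed metric above lends itself to simple Monte Carlo estimation. The sketch below, under assumed detectable-result probabilities before and after a bias, estimates the average number of patient results reported before a "moving count of detectable results" rule alarms; all parameter names and values are hypothetical, not the paper's.

```python
import numpy as np

def anped_moving_count(base_p, biased_p, block, limit, trials=500, seed=1):
    """Estimate ANPed for a moving count of detectable results.

    base_p / biased_p: probability that a result is at or above the
    detection limit (e.g. 0.03 ug/L) before and after a bias appears.
    The alarm fires when the count of detectable results within the last
    `block` results exceeds `limit`.
    """
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(trials):
        window = rng.random(block) < base_p        # error-free history
        count, n = int(window.sum()), 0
        while count <= limit:
            n += 1
            new = rng.random() < biased_p          # biased results arrive
            count += int(new) - int(window[(n - 1) % block])
            window[(n - 1) % block] = new          # ring buffer update
            if n > 100_000:                        # safety stop
                break
        run_lengths.append(n)
    return float(np.mean(run_lengths))

# e.g. detectable fraction rises from 10% to 25% after a small positive bias
print(anped_moving_count(0.10, 0.25, block=100, limit=20))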


1979, Vol 25 (3), pp. 394-400
Author(s): J O Westgard, H Falk, T Groth

Abstract A computer-simulation study has been performed to determine how the performance characteristics of quality-control rules are affected by the presence of a between-run component of variation, the choice of control limits (calculated from within-run vs. total standard deviations), and the shape of the error distribution. When a between-run standard deviation (Sb) exists and control limits are calculated from the total standard deviation (St, which includes Sb as well as the within-run standard deviation, Sw), there is generally a loss in ability to detect analytical disturbances or errors. With control limits calculated from Sw, there is generally an increase in the level of false rejections. The presence of a non-gaussian error distribution appears to have considerably less effect. It can be recommended that random error be controlled by use of a chi-square or range-control rule, with control limits calculated from Sw. Optimal control of systematic errors is difficult when Sb exists. An effort should be made to reduce Sb, as this will lead to increased ability to detect analytical errors. When Sb is tolerated or accepted as part of the baseline state of operation for the analytical method, further increases in the number of control observations will be necessary to achieve a given probability for error detection.
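The key relationship here is that the total SD combines the within-run and between-run components, St = sqrt(Sw^2 + Sb^2), so limits drawn from St are always wider than limits drawn from Sw. A small sketch with assumed values:

```python
import math

# Variance components as defined in the study: the total SD (St) combines
# the within-run SD (Sw) and the between-run SD (Sb). Values are assumed.
Sw, Sb = 1.0, 0.6
St = math.sqrt(Sw**2 + Sb**2)      # St = sqrt(Sw^2 + Sb^2), here ~1.17

mean = 100.0
for k in (2, 3):
    print(f"+/-{k} SD limits from Sw: ({mean - k*Sw:.1f}, {mean + k*Sw:.1f}); "
          f"from St: ({mean - k*St:.1f}, {mean + k*St:.1f})")
# The St-based limits are wider: fewer false rejections but poorer error
# detection -- the trade-off the simulation study quantifies.
```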


2019, Vol 57 (9), pp. 1329-1338
Author(s): Huub H. van Rossum, Daan van den Broek

Abstract Background: New moving average quality control (MA QC) optimization methods have been developed and are available for laboratories. Using these methods requires a strategy for integrating MA QC with routine internal QC. Methods: MA QC was considered only when the performance of the internal QC was limited. A flowchart was applied to determine, per test, whether MA QC should be considered. Next, MA QC was examined using the MA Generator (www.huvaros.com), and optimized MA QC procedures and corresponding MA validation charts were obtained. When a relevant systematic error was detectable within an average daily run, the MA QC was added to the QC plan. For further implementation of MA QC for continuous QC, MA QC management software was configured based on earlier proposed requirements, and protocols for MA QC alarm work-up were designed to allow the detection of temporary assay failure based on previously described experiences. Results: Based on the flowchart, ten chemistry, two immunochemistry and six hematology tests were considered for MA QC. After obtaining optimal MA QC settings and the corresponding MA validation charts, the MA QC of albumin, bicarbonate, calcium, chloride, creatinine, glucose, magnesium, potassium, sodium, total protein, hematocrit, hemoglobin, MCH, MCHC, MCV and platelets was added to the QC plans. Conclusions: The presented method allows the design and implementation of QC plans that integrate MA QC for continuous QC when internal QC has limited performance.
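A generic moving-average QC check of the kind being optimized here can be sketched as follows. This is not the MA Generator's algorithm; the window, limits and simulated sodium data are assumptions for illustration, and in practice the settings would come from per-test optimization as the paper describes.

```python
import numpy as np

def ma_qc(values, window, lower, upper):
    """Minimal moving-average QC sketch: a simple moving average of patient
    results, flagged when it leaves caller-supplied control limits."""
    v = np.asarray(values, dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(v)))
    ma = (csum[window:] - csum[:-window]) / window
    alarms = np.where((ma < lower) | (ma > upper))[0] + window - 1
    return ma, alarms

# Example: sodium results with a +2 mmol/L shift after result 3,000
rng = np.random.default_rng(2)
na = rng.normal(140, 3, 6_000)
na[3_000:] += 2.0
ma, alarms = ma_qc(na, window=100, lower=138.8, upper=141.2)
print(alarms[alarms >= 3_000][:1])   # first alarm after the shift
```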


Author(s): Joel D Smith, Tony Badrick, Francis Bowling

Background Patient-based real-time quality control (PBRTQC) techniques have been described in clinical chemistry for over 50 years. PBRTQC has a number of advantages over traditional quality control, including commutability, cost and the opportunity for real-time monitoring. However, there are few systematic investigations assessing how different PBRTQC techniques perform head-to-head. Methods In this study, we compare moving averages with and without truncation and moving medians. For analytes with skewed distributions such as alanine aminotransferase and creatinine, we also investigate the effect of Box–Cox transformation of the data. We assess the ability of each technique to detect simulated analytical bias in real patient data for multiple analytes and to retrospectively detect a real analytical shift in a creatinine and urea assay. Results For analytes with symmetrical distributions, we show that error detection is similar for a moving average with and without four standard deviation truncation limits and for a moving median. In contrast, moving averages perform poorly for right-skewed distributions such as alanine aminotransferase and creatinine, functioning only with a tight upper truncation limit. Box–Cox transformation of the data both improves the performance of moving averages and allows all data points to be used. This was also confirmed for retrospective detection of a real analytical shift in creatinine and urea. Conclusions Our study highlights the importance of careful assessment of the distribution of patient results for each analyte in a PBRTQC program, with the optimal approach depending on whether the patient result distribution is symmetrical or skewed.
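A minimal sketch of the two approaches compared for skewed analytes: a moving average over truncated data versus a moving average over Box–Cox-transformed data. The lambda value, window and simulated ALT-like data are assumptions, not the study's parameters.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform: (x**lam - 1) / lam, or log(x) when lam == 0."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def truncated_moving_average(values, window, lo=-np.inf, hi=np.inf):
    """Moving average that skips results outside the truncation limits."""
    kept = np.array([v for v in values if lo <= v <= hi])
    csum = np.concatenate(([0.0], np.cumsum(kept)))
    return (csum[window:] - csum[:-window]) / window

# Right-skewed ALT-like data (illustrative). Option 1: tight upper truncation;
# option 2: Box-Cox first (lambda = 0, i.e. a log transform), keeping all points.
rng = np.random.default_rng(3)
alt = rng.lognormal(mean=3.0, sigma=0.6, size=5_000)
ma_truncated = truncated_moving_average(alt, window=100, hi=40.0)
ma_boxcox = truncated_moving_average(boxcox(alt, 0), window=100)
```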


2021, Vol 0 (0)
Author(s): Hikmet Can Çubukçu

Abstract Objectives The present study set out to combine conventional quality control (QC) rules, the exponentially weighted moving average (EWMA), and the cumulative sum (CUSUM) with a random forest (RF) algorithm to achieve better performance, and to evaluate the performance of the models using computer simulation to aid laboratory professionals in QC procedure planning. Methods Conventional QC rules, EWMA, CUSUM, and RF models were implemented on the simulation data using an in-house algorithm. The models' performances were evaluated on 170,000 simulated QC results using outcome metrics including the probability of error detection (Ped), probability of false rejection (Pfr), average run length (ARL), and power graph. Results The highest Pfr (0.0404) belonged to the 1–2s rule. The 1–3s rule could not achieve a Ped of 0.9 for systematic errors of up to 4 SD. The random forest model had the highest Ped for systematic errors lower than 1 SD. However, given the model's ARLs, it must either be combined with conventional QC rules that have lower ARLs or be used with more than one QC measurement. Conclusions The RF model presented in this study showed acceptable Ped for most degrees of systematic error. The outcome metrics established in this study will help laboratory professionals plan internal QC.
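The two series statistics combined with the random forest in this study, EWMA and CUSUM, can be sketched as follows; the parameter values (lambda, the CUSUM allowance k, the alarm threshold) are conventional textbook choices, not necessarily those used in the paper.

```python
import numpy as np

def ewma(values, lam=0.2, target=0.0):
    """EWMA: z_i = lam * x_i + (1 - lam) * z_{i-1}, started at the target."""
    z, out = target, []
    for x in values:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return np.array(out)

def cusum(values, target=0.0, k=0.5):
    """Tabular CUSUM: one-sided sums accumulating deviations beyond the
    allowance k (in SD units when `values` are standardized)."""
    hi = lo = 0.0
    his, los = [], []
    for x in values:
        hi = max(0.0, hi + (x - target) - k)
        lo = max(0.0, lo + (target - x) - k)
        his.append(hi)
        los.append(lo)
    return np.array(his), np.array(los)

# Standardized QC results with a 1 SD shift halfway through (illustrative);
# a common alarm rule is CUSUM > 4-5, or EWMA outside its control limits.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
hi, lo = cusum(x)
print(int(np.argmax(hi > 5)))   # index where the upper CUSUM first exceeds 5
```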


2013, Vol 66 (12), pp. 1027-1032
Author(s): Helen Kinns, Sarah Pitkin, David Housley, Danielle B Freedman

There is wide variation in laboratory practice with regard to the implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios, from validation of incorrect patient results to over-investigation of falsely rejected analytical runs. This article provides a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help determine whether patient results are reliable enough to be released. The required quality and assay performance vary between analytes, as does the definition of a clinically significant error. Unfortunately, many laboratories currently decide what is clinically significant at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole routine biochemistry repertoire, as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
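One widely used way to categorise assay performance and assign rules of escalating stringency, not necessarily the exact tool presented in the article, is the sigma metric. The sketch below uses hypothetical thresholds and rule groupings for illustration only.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: (allowable total error - |bias|) / imprecision, all in %."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def suggest_iqc_rule(sigma):
    """Hypothetical grouping of assays by performance; the thresholds and
    rule assignments are illustrative, not the article's published scheme."""
    if sigma >= 6:
        return "1:3s, N=2"
    if sigma >= 4:
        return "1:2.5s, N=2"
    return "multi-rule (1:3s / 2:2s / R:4s), N=4"

# e.g. an assay with TEa 2%, bias 0.3%, CV 0.7% -> sigma ~2.4 -> multi-rule
print(suggest_iqc_rule(sigma_metric(2.0, 0.3, 0.7)))
```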


1977, Vol 23 (10), pp. 1857-1867
Author(s): J O Westgard, T Groth, T Aronsson, H Falk, C H de Verdier

Abstract When assessing the performance of an internal quality control system, it is useful to determine the probability for false rejections (pfr) and the probability for error detection (ped). These performance characteristics are estimated here by use of a computer-simulation procedure. The control rules studied include those commonly employed with Shewhart-type control charts, a cumulative sum rule, and rules applicable when a series of control measurements are treated as a single control observation. The error situations studied include an increase in random error, a systematic shift, a systematic drift, and mixtures of these. The probability for error detection is very dependent on the number of control observations and the choice of control rules. No one rule is best for detecting all errors; thus, combinations of rules are desirable. Some appropriate combinations are suggested and their performance characteristics are presented.
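Such pfr/ped estimates can be reproduced with a small Monte Carlo sketch; the rule implementations and trial counts below are illustrative assumptions rather than the paper's original procedure.

```python
import numpy as np

def rule_1_3s(run):
    """Reject if any control observation exceeds +/-3 SD."""
    return np.any(np.abs(run) > 3)

def rule_2_2s(run):
    """Reject if two consecutive controls exceed 2 SD on the same side."""
    s = np.sign(run) * (np.abs(run) > 2)
    return np.any(s[1:] * s[:-1] == 1)

def estimate_probability(rule, shift=0.0, n_controls=2, trials=50_000, seed=4):
    """Monte Carlo estimate: shift=0 gives pfr; shift>0 gives ped for a
    systematic error of `shift` SD. Control values are in SD units."""
    rng = np.random.default_rng(seed)
    runs = rng.normal(shift, 1.0, size=(trials, n_controls))
    return float(np.mean([rule(r) for r in runs]))

print(estimate_probability(rule_1_3s))             # pfr of the 1:3s rule
print(estimate_probability(rule_1_3s, shift=2.0))  # ped for a 2 SD shift
```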


2009, Vol 102 (09), pp. 593-600
Author(s): Per Petersen, Una Sølvik, Sverre Sandberg, Anne Stavelin

Summary Many primary care laboratories use point-of-care (POC) instruments to monitor patients on anticoagulant treatment. The internal analytical quality control of these instruments is often assessed by analysing lyophilised control materials and/or by sending patient samples to a local hospital laboratory for comparison (split sample). The aim of this study was to evaluate the utility of these two models of prothrombin time quality control. The models were evaluated by power functions created by computer simulations based on empirical data from 18 primary care laboratories using the POC instruments Thrombotrack, CoaguChek S, or Hemochron Jr. Signature. The control rules 1:2S and 1:3S, the exponentially weighted moving average, and deviation limits of ±10% and ±20% were evaluated by their probability of error detection and of false rejection. The total within-laboratory coefficient of variation was 3.8% and 6.9% for Thrombotrack, 8.9% and 10.5% for CoaguChek S, and 9.4% and 14.8% for Hemochron Jr. Signature for the control sample measurements and the split sample measurements, respectively. The probability of error detection was higher using a lyophilised control material than a patient split sample for all three instruments, whereas the probability of false rejection was similar; lyophilised control material should therefore be used for internal analytical quality control of prothrombin time in primary health care.
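The deviation-limit rule for split samples reduces to a single comparison; a minimal sketch, with hypothetical function and argument names:

```python
def split_sample_within_limit(poc_result, lab_result, limit=0.10):
    """Deviation-limit control for split samples: the POC result passes if
    it deviates from the hospital laboratory result by no more than `limit`
    (0.10 and 0.20 correspond to the +/-10% and +/-20% rules evaluated)."""
    return abs(poc_result - lab_result) / lab_result <= limit

print(split_sample_within_limit(2.6, 2.3))  # ~13% deviation -> False at +/-10%
```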


2019, Vol 7 (2), pp. 157-160
Author(s): Kalpesh S Tailor

A statistical quality control (SQC) chart is a graphical tool for representing data and assessing the extent of variation from the expected standard. The technique was first suggested by W. A. Shewhart of the Bell Telephone Company, based on 3σ limits. M. Harry, an engineer at Motorola, introduced the concept of six sigma in 1980. Under 6σ limits, a process is presumed to attain 3.4 or fewer defects per million opportunities. Naik V. D. and Desai J. M. proposed an alternative to the normal distribution, named the moderate distribution, whose parameters are the mean and the mean deviation. Naik V. D. and Tailor K. S. suggested the concept of 3-delta control limits and developed various control charts based on this distribution. Using these concepts, control limits based on 6-delta are suggested in this paper, and the moving average chart is studied using the 6-delta methodology. A ready-reference table of mean deviations is provided so that quality control practitioners can act quickly.
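A sketch of the 6-delta idea, with the mean deviation (delta) taking the place of the standard deviation when setting control limits; the function below illustrates the concept's mechanics only and does not reproduce the paper's tabulated constants.

```python
import numpy as np

def six_delta_limits(values):
    """Control limits built from the mean deviation (delta) instead of sigma:
    mean +/- 6 * delta, extending the 3-delta limits to 6-delta."""
    x = np.asarray(values, dtype=float)
    center = x.mean()
    delta = np.mean(np.abs(x - center))   # mean absolute deviation about the mean
    return center - 6 * delta, center + 6 * delta

lcl, ucl = six_delta_limits(np.random.default_rng(6).normal(50, 2, 500))
print(round(lcl, 1), round(ucl, 1))   # roughly 50 +/- 6 * (0.8 * 2)
```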

