EXPRESS: Application of the sigma metrics to evaluate the analytical performance of cystatin C and design a quality control strategy

Author(s):  
Qian Liu ◽  
Wenjun Zhu ◽  
Guangrong Bian ◽  
Wei Liang ◽  
Changxin Zhao ◽  
...  

Background: Sigma metrics are commonly used to evaluate laboratory quality management. In this study, we aimed to evaluate the analytical performance of cystatin C using sigma metrics and to develop an individualized quality control scheme for cystatin C levels. Methods: Bias was calculated from the samples used for external quality assessment. The coefficient of variation was calculated using 6 months of internal quality control (IQC) measurements at two levels, and the desirable specification derived from biological variation was used as the quality goal. The sigma value for cystatin C was calculated using the above data. The IQC scheme and improvement measures were formulated according to the Westgard sigma rules for batch size and the quality goal index (QGI). Results: The sigma values for cystatin C, for quality control levels 1 and 2, were 3.04 and 4.95, respectively. The 1-3s/2-2s/R-4s/4-1s/8-x multi-rules (N=4 or 2 with R=2 or 4), with a batch size of 45 patient samples, were selected as the IQC scheme for cystatin C. At the two cystatin C levels, the power function graph showed a probability of error detection of 94% and 100% and a probability of false rejection of 4% and 2%, respectively. According to the QGI of cystatin C, its precision needs to be improved. Conclusions: With a “desirable” biological variation of 6.50%, the Westgard rule 1-3s/2-2s/R-4s/4-1s/8-x (N=4 or 2 with R=2 or 4, batch size of 45), with high efficacy for error detection, is recommended as an individualized quality control scheme for cystatin C.
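The sigma metric and the quality goal index (QGI) used in this abstract follow the standard formulas; the sketch below shows the arithmetic with illustrative inputs (the TEa, bias, and CV values are hypothetical, not the study's data).

```python
# Sigma metric and quality goal index (QGI) as used in sigma-based IQC
# planning. All inputs are percentages; the numbers below are
# illustrative, not the study's results.

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (TEa - |bias|) / CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct: float, cv_pct: float) -> float:
    """QGI = bias / (1.5 * CV). By the usual interpretation, QGI < 0.8
    points to imprecision, > 1.2 to inaccuracy, and 0.8-1.2 to both."""
    return bias_pct / (1.5 * cv_pct)

# Hypothetical control level: TEa from the desirable biological-variation
# specification, bias from EQA, CV from 6 months of IQC.
sigma = sigma_metric(tea_pct=6.50, bias_pct=1.2, cv_pct=1.6)
qgi = quality_goal_index(bias_pct=1.2, cv_pct=1.6)
print(round(sigma, 2), round(qgi, 2))  # -> 3.31 0.5
```

A sigma near 3 combined with a QGI below 0.8 is the pattern the abstract reports for level 1: marginal performance whose limiting factor is precision rather than bias.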

Author(s):  
Smita Natvarbhai Vasava ◽  
Roshni Gokaldas Sadaria

Introduction: Nowadays, quality is a key aspect of clinical laboratory services, and six sigma metrics are an important quality measurement method for evaluating the performance of the clinical laboratory. Aim: To assess the analytical performance of a clinical biochemistry laboratory by utilising thyroid profile and cortisol parameters from Internal Quality Control (IQC) data and to calculate sigma values. Materials and Methods: The study was conducted at the Clinical Biochemistry Laboratory, Dhiraj General Hospital, Piparia, Gujarat, India. IQC data of the thyroid profile and cortisol were utilised retrospectively for six subsequent months (July to December 2019). The Coefficient of Variation (CV%) and bias were calculated from the IQC data, from which the sigma values were calculated. Sigma values <3, >3 and >6 indicated a poorly performing procedure, good performance and world-class performance, respectively. Results: The sigma values were estimated by calculating the mean of the six months. The mean sigma values of Thyroid Stimulating Hormone (TSH) and cortisol were >3 for six months, which indicated good performance. However, the sigma values of Triiodothyronine (T3) and Tetraiodothyronine (T4) were found to be <3, which indicated poor performance. Conclusion: The application of six sigma methodology to the thyroid profile and cortisol was evaluated and generally found to be good, whereas T3 and T4 showed low sigma values, which require a detailed root-cause analysis of the analytical process. With the help of six sigma methodology, an appropriate Quality Control (QC) programme should be designed for each parameter in clinical biochemistry laboratories. Maintaining six sigma levels is challenging for a laboratory's quality management personnel, but it will help to improve the quality level in clinical laboratories.


2019 ◽  
Vol 152 (Supplement_1) ◽  
pp. S88-S88
Author(s):  
Jose Jara Aguirre ◽  
Karl Ness ◽  
Alicia Algeciras-Schimnich

Abstract Introduction The CLSI EP15-A3 guideline “User Verification of Precision and Estimation of Bias” provides a simple experimental approach to estimate a method’s imprecision and bias. The objective is to determine whether the laboratory's precision performance for repeatability (SR) and within-laboratory imprecision (SWL) is in accordance with the manufacturer specification claims (MSCs). Objectives Evaluate the utility of the EP15-A3 protocol to verify method precision during a troubleshooting investigation and after major instrument maintenance, using a carcinoembryonic antigen (CEA) immunoassay as an example. Methods CEA was performed on the Beckman Coulter DxI (Beckman Coulter, Brea, CA). Quality control (QC) levels (L1: 2.89; L2: 21.10; L3: 39.10 ng/mL) (Bio-Rad Laboratories, Irvine, CA) were used. Each QC level was measured before and after instrument maintenance as follows: five replicates per run, one run per day, for 5 days. Imprecision estimates for SR (%CVR) and SWL (%CVWL) were calculated by one-way analysis of variance using the Analyse-it software for Microsoft Excel. Estimated imprecision was compared to the MSCs and to desirable imprecision specifications based on biological variation (BV). Results A change in the analytical performance of CEA was detected by a decreased sigma-metric indicator. After a bias problem was ruled out, the observed %CVR for L1, L2, and L3 were 7.2%, 3.7%, and 4.8%, respectively. The %CVWL were 8.3%, 5.0%, and 5.5%, which exceeded the MSC of %CVWL ~4.0% to 4.5%. After a laboratory investigation, major instrument maintenance was performed by the manufacturer. The %CVR and %CVWL estimates for L1, L2, and L3 after maintenance were 3.2%, 3.8%, 3.5% and 3.9%, 4.2%, 4.0%, respectively. After maintenance, the CEA performance was consistent with the MSC for each of the levels analyzed and within the BV imprecision goal of %CV ≤6.4.
Conclusion The CLSI EP15-A3 guideline is an alternative troubleshooting tool that can be used to investigate and verify method precision performance before and after significant instrument maintenance.
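The EP15-A3 repeatability and within-laboratory estimates come from a one-way ANOVA over the 5-day × 5-replicate design. A minimal sketch of that calculation, with made-up QC data (not the CEA results above) and assuming an equal number of replicates per run:

```python
# EP15-A3-style precision estimates from a balanced k-run x n-replicate
# experiment: repeatability SD from the within-run mean square, and
# within-laboratory SD from within-run plus between-run components.
import statistics

def ep15_precision(runs):
    """runs: list of per-day replicate lists (equal length). Returns
    (%CVR, %CVWL) relative to the grand mean."""
    n = len(runs[0])                   # replicates per run
    grand = statistics.mean(x for run in runs for x in run)
    ms_within = statistics.mean(statistics.variance(run) for run in runs)
    ms_between = n * statistics.variance([statistics.mean(run) for run in runs])
    s_r = ms_within ** 0.5                          # repeatability SD
    var_b = max((ms_between - ms_within) / n, 0.0)  # between-run variance
    s_wl = (ms_within + var_b) ** 0.5               # within-lab SD
    return 100 * s_r / grand, 100 * s_wl / grand

# Hypothetical 5-day x 5-replicate QC data for one control level.
days = [
    [2.8, 2.9, 2.9, 3.0, 2.8],
    [2.9, 3.0, 2.9, 2.8, 2.9],
    [3.0, 3.1, 3.0, 2.9, 3.0],
    [2.9, 2.8, 2.9, 2.9, 3.0],
    [3.0, 2.9, 3.0, 3.1, 2.9],
]
cvr, cvwl = ep15_precision(days)
```

Comparing `cvwl` against the manufacturer's claim (with the guideline's upper verification limit) is the acceptance step the abstract describes.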


2018 ◽  
Vol 64 (10/2018) ◽  
Author(s):  
Shukang He ◽  
Wei Wang ◽  
Haijian Zhao ◽  
Chuanbao Zhang ◽  
Falin He ◽  
...  

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Hikmet Can Çubukçu

Abstract Objectives The present study set out to build a machine learning model incorporating conventional quality control (QC) rules, the exponentially weighted moving average (EWMA), and the cumulative sum (CUSUM) with a random forest (RF) algorithm to achieve better performance, and to evaluate the performance of the models using computer simulation to aid laboratory professionals in QC procedure planning. Methods Conventional QC rules, EWMA, CUSUM, and RF models were implemented on the simulation data using an in-house algorithm. The models’ performance was evaluated on 170,000 simulated QC results using outcome metrics including the probability of error detection (Ped), the probability of false rejection (Pfr), the average run length (ARL), and power graphs. Results The highest Pfr (0.0404) belonged to the 1-2s rule. The 1-3s rule could not achieve a Ped of 0.9 until the systematic error reached 4 SD. The random forest model had the highest Ped for systematic errors lower than 1 SD. However, the model's ARLs mean that either the RF model must be combined with conventional QC rules having lower ARLs, or more than one QC measurement is required. Conclusions The RF model presented in this study showed an acceptable Ped for most degrees of systematic error. The outcome metrics established in this study will help laboratory professionals in planning internal QC.
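Two of the detectors compared in that study can be sketched briefly. This is an illustrative implementation, not the author's code: the 1-3s rule on raw z-scores, and an EWMA chart with the commonly used smoothing constant λ=0.2 and a control limit of L times the asymptotic EWMA standard deviation (L=2.7 here is an assumed, typical choice).

```python
# Illustrative QC detectors on control results expressed as z-scores
# relative to the in-control mean and SD.

def rule_1_3s(z_values):
    """Conventional 1-3s rule: reject if any point exceeds +/-3 SD."""
    return any(abs(z) > 3 for z in z_values)

def ewma_reject(z_values, lam=0.2, limit=2.7):
    """EWMA chart: reject when the exponentially weighted moving average
    exceeds +/- limit times its asymptotic SD, sqrt(lam / (2 - lam))."""
    ewma = 0.0
    sd = (lam / (2 - lam)) ** 0.5
    for z in z_values:
        ewma = lam * z + (1 - lam) * ewma
        if abs(ewma) > limit * sd:
            return True
    return False

# A persistent 1 SD systematic shift: 1-3s misses it, EWMA accumulates
# the evidence and flags it.
shifted = [1.0] * 20
print(rule_1_3s(shifted), ewma_reject(shifted))  # -> False True
```

This is exactly the trade-off the abstract's power results reflect: single-point rules need large shifts, while averaging detectors (EWMA, CUSUM, or the RF model) catch small persistent errors.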


2013 ◽  
Vol 66 (12) ◽  
pp. 1027-1032 ◽  
Author(s):  
Helen Kinns ◽  
Sarah Pitkin ◽  
David Housley ◽  
Danielle B Freedman

There is wide variation in laboratory practice with regard to the implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios, from validation of incorrect patient results to over-investigation of falsely rejected analytical runs. This article provides a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance vary between analytes, as does the definition of a clinically significant error. Unfortunately, many laboratories currently decide what is clinically significant only at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire, as assays are assigned to groups based on performance. The tools to categorise performance and to assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
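The grouping idea described here, a small set of rules assigned by measured performance, can be sketched as a simple lookup. The sigma cut-offs and rule sets below follow common Westgard sigma-rule guidance and are assumptions for illustration; a laboratory would substitute its own groupings.

```python
# Performance-based assignment of IQC rules: assays are binned by their
# sigma metric and each bin gets one of a handful of rule sets, instead
# of one blanket rule for every assay.

def assign_iqc_rules(sigma: float) -> str:
    """Return an illustrative rule set for an assay's sigma metric."""
    if sigma >= 6:
        return "1-3s, N=2"
    if sigma >= 5:
        return "1-3s/2-2s/R-4s, N=2"
    if sigma >= 4:
        return "1-3s/2-2s/R-4s/4-1s, N=4"
    # Below 4 sigma: full multi-rule plus method-improvement work.
    return "1-3s/2-2s/R-4s/4-1s/8-x, N=4, consider method improvement"

for s in (6.5, 5.2, 4.3, 3.0):
    print(s, "->", assign_iqc_rules(s))
```

Note how this matches the article's claim that three or four rule sets cover a whole biochemistry repertoire: the mapping from performance to rules has only a few outputs.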


1977 ◽  
Vol 23 (10) ◽  
pp. 1857-1867 ◽  
Author(s):  
J O Westgard ◽  
T Groth ◽  
T Aronsson ◽  
H Falk ◽  
C H de Verdier

Abstract When assessing the performance of an internal quality control system, it is useful to determine the probability for false rejections (pfr) and the probability for error detection (ped). These performance characteristics are estimated here by use of a computer simulation procedure. The control rules studied include those commonly employed with Shewhart-type control charts, a cumulative sum rule, and rules applicable when a series of control measurements are treated as a single control observation. The error situations studied include an increase in random error, a systematic shift, a systematic drift, and mixtures of these. The probability for error detection is very dependent on the number of control observations and the choice of control rules. No one rule is best for detecting all errors, thus combinations of rules are desirable. Some appropriate combinations are suggested and their performance characteristics are presented.
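The simulation idea behind these performance characteristics is straightforward to reproduce. The sketch below is a minimal Monte Carlo estimate of pfr and ped, not the paper's procedure, for the 1-3s rule with N=2 control observations per run and a hypothetical 2 SD systematic shift.

```python
# Monte Carlo estimation of pfr and ped for a simple control rule:
# simulate many analytical runs, apply the rule, and count rejections.
import random

def reject_1_3s(obs):
    """Reject the run if any control z-score exceeds +/-3 SD."""
    return any(abs(z) > 3 for z in obs)

def estimate_p(shift_sd, n_obs=2, runs=50_000, seed=1):
    """Rejection probability when controls are N(shift_sd, 1)."""
    rng = random.Random(seed)
    hits = sum(
        reject_1_3s([rng.gauss(shift_sd, 1) for _ in range(n_obs)])
        for _ in range(runs)
    )
    return hits / runs

pfr = estimate_p(0.0)  # no error present -> probability of false rejection
ped = estimate_p(2.0)  # 2 SD systematic shift -> probability of detection
```

With these settings, pfr lands near the theoretical 1 − (1 − 0.0027)² ≈ 0.005, while ped for a 2 SD shift stays well below 0.9, illustrating the abstract's point that no single rule detects all errors well and combinations are needed.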


2008 ◽  
Vol 13 (2) ◽  
pp. 69-75 ◽  
Author(s):  
Abdurrahman Coskun ◽  
Mustafa Serteser ◽  
Arno Fraterman ◽  
Ibrahim Unsal

1979 ◽  
Vol 25 (3) ◽  
pp. 394-400 ◽  
Author(s):  
J O Westgard ◽  
H Falk ◽  
T Groth

Abstract A computer-simulation study has been performed to determine how the performance characteristics of quality-control rules are affected by the presence of a between-run component of variation, the choice of control limits (calculated from within-run vs. total standard deviations), and the shape of the error distribution. When a between-run standard deviation (Sb) exists and control limits are calculated from the total standard deviation (St, which includes Sb as well as the within-run standard deviation, Sw), there is generally a loss in ability to detect analytical disturbances or errors. With control limits calculated from Sw, there is generally an increase in the level of false rejections. The presence of a non-Gaussian error distribution appears to have considerably less effect. It can be recommended that random error be controlled by use of a chi-square or range-control rule, with control limits calculated from Sw. Optimal control of systematic errors is difficult when Sb exists. An effort should be made to reduce Sb, as this will lead to increased ability to detect analytical errors. When Sb is tolerated or accepted as part of the baseline state of operation for the analytical method, then further increases in the number of control observations will be necessary to achieve a given probability for error detection.
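The Sw-versus-St trade-off can be demonstrated with a small simulation. This is an illustrative sketch with assumed values (Sw = 1, Sb = 0.5), not the paper's study design: limits from Sw raise false rejections, limits from St lower error detection.

```python
# Monte Carlo comparison of 3-SD control limits computed from the
# within-run SD (Sw) versus the total SD (St) when a between-run
# component (Sb) is present.
import random

def rejection_rate(limit_sd, shift=0.0, sw=1.0, sb=0.5,
                   runs=40_000, seed=7):
    """Fraction of runs with one control observation beyond 3*limit_sd."""
    rng = random.Random(seed)
    rejected = 0
    for _ in range(runs):
        run_effect = rng.gauss(0, sb)           # between-run component
        x = rng.gauss(shift + run_effect, sw)   # one control observation
        if abs(x) > 3 * limit_sd:
            rejected += 1
    return rejected / runs

sw = 1.0
st = (1.0 ** 2 + 0.5 ** 2) ** 0.5  # total SD including Sb

pfr_sw = rejection_rate(sw)             # false rejections, Sw limits
pfr_st = rejection_rate(st)             # false rejections, St limits
ped_sw = rejection_rate(sw, shift=2.0)  # detection of a 2 SD shift
ped_st = rejection_rate(st, shift=2.0)
```

The simulated rates show both effects from the abstract at once: `pfr_sw > pfr_st` and `ped_sw > ped_st`, so neither choice of limits is free, which is why reducing Sb itself is the recommended remedy.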

