Systematic Error Detection in Laboratory Medicine

Author(s):  
Amir Momeni-Boroujeni ◽  
Matthew R. Pincus


2002 ◽
Vol 48 (5) ◽  
pp. 691-698 ◽  
Author(s):  
Pierangelo Bonini ◽  
Mario Plebani ◽  
Ferruccio Ceriotti ◽  
Francesca Rubboli

Abstract Background: The problem of medical errors has recently received a great deal of attention, which will probably increase. In this minireview, we focus on this issue in the fields of laboratory medicine and blood transfusion. Methods: We conducted several MEDLINE queries and searched the literature by hand. Searches were limited to the last 8 years to identify results that were not biased by obsolete technology. In addition, data on the frequency and type of preanalytical errors in our institution were collected. Results: Our search revealed large heterogeneity in study designs and quality on this topic, as well as relatively few available data and the lack of a shared definition of “laboratory error” (also referred to as “blunder”, “mistake”, “problem”, or “defect”). Despite these limitations, there was considerable concordance on the distribution of errors throughout the laboratory working process: most occurred in the pre- or postanalytical phases, whereas a minority (13–32%, according to the studies) occurred in the analytical portion. The reported frequency of errors was related to how they were identified: when a careful process analysis was performed, substantially more errors were discovered than when studies relied on complaints or reports of near accidents. Conclusions: The large heterogeneity of the literature on laboratory errors, together with the prevalence of evidence that most errors occur in the preanalytical phase, suggests the implementation of a more rigorous methodology for error detection and classification and the adoption of proper technologies for error reduction. Clinical audits should be used as a tool to detect errors caused by organizational problems outside the laboratory.


1992 ◽  
Vol 38 (2) ◽  
pp. 204-210 ◽  
Author(s):  
Aristides T Hatjimihail

Abstract I have developed an interactive microcomputer simulation program for the design, comparison, and evaluation of alternative quality-control (QC) procedures. The program estimates the probabilities for rejection under different conditions of random and systematic error when these procedures are used and plots their power function graphs. It also estimates the probabilities for detection of critical errors, the defect rate, and the test yield. To allow a flexible definition of the QC procedures, it includes an interpreter. Various characteristics of the analytical process and the QC procedure can be user-defined. The program extends the concepts of the probability for error detection and of the power function to describe the results of the introduction of error between runs and within a run. The usefulness of this approach is illustrated with some examples.
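The core idea here, estimating a QC procedure's probability of rejection by simulation and plotting it as a power function, can be sketched briefly. The following Python fragment is a minimal illustration of that idea, not Hatjimihail's program (which included an interpreter for user-defined procedures); the 1-3s rule, two controls per run, and the run count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def p_rejection(rule, systematic_error_sd, n_per_run=2, n_runs=100_000):
    """Estimate the probability that a QC rule rejects a run when a
    systematic shift (in SD units) is present, via Monte Carlo simulation."""
    # Standardized control results: in-control mean 0, SD 1, plus the shift.
    runs = rng.normal(loc=systematic_error_sd, scale=1.0,
                      size=(n_runs, n_per_run))
    return np.mean([rule(run) for run in runs])

def rule_1_3s(run):
    # 1-3s: reject the run if any control observation exceeds +/-3 SD.
    return np.any(np.abs(run) > 3.0)

# Power function: probability of rejection vs. size of systematic error.
# At zero error this estimates the probability of false rejection.
for shift in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print(f"SE = {shift:.0f} SD -> P(reject) = {p_rejection(rule_1_3s, shift):.3f}")
```

Sweeping the shift and plotting the resulting probabilities yields the power function graph the abstract refers to.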


1985 ◽  
Vol 31 (2) ◽  
pp. 206-212 ◽  
Author(s):  
A S Blum

Abstract I describe a program for definitive comparison of different quality-control statistical procedures. A microcomputer simulates quality-control results generated by repetitive analytical runs. It applies various statistical rules to each result, tabulating rule breaks to evaluate rules as routinely applied by the analyst. The process repeats with increasing amounts of random and systematic error. Rates of false rejection and true error detection for currently popular statistical procedures were comparatively evaluated, together with a new multirule procedure described here. The nature of the analyst's response to out-of-control signals was also evaluated. A single-rule protocol that is as effective as the multirule protocol of Westgard et al. (Clin Chem 27:493, 1981) is reported.
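The comparison Blum describes, tabulating false-rejection and error-detection rates over simulated runs, can be hedged into a short sketch. The Python fragment below contrasts an arbitrary single rule with a simplified within-run subset of the Westgard multirule (1-3s, 2-2s, R-4s); the 2.5 SD cutoff and the two-control run are illustrative assumptions, not Blum's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_rule_2_5s(run):
    # Single rule: reject if any control result exceeds +/-2.5 SD.
    return np.any(np.abs(run) > 2.5)

def westgard_multirule(run):
    # Simplified within-run Westgard combination for two controls.
    if np.any(np.abs(run) > 3.0):                # 1-3s
        return True
    if np.all(run > 2.0) or np.all(run < -2.0):  # 2-2s: both beyond same limit
        return True
    if run.max() - run.min() > 4.0:              # R-4s: range exceeds 4 SD
        return True
    return False

def simulate(rule, shift=0.0, n_runs=200_000):
    # Two standardized control results per run, plus any systematic shift.
    runs = rng.normal(shift, 1.0, size=(n_runs, 2))
    return np.mean([rule(r) for r in runs])

for rule in (single_rule_2_5s, westgard_multirule):
    pfr = simulate(rule)             # no error present: false rejections
    ped = simulate(rule, shift=2.0)  # 2 SD systematic error: true detections
    print(f"{rule.__name__}: Pfr = {pfr:.4f}, Ped(2 SD) = {ped:.3f}")
```

Repeating the simulation across a grid of random and systematic error sizes reproduces the kind of tabulated comparison the abstract reports.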


2011 ◽  
Vol 12 (1) ◽  
pp. 25 ◽  
Author(s):  
Plamen Dragiev ◽  
Robert Nadon ◽  
Vladimir Makarenkov

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Hikmet Can Çubukçu

Abstract Objectives The present study set out to build a machine learning model that incorporates conventional quality control (QC) rules, the exponentially weighted moving average (EWMA), and the cumulative sum (CUSUM) with the random forest (RF) algorithm to achieve better performance, and to evaluate the performances of the models using computer simulation to aid laboratory professionals in QC procedure planning. Methods Conventional QC rules, EWMA, CUSUM, and RF models were implemented on the simulation data using an in-house algorithm. The models’ performances were evaluated on 170,000 simulated QC results using outcome metrics, including the probability of error detection (Ped), probability of false rejection (Pfr), average run length (ARL), and power graph. Results The highest Pfr (0.0404) belonged to the 1–2s rule. The 1–3s rule could not reach a Ped of 0.9 for systematic errors of up to 4 SD. The random forest model had the highest Ped for systematic errors lower than 1 SD. However, the ARLs of the RF model require that it be combined with conventional QC rules having lower ARLs, or that more than one QC measurement be used. Conclusions The RF model presented in this study showed acceptable Ped for most degrees of systematic error. The outcome metrics established in this study will help laboratory professionals in planning internal QC.
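The abstract's combination of EWMA, CUSUM, and a random forest can be sketched as follows. This is a hedged illustration, not the study's in-house algorithm: the window length, feature set, EWMA weight, and CUSUM reference value are assumptions, and scikit-learn's RandomForestClassifier stands in for the paper's RF model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

def ewma(z, lam=0.2):
    """Exponentially weighted moving average of standardized QC results."""
    out = np.zeros_like(z)
    prev = 0.0
    for i, x in enumerate(z):
        prev = lam * x + (1 - lam) * prev
        out[i] = prev
    return out

def cusum(z, k=0.5):
    """One-sided upper and lower CUSUM statistics (reference value k, in SD)."""
    hi = np.zeros_like(z)
    lo = np.zeros_like(z)
    for i, x in enumerate(z):
        hi[i] = max(0.0, (hi[i - 1] if i else 0.0) + x - k)
        lo[i] = min(0.0, (lo[i - 1] if i else 0.0) + x + k)
    return hi, lo

def make_example(shift):
    """Features from a window of 10 standardized QC results with a given shift."""
    z = rng.normal(shift, 1.0, size=10)
    hi, lo = cusum(z)
    # Feature vector: last raw value, last EWMA, and the two CUSUM sums.
    return [z[-1], ewma(z)[-1], hi[-1], lo[-1]]

# Label 1 = out of control (shift drawn from 0.5-4 SD), 0 = in control.
shifts = np.where(rng.random(20_000) < 0.5, 0.0,
                  rng.uniform(0.5, 4.0, 20_000))
X = np.array([make_example(s) for s in shifts])
y = (shifts > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("Training accuracy:", clf.score(X, y))
```

Evaluating such a classifier on held-out simulated windows at fixed shift sizes yields Ped and Pfr estimates directly comparable to those of the conventional rules.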

