Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety

2017 ◽  
Vol 63 (5) ◽  
pp. 1022-1030 ◽  
Author(s):  
Martín Yago

Abstract BACKGROUND QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. METHODS A statistical model was used to construct charts for the 1ks and X/χ2 rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. RESULTS 1ks Rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X/χ2 rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. CONCLUSIONS Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision.
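The behavior described for 1ks rules can be illustrated with a minimal numerical sketch (not the authors' model): it assumes rejection limits set at ±k times the stable total SD, an out-of-control condition that multiplies only the within-run SD by a factor f, n control results per QC event, and a between-run component expressed through the illustrative parameter ratio_br_wr.

```python
from scipy.stats import norm

def ped_1ks(k, f, n=1, ratio_br_wr=0.0):
    """Probability that a 1ks rule rejects a QC event when the within-run
    SD has increased by a factor f (illustrative model, not the paper's).

    k           -- control limit multiplier (e.g. 2, 2.5, 3)
    f           -- factor by which within-run imprecision has increased
    n           -- number of control results examined per QC event
    ratio_br_wr -- stable between-run SD as a multiple of the within-run SD
    """
    s_wr = 1.0                                    # stable within-run SD (unit scale)
    s_br = ratio_br_wr * s_wr                     # stable between-run SD
    s_stable = (s_wr**2 + s_br**2) ** 0.5         # SD used to set the +/- k*s limits
    s_ooc = ((f * s_wr)**2 + s_br**2) ** 0.5      # control SD during the error condition
    p_single = 2 * norm.sf(k * s_stable / s_ooc)  # one control outside the limits
    return 1 - (1 - p_single) ** n                # at least one of n controls rejects

if __name__ == "__main__":
    for f in (1.0, 1.5, 2.0, 3.0):
        no_br = ped_1ks(k=2.5, f=f, n=2, ratio_br_wr=0.0)
        with_br = ped_1ks(k=2.5, f=f, n=2, ratio_br_wr=0.5)
        print(f"f={f}: Ped={no_br:.3f} (no between-run), "
              f"{with_br:.3f} (between-run SD = 0.5 x within-run SD)")
```

Under this toy model, adding a sizeable stable between-run component changes the detection probability only modestly, in line with the abstract's observation that 1ks rules are minimally affected by it.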

1997 ◽  
Vol 43 (11) ◽  
pp. 2149-2154 ◽  
Author(s):  
Curtis A Parvin ◽  
Ann M Gronowski

Abstract The performance measure traditionally used in the quality-control (QC) planning process is the probability of rejecting an analytical run when an out-of-control error condition exists. A shortcoming of this performance measure is that it doesn't allow comparison of QC strategies that define analytical runs differently. Accommodating different analytical run definitions is straightforward if QC performance is measured in terms of the average number of patient samples to error detection, or the average number of patient samples containing an analytical error that exceeds total allowable error. Using these performance measures to investigate the impact of different analytical run definitions on QC performance demonstrates that, during routine QC monitoring, the length of the interval between QC tests can have a major influence on the expected number of unacceptable results produced during the existence of an out-of-control error condition.
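A minimal sketch of these alternative performance measures, under simplifying assumptions that are mine rather than the paper's: QC is tested every n_between patient samples, the error condition begins at a point uniformly distributed within a QC interval, each QC event detects it independently with probability p_ed, and the count covers results produced before detection (whether or not they are later corrected). Function names and the numbers used are illustrative.

```python
def avg_patients_to_detection(n_between, p_ed):
    """Average number of patient samples produced between the onset of an
    out-of-control condition and its detection (simplified model)."""
    # Half an interval elapses, on average, before the first QC event, and
    # (1/p_ed - 1) further full intervals elapse before detection.
    return n_between * (1.0 / p_ed - 0.5)

def avg_unacceptable_results(n_between, p_ed, p_exceeds_tea):
    """Average number of results produced before detection whose analytical
    error exceeds the total allowable error (TEa), given the probability
    p_exceeds_tea that a result produced during the error is unacceptable."""
    return avg_patients_to_detection(n_between, p_ed) * p_exceeds_tea

if __name__ == "__main__":
    # Same QC rule (same p_ed), two run definitions: QC every 50 vs 200 samples.
    for nb in (50, 200):
        print(nb, avg_unacceptable_results(nb, p_ed=0.5, p_exceeds_tea=0.2))
```

With the same QC rule the per-event rejection probability is identical in both runs of the example, yet the expected number of unacceptable results quadruples when the interval between QC events is four times longer, which is the point the abstract makes.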


2016 ◽  
Vol 62 (7) ◽  
pp. 959-965 ◽  
Author(s):  
Martín Yago ◽  
Silvia Alcover

Abstract BACKGROUND According to the traditional approach to statistical QC planning, the performance of QC procedures is assessed in terms of their probability of rejecting an analytical run that contains critical size errors (PEDC). Recently, the maximum expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition [Max E(NUF)] has been proposed as an alternative QC performance measure because it aligns more closely with the current introduction of risk management concepts for QC planning in the clinical laboratory. METHODS We used a statistical model to investigate the relationship between PEDC and Max E(NUF) for simple QC procedures widely used in clinical laboratories and to construct charts relating Max E(NUF) to the capability of the analytical process, which allow for QC planning based on the risk of harm to a patient due to the report of erroneous results. RESULTS A QC procedure shows nearly the same Max E(NUF) value when used for controlling analytical processes with the same capability, and there is a close relationship between PEDC and Max E(NUF) for simple QC procedures; therefore, the value of PEDC can be estimated from the value of Max E(NUF) and vice versa. QC procedures selected for their high PEDC value are also characterized by a low value for Max E(NUF). CONCLUSIONS The PEDC value can be used to estimate the probability of patient harm, allowing for the selection of appropriate QC procedures in QC planning based on risk management.
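The reported relationship can be mimicked with a small numerical sketch. The model below is a simplification built on stated assumptions of mine, not the authors' exact model: an unbiased process with capability sigma = TEa / SD, a 1ks rule with n controls per QC event, QC every n_between patient results, correction of the results produced since the last accepted QC event whenever a rejection occurs, and Max E(NUF) obtained by a grid search over systematic error sizes; all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def p_unacceptable(sigma, se):
    """P(result exceeds TEa) for an unbiased process with capability sigma
    (= TEa / analytical SD) under a systematic shift of se SDs."""
    return norm.sf(sigma - se) + norm.cdf(-sigma - se)

def ped_1ks(k, n, se):
    """Detection probability of a 1ks rule (n controls per QC event)
    for a systematic shift of se SDs."""
    return 1.0 - (norm.cdf(k - se) - norm.cdf(-k - se)) ** n

def e_nuf(sigma, k, n, se, n_between=100):
    """Simplified expected increase in unacceptable results finally reported
    while a shift of se SDs goes undetected, assuming QC every n_between
    results and correction of the results produced since the last accepted
    QC event whenever a rejection occurs."""
    ped = ped_1ks(k, n, se)
    extra_risk = p_unacceptable(sigma, se) - p_unacceptable(sigma, 0.0)
    return n_between * (1.0 - ped) * (1.0 / ped - 0.5) * extra_risk

if __name__ == "__main__":
    k, n = 2.5, 2
    for sigma in (3, 4, 6):
        max_enuf = max(e_nuf(sigma, k, n, se) for se in np.linspace(0.05, 8.0, 800))
        pedc = ped_1ks(k, n, sigma - 1.65)   # detection of the critical shift
        print(f"sigma={sigma}: PEDC={pedc:.2f}  MaxE(NUF)={max_enuf:.2f}")
```

Running it reproduces the pattern the abstract describes: processes with higher capability give both a higher PEDC and a much lower Max E(NUF), so one value can serve as a proxy for the other.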


2008 ◽  
Vol 54 (12) ◽  
pp. 2049-2054 ◽  
Author(s):  
Curtis A Parvin

Abstract Background: The traditional measure used to evaluate QC performance is the probability of rejecting an analytical run that contains a critical out-of-control error condition. The probability of rejecting an analytical run, however, is not affected by changes in QC-testing frequency. A different performance measure is necessary to assess the impact of the frequency of QC testing. Methods: I used a statistical model to define in-control and out-of-control processes, laboratory testing modes, and quality control strategies. Results: The expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition is a performance measure that is affected by changes in QC-testing frequency. I derived this measure for different out-of-control error conditions and laboratory testing modes and showed that a worst-case expected increase in the number of unacceptable patient results reported can be estimated. The laboratory thus has the ability to design QC strategies that limit the expected number of unacceptable patient results reported. Conclusions: To assess the impact of the frequency of QC testing on QC performance, it is necessary to move beyond thinking in terms of the probability of accepting or rejecting analytical runs. A performance measure based on the expected increase in the number of unacceptable patient results reported has the dual advantage of objectively assessing the impact of changes in QC-testing frequency and putting focus on the quality of reported patient results rather than the quality of laboratory batches.
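One simplified closed form of this performance measure, consistent with the description above but stated here as an assumption rather than the paper's exact derivation, makes the dependence on QC-testing frequency explicit. Let ΔP_E be the increase in the probability of producing an unacceptable result, N_B the number of patient results between QC events, and P_ed the per-event detection probability, and assume results produced since the last accepted QC event are corrected when a rejection occurs:

```latex
E(N_{uf}) \;\approx\; \Delta P_E \, N_B \,\bigl(1 - P_{ed}\bigr)\!\left(\frac{1}{P_{ed}} - \frac{1}{2}\right),
\qquad
\operatorname{Max} E(N_{uf}) \;=\; \max_{\text{error size}} E(N_{uf})
```

P_ed does not depend on N_B, so a run-rejection measure is blind to QC-testing frequency, whereas E(N_uf) scales linearly with N_B; taking the maximum over error sizes gives the worst-case figure a laboratory can design its QC strategy against.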


2020 ◽  
Vol 58 (9) ◽  
pp. 1517-1523
Author(s):  
Martín Yago ◽  
Carolina Pla

Abstract BACKGROUND Statistical quality control (SQC) procedures generally use rejection limits centered on the stable mean of the results obtained for a control material by the analyzing instrument. However, for instruments with significant bias, re-centering the limits on a different value could improve the control procedures from the viewpoint of patient safety. METHODS A statistical model was used to assess the effect of shifting the rejection limits of the control procedure relative to the instrument mean on the number of erroneous results reported as a result of an increase in the systematic error of the measurement procedure due to an out-of-control condition. The behaviors of control procedures of type 1ks (k = 2, 2.5, 3) were studied when applied to analytical processes with different capabilities (σ = 3, 4, 6). RESULTS For measuring instruments with bias, shifting the rejection limits in the direction opposite to the bias improves the ability of the quality control procedure to limit the risk posed to patients in a systematic out-of-control condition. The maximum benefit is obtained when the displacement is equal to the bias of the instrument, that is, when the rejection limits are centered on the reference mean of the control material. The strategy is sensitive to error in estimating the bias. Shifting the limits more than the instrument's bias disproportionately increases the risk to patients. This effect should be considered in SQC planning for systems running the same test on multiple instruments. CONCLUSIONS Centering the control rule on the reference mean is a potentially useful strategy for SQC planning based on risk management for measuring instruments with significant and stable uncorrected bias. Low uncertainty in estimating bias is necessary for this approach not to be counterproductive.
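A small numerical sketch (illustrative assumptions of mine, not the authors' model) of why re-centering helps: with a stable bias of one analytical SD, compare the probability that a single-control 1_2.5s QC event flags an additional shift in the same direction as the bias when the ±k limits are centered on the instrument mean versus on the reference mean. All quantities are in multiples of the analytical SD and the chosen values are arbitrary.

```python
from scipy.stats import norm

def ped_with_centered_limits(k, bias, shift, center, n=1):
    """Detection probability of a 1ks rule whose +/-k limits are centered at
    `center` (0 = reference mean of the control, `bias` = instrument mean),
    for an instrument with stable bias `bias` undergoing an additional
    out-of-control shift `shift`. All values in multiples of the analytical
    SD (illustrative model)."""
    mean = bias + shift                                   # mean of control results
    p_in = norm.cdf(center + k - mean) - norm.cdf(center - k - mean)
    return 1.0 - p_in ** n

if __name__ == "__main__":
    k, bias = 2.5, 1.0
    for shift in (1.0, 2.0):          # shifts in the same direction as the bias
        on_instrument = ped_with_centered_limits(k, bias, shift, center=bias)
        on_reference = ped_with_centered_limits(k, bias, shift, center=0.0)
        print(f"shift={shift}: Ped={on_instrument:.2f} (limits on instrument mean), "
              f"{on_reference:.2f} (limits on reference mean)")
```

Shifts in the direction of the bias push the total error furthest beyond the allowable limit, so detecting them sooner is what reduces patient risk; the trade-off, as the abstract notes, is slower detection of shifts in the opposite, less harmful direction, and moving the limits further than the true bias tips the balance the wrong way.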


1990 ◽  
Vol 36 (2) ◽  
pp. 230-233 ◽  
Author(s):  
D D Koch ◽  
J J Oryall ◽  
E F Quam ◽  
D H Feldbruegge ◽  
D E Dowd ◽  
...  

Abstract Quality-control (QC) procedures (i.e., decision rules used, numbers of control measurements collected per run) have been selected for individual tests of a multitest analyzer, to see that clinical or "medical usefulness" requirements for quality are met. The approach for designing appropriate QC procedures includes the following steps: (a) defining requirements for quality in the form of the "total allowable analytical error" for each test, (b) determining the imprecision of each measurement procedure, (c) calculating the medically important systematic and random errors for each test, and (d) assessing the probabilities for error detection and false rejection for candidate control procedures. In applying this approach to the Hitachi 737 analyzer, a design objective of 90% (or greater) detection of systematic errors was met for most tests (sodium, potassium, glucose, urea nitrogen, creatinine, phosphorus, uric acid, cholesterol, total protein, total bilirubin, gamma-glutamyltransferase, alkaline phosphatase, aspartate aminotransferase, lactate dehydrogenase) by use of 3.5s control limits with two control measurements per run (N). For the remaining tests (albumin, chloride, total CO2, calcium), requirements for QC procedures were more stringent, and 2.5s limits (with N = 2) were selected.
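Steps (c) and (d) of this design approach can be sketched as follows. The allowable total error, bias, and CV used below are hypothetical, not the paper's Hitachi 737 data, and the 1.65 term reflects the common convention of tolerating at most 5% of results beyond the allowable error.

```python
from scipy.stats import norm

def critical_systematic_error(tea, bias, sd):
    """Medically important systematic shift (in multiples of the SD): the
    shift that would leave 5% of results beyond the allowable total error."""
    return (tea - abs(bias)) / sd - 1.65

def ped_1ks(k, n, delta_se):
    """Probability that a 1ks rule with n controls per run detects a
    systematic shift of delta_se SDs."""
    return 1.0 - (norm.cdf(k - delta_se) - norm.cdf(-k - delta_se)) ** n

if __name__ == "__main__":
    # Hypothetical analyte (not the paper's data): allowable total error 12%,
    # bias 0.5%, CV 2%.
    dse = critical_systematic_error(tea=12.0, bias=0.5, sd=2.0)
    for k in (3.5, 2.5):
        print(f"1_{k}s, N=2: dSEcrit={dse:.2f} SD, Ped={ped_1ks(k, 2, dse):.2f}")
```

With more demanding quality requirements or larger imprecision the critical shift shrinks, wide 3.5s limits no longer reach 90% detection, and tighter limits such as 2.5s are needed, which mirrors the split the abstract reports between the two groups of tests.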


Methodology ◽  
2007 ◽  
Vol 3 (1) ◽  
pp. 14-23 ◽  
Author(s):  
Juan Ramon Barrada ◽  
Julio Olea ◽  
Vicente Ponsoda

Abstract. The Sympson-Hetter (1985) method provides a means of controlling the maximum exposure rate of items in Computerized Adaptive Testing. Through a series of simulations, control parameters are set that determine the probability that an item, once selected, is actually administered. This method presents two main problems: it requires a long computation time to calculate the parameters, and the achieved maximum exposure rate is slightly above the fixed limit. Van der Linden (2003) presented two alternatives that appear to solve both problems. The impact of these methods on measurement accuracy had not yet been tested. We show that these methods over-restrict the exposure of some highly discriminating items and thus decrease accuracy. It is also shown that, when the desired maximum exposure rate is near the minimum possible value, these methods yield an empirical maximum exposure rate clearly above the goal. A new method, based on an initial estimation of the probability of administration and the probability of selection of the items with the restricted method (Revuelta & Ponsoda, 1998), is presented in this paper. It can be used with the Sympson-Hetter method and with the two van der Linden methods. When used with Sympson-Hetter, it speeds the convergence of the control parameters without decreasing accuracy.
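The mechanism being calibrated, probabilistic administration after selection plus iterative adjustment of the control parameters by simulation, can be sketched roughly as below. This is a toy illustration under many simplifying assumptions of mine: a randomly generated 2PL item bank, maximum-information selection with the ability estimate fixed at the simulated true ability, no response modeling, and a crude estimate of the selection probability. It does not reproduce the Sympson-Hetter, van der Linden, or Revuelta-Ponsoda procedures compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2PL item bank: discriminations a, difficulties b.
n_items, test_len, r_max = 200, 20, 0.25
a = rng.uniform(0.5, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)

def information(theta):
    """Fisher information of every item at ability theta (2PL model)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def simulate_exposure(K, n_examinees=2000):
    """Run a toy maximum-information CAT with Sympson-Hetter filtering and
    return each item's observed exposure rate."""
    administered_count = np.zeros(n_items)
    for _ in range(n_examinees):
        theta = rng.normal()                     # true ability, also used as the estimate
        order = np.argsort(-information(theta))  # items ranked by information
        administered = 0
        for i in order:
            if administered == test_len:
                break
            if rng.random() <= K[i]:             # Sympson-Hetter filter
                administered_count[i] += 1
                administered += 1
            # if the filter rejects, the next-best item is considered instead
    return administered_count / n_examinees

# Iterative calibration of the exposure-control parameters K.
K = np.ones(n_items)
for _ in range(10):
    exposure = simulate_exposure(K)
    selection = np.minimum(exposure / np.maximum(K, 1e-9), 1.0)  # rough P(selected)
    K = np.where(selection > r_max, r_max / np.maximum(selection, 1e-9), 1.0)

print("maximum observed exposure rate:", round(simulate_exposure(K).max(), 3))
```

The paper's contribution concerns how the starting values for this calibration are chosen and how over-restricting highly discriminating items degrades accuracy; the loop above only shows the kind of iteration whose convergence is being accelerated.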


2020 ◽  
Vol 23 (11) ◽  
pp. 1269-1290
Author(s):  
A.A. Turgaeva

Subject. This article analyzes the business processes of an insurance company, examining how they operate and identifying key areas of activity. Objectives. The article aims to describe selected business processes in insurance, highlighting their participants, lines of activity, and sequence of procedures. It analyzes the Settlement of Losses business process, one of the significant business processes in an insurance company. Methods. For the study, I used the methods of induction and deduction, analogy, and the systems approach. Results. Based on the analysis and description of the company's business processes and the identification of the key elements and steps that bear on the effectiveness of decisions, the article identifies the Entry and Exit checkpoints, the direction of activity, and the resources of the Settlement of Losses process. Conclusions. Applying the categories into which business processes are split makes it possible to develop step-by-step regulation for all processes and suitable control procedures for different operations. The checkpoints presented for different steps of the business process will help identify weaknesses and eliminate them through re-checking at the relevant point.


2020 ◽  
Vol 41 (5) ◽  
pp. 604-607 ◽  
Author(s):  
Mark D. Lesher ◽  
Cory M. Hale ◽  
Dona S. S. Wijetunge ◽  
Matt R. England ◽  
Debra S. Myers ◽  
...  

Abstract We characterized the impact of removal of the ESBL designation from microbiology reports on inpatient antibiotic prescribing. Definitive prescribing of carbapenems decreased from 48.4% to 16.1% (P = .01), and that of β-lactam–β-lactamase inhibitor combinations increased from 19.4% to 61.3% (P = .002). Our findings confirm the importance of collaboration between microbiology and antimicrobial stewardship programs.

